Dataset schema (one row per paper–review pair):

| Column | Type |
|---|---|
| paper_id | string (9–13 characters) |
| venue | string (171 classes) |
| year | string (7 classes) |
| paper_title | string (0–188 characters) |
| paper_authors | string (4–1.01k characters) |
| paper_abstract | string (0–5k characters) |
| paper_keywords | string (2–679 characters) |
| paper_content | string (0–100k characters) |
| review_id | string (9–12 characters) |
| review_title | string (0–500 characters) |
| review_rating | string (92 classes) |
| review_text | string (0–28.3k characters) |
| review_confidence | string (21 classes) |
paper_id: SJU4ayYgl
venue: ICLR.cc/2017/conference
year: 2017
paper_title: Semi-Supervised Classification with Graph Convolutional Networks
paper_authors: ["Thomas N. Kipf", "Max Welling"]
paper_abstract: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
paper_keywords: ["Deep learning", "Semi-Supervised Learning"]
paper_content:

1 INTRODUCTION

We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes. This problem can be framed as graph-based semi-supervised learning, where label information is smoothed over the graph via some form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al., 2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:

$\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{reg}$, with $\mathcal{L}_{reg} = \sum_{i,j} A_{ij} \, \| f(X_i) - f(X_j) \|^2 = f(X)^\top \Delta f(X)$. (1)

Here, $\mathcal{L}_0$ denotes the supervised loss w.r.t. the labeled part of the graph, $f(\cdot)$ can be a neural network-like differentiable function, $\lambda$ is a weighing factor and $X$ is a matrix of node feature vectors $X_i$. $\Delta = D - A$ denotes the unnormalized graph Laplacian of an undirected graph $G = (V, E)$ with $N$ nodes $v_i \in V$, edges $(v_i, v_j) \in E$, an adjacency matrix $A \in \mathbb{R}^{N \times N}$ (binary or weighted) and a degree matrix $D_{ii} = \sum_j A_{ij}$. The formulation of Eq. 1 relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity, but could contain additional information.

In this work, we encode the graph structure directly using a neural network model $f(X, A)$ and train on a supervised target $\mathcal{L}_0$ for all nodes with labels, thereby avoiding explicit graph-based regularization in the loss function. Conditioning $f(\cdot)$ on the adjacency matrix of the graph will allow the model to distribute gradient information from the supervised loss $\mathcal{L}_0$ and will enable it to learn representations of nodes both with and without labels.
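As a concrete illustration of the regularizer in Eq. 1 — our sketch, not code from the paper — the Laplacian smoothness penalty for a given embedding matrix can be computed as follows, assuming a scipy sparse adjacency matrix:

```python
import numpy as np
import scipy.sparse as sp

def laplacian_reg(A, fX):
    """Graph Laplacian smoothness penalty of Eq. 1.

    A  : (N, N) sparse symmetric adjacency matrix (binary or weighted)
    fX : (N, F) node embeddings f(X)

    Over ordered pairs, sum_{i,j} A_ij ||f(X_i) - f(X_j)||^2
    equals 2 * tr(f(X)^T Delta f(X)) with Delta = D - A.
    """
    deg = np.asarray(A.sum(axis=1)).flatten()
    delta = sp.diags(deg) - A          # unnormalized graph Laplacian
    return 2.0 * np.trace(fX.T @ (delta @ fX))
```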
Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs and show how it can be motivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011). Secondly, we demonstrate how this form of a graph-based neural network model can be used for fast and scalable semi-supervised classification of nodes in a graph. Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.

2 FAST APPROXIMATE CONVOLUTIONS ON GRAPHS

In this section, we provide theoretical motivation for a specific graph-based neural network model $f(X, A)$ that we will use in the rest of this paper. We consider a multi-layer Graph Convolutional Network (GCN) with the following layer-wise propagation rule:

$H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\big)$. (2)

Here, $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph $G$ with added self-connections. $I_N$ is the identity matrix, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ and $W^{(l)}$ is a layer-specific trainable weight matrix. $\sigma(\cdot)$ denotes an activation function, such as $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. $H^{(l)} \in \mathbb{R}^{N \times D}$ is the matrix of activations in the $l$-th layer; $H^{(0)} = X$. In the following, we show that the form of this propagation rule can be motivated[1] via a first-order approximation of localized spectral filters on graphs (Hammond et al., 2011; Defferrard et al., 2016).

2.1 SPECTRAL GRAPH CONVOLUTIONS

We consider spectral convolutions on graphs defined as the multiplication of a signal $x \in \mathbb{R}^N$ (a scalar for every node) with a filter $g_\theta = \mathrm{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^N$ in the Fourier domain, i.e.:

$g_\theta \star x = U g_\theta U^\top x$, (3)

where $U$ is the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{-1/2} A D^{-1/2} = U \Lambda U^\top$, with a diagonal matrix of its eigenvalues $\Lambda$ and $U^\top x$ being the graph Fourier transform of $x$. We can understand $g_\theta$ as a function of the eigenvalues of $L$, i.e. $g_\theta(\Lambda)$. Evaluating Eq. 3 is computationally expensive, as multiplication with the eigenvector matrix $U$ is $O(N^2)$. Furthermore, computing the eigendecomposition of $L$ in the first place might be prohibitively expensive for large graphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that $g_\theta(\Lambda)$ can be well-approximated by a truncated expansion in terms of Chebyshev polynomials $T_k(x)$ up to $K$-th order:

$g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta'_k T_k(\tilde{\Lambda})$, (4)

with a rescaled $\tilde{\Lambda} = \frac{2}{\lambda_{max}} \Lambda - I_N$. $\lambda_{max}$ denotes the largest eigenvalue of $L$. $\theta' \in \mathbb{R}^{K}$ is now a vector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as $T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation.

Going back to our definition of a convolution of a signal $x$ with a filter $g_{\theta'}$, we now have:

$g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_k T_k(\tilde{L}) x$, (5)

with $\tilde{L} = \frac{2}{\lambda_{max}} L - I_N$, as can easily be verified by noticing that $(U \Lambda U^\top)^k = U \Lambda^k U^\top$. Note that this expression is now $K$-localized since it is a $K$-th-order polynomial in the Laplacian, i.e. it depends only on nodes that are at maximum $K$ steps away from the central node ($K$-th-order neighborhood). The complexity of evaluating Eq. 5 is $O(|E|)$, i.e. linear in the number of edges. Defferrard et al. (2016) use this $K$-localized convolution to define a convolutional neural network on graphs.

2.2 LAYER-WISE LINEAR MODEL

A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity.

[1] We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm (Weisfeiler & Lehmann, 1968) in Appendix A.
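To make the filtering of Eq. 5 concrete, here is a minimal sketch (our illustration, not the paper's code) that evaluates the Chebyshev recurrence with sparse matrix–vector products, so each order costs $O(|E|)$; the normalized Laplacian, the signal and the coefficient vector are assumed given:

```python
import scipy.sparse as sp

def cheb_filter(L, x, theta, lmax=2.0):
    """Approximate spectral filtering g_theta' * x of Eq. 5.

    L     : (N, N) sparse normalized graph Laplacian
    x     : (N,) signal, one scalar per node
    theta : (K+1,) Chebyshev coefficients theta'_k, K >= 1 assumed
    lmax  : largest eigenvalue of L (the paper later approximates it by 2)
    """
    N = L.shape[0]
    L_tilde = (2.0 / lmax) * L - sp.eye(N)   # rescale spectrum to [-1, 1]
    T_prev, T_curr = x, L_tilde @ x          # T_0(L~) x and T_1(L~) x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        # Chebyshev recurrence: T_k = 2 L~ T_{k-1} - T_{k-2}
        T_next = 2.0 * (L_tilde @ T_curr) - T_prev
        out = out + theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out
```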
Now, imagine we limited the layer-wise convolution operation to $K = 1$ (see Eq. 5), i.e. a function that is linear w.r.t. $L$ and therefore a linear function on the graph Laplacian spectrum.

In this way, we can still recover a rich class of convolutional filter functions by stacking multiple such layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshev polynomials. We intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions, such as social networks, citation networks, knowledge graphs and many other real-world graph datasets. Additionally, for a fixed computational budget, this layer-wise linear formulation allows us to build deeper models, a practice that is known to improve modeling capacity on a number of domains (He et al., 2016).

In this linear formulation of a GCN we further approximate $\lambda_{max} \approx 2$, as we can expect that neural network parameters will adapt to this change in scale during training. Under these approximations Eq. 5 simplifies to:

$g_{\theta'} \star x \approx \theta'_0 x + \theta'_1 (L - I_N) x = \theta'_0 x - \theta'_1 D^{-1/2} A D^{-1/2} x$, (6)

with two free parameters $\theta'_0$ and $\theta'_1$. The filter parameters can be shared over the whole graph. Successive application of filters of this form then effectively convolve the $k$-th-order neighborhood of a node, where $k$ is the number of successive filtering operations or convolutional layers in the neural network model.

In practice, it can be beneficial to constrain the number of parameters further to address overfitting and to minimize the number of operations (such as matrix multiplications) per layer. This leaves us with the following expression:

$g_\theta \star x \approx \theta \big( I_N + D^{-1/2} A D^{-1/2} \big) x$, (7)

with a single parameter $\theta = \theta'_0 = -\theta'_1$. Note that $I_N + D^{-1/2} A D^{-1/2}$ now has eigenvalues in the range $[0, 2]$. Repeated application of this operator can therefore lead to numerical instabilities and exploding/vanishing gradients when used in a deep neural network model. To alleviate this problem, we introduce the following renormalization trick: $I_N + D^{-1/2} A D^{-1/2} \rightarrow \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$, with $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$.

We can generalize this definition to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels (i.e. a $C$-dimensional feature vector for every node) and $F$ filters or feature maps as follows:

$Z = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta$, (8)

where $\Theta \in \mathbb{R}^{C \times F}$ is now a matrix of filter parameters and $Z \in \mathbb{R}^{N \times F}$ is the convolved signal matrix. This filtering operation has complexity $O(|E| F C)$, as $\tilde{A} X$ can be efficiently implemented as a product of a sparse matrix with a dense matrix.
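The renormalization trick and the generalized filtering of Eq. 8 amount to a short pre-processing step plus one sparse–dense product. A minimal sketch (ours, assuming a scipy sparse adjacency matrix and dense features):

```python
import numpy as np
import scipy.sparse as sp

def renormalized_adj(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2} (renormalization trick)."""
    A_tilde = A + sp.eye(A.shape[0])                 # add self-connections
    d = np.asarray(A_tilde.sum(axis=1)).flatten()    # D~_ii = sum_j A~_ij
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    return (D_inv_sqrt @ A_tilde @ D_inv_sqrt).tocsr()

def gcn_propagate(A_hat, X, Theta):
    """One linear graph convolution Z = A_hat X Theta (Eq. 8).

    Doing the dense product X @ Theta first and the sparse product
    second keeps the cost linear in the number of edges.
    """
    return A_hat @ (X @ Theta)
```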
3 SEMI-SUPERVISED NODE CLASSIFICATION

Having introduced a simple, yet flexible model $f(X, A)$ for efficient information propagation on graphs, we can return to the problem of semi-supervised node classification. As outlined in the introduction, we can relax certain assumptions typically made in graph-based semi-supervised learning by conditioning our model $f(X, A)$ both on the data $X$ and on the adjacency matrix $A$ of the underlying graph structure. We expect this setting to be especially powerful in scenarios where the adjacency matrix contains information not present in the data $X$, such as citation links between documents in a citation network or relations in a knowledge graph. The overall model, a multi-layer GCN for semi-supervised learning, is schematically depicted in Figure 1.

3.1 EXAMPLE

In the following, we consider a two-layer GCN for semi-supervised node classification on a graph with a symmetric adjacency matrix $A$ (binary or weighted). We first calculate $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ in a pre-processing step. Our forward model then takes the simple form:

$Z = f(X, A) = \mathrm{softmax}\big(\hat{A} \, \mathrm{ReLU}(\hat{A} X W^{(0)}) \, W^{(1)}\big)$. (9)

[Figure 1. Left (a): Schematic depiction of a multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with $C$ input channels and $F$ feature maps in the output layer. The graph structure (edges shown as black lines) is shared over layers; labels are denoted by $Y_i$. Right (b): t-SNE (Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained on the Cora dataset (Sen et al., 2008) using 5% of labels. Colors denote document class.]

Here, $W^{(0)} \in \mathbb{R}^{C \times H}$ is an input-to-hidden weight matrix for a hidden layer with $H$ feature maps. $W^{(1)} \in \mathbb{R}^{H \times F}$ is a hidden-to-output weight matrix. The softmax activation function, defined as $\mathrm{softmax}(x_i) = \frac{1}{\mathcal{Z}} \exp(x_i)$ with $\mathcal{Z} = \sum_i \exp(x_i)$, is applied row-wise. For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:

$\mathcal{L} = -\sum_{l \in \mathcal{Y}_L} \sum_{f=1}^{F} Y_{lf} \ln Z_{lf}$, (10)

where $\mathcal{Y}_L$ is the set of node indices that have labels.

The neural network weights $W^{(0)}$ and $W^{(1)}$ are trained using gradient descent. In this work, we perform batch gradient descent using the full dataset for every training iteration, which is a viable option as long as datasets fit in memory. Using a sparse representation for $A$, memory requirement is $O(|E|)$, i.e. linear in the number of edges. Stochasticity in the training process is introduced via dropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochastic gradient descent for future work.

3.2 IMPLEMENTATION

In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based implementation[2] of Eq. 9 using sparse-dense matrix multiplications. The computational complexity of evaluating Eq. 9 is then $O(|E| C H F)$, i.e. linear in the number of graph edges.

[2] Code to reproduce our experiments is available at https://github.com/tkipf/gcn.
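The paper's implementation is in TensorFlow (see the footnote above); as a library-free illustration of Eqs. 9 and 10, the forward pass and the masked cross-entropy loss can be sketched in a few lines of numpy. This is our paraphrase under the assumption that $\hat{A}$ is precomputed as above:

```python
import numpy as np

def gcn_forward(A_hat, X, W0, W1):
    """Two-layer GCN of Eq. 9: Z = softmax(A_hat ReLU(A_hat X W0) W1)."""
    H = np.maximum(A_hat @ (X @ W0), 0.0)          # hidden layer with ReLU
    logits = A_hat @ (H @ W1)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    expz = np.exp(logits)
    return expz / expz.sum(axis=1, keepdims=True)  # row-wise softmax

def masked_cross_entropy(Z, Y, labeled_idx):
    """Cross-entropy of Eq. 10, summed over labeled nodes only.

    Y           : (N, F) one-hot label matrix
    labeled_idx : indices of the labeled set Y_L
    """
    return -np.sum(Y[labeled_idx] * np.log(Z[labeled_idx] + 1e-12))
```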
4 RELATED WORK

Our model draws inspiration both from the field of graph-based semi-supervised learning and from recent work on neural networks that operate on graphs. In what follows, we provide a brief overview on related work in both fields.

4.1 GRAPH-BASED SEMI-SUPERVISED LEARNING

A large number of approaches for semi-supervised learning using graph representations have been proposed in recent years, most of which fall into two broad categories: methods that use some form of explicit graph Laplacian regularization and graph embedding-based approaches. Prominent examples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifold regularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).

Recently, attention has shifted to models that learn graph embeddings with methods inspired by the skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddings via the prediction of the local neighborhood of nodes, sampled from random walks on the graph. LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with more sophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required, where each step has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting label information in the process of learning embeddings.

4.2 NEURAL NETWORKS ON GRAPHS

Neural networks that operate on graphs have previously been introduced in Gori et al. (2005) and Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeated application of contraction maps as propagation functions until node representations reach a stable fixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practices for recurrent neural network training to the original graph neural network framework. Duvenaud et al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-level classification. Their approach requires learning node degree-specific weight matrices, which does not scale to large graphs with wide node degree distributions. Our model instead uses a single weight matrix per layer and deals with varying node degrees through an appropriate normalization of the adjacency matrix (see Section 3.1).

A related approach to node classification with a graph-based neural network was recently introduced in Atwood & Towsley (2016). They report $O(N^2)$ complexity, limiting the range of possible applications. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequences that are fed into a conventional 1D convolutional neural network, which requires the definition of a node ordering in a pre-processing step.

Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al. (2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrast to these works, we consider here the task of transductive node classification within networks of significantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2) can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) that improve scalability and classification performance in large-scale networks.

5 EXPERIMENTS

We test our model in a number of experiments: semi-supervised document classification in citation networks, semi-supervised entity classification in a bipartite graph extracted from a knowledge graph, an evaluation of various graph propagation models, and a run-time analysis on random graphs.
5.1 DATASETS

We closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarized in Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodes are documents and edges are citation links. Label rate denotes the number of labeled nodes that are used for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010; Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relation nodes and 9,891 entity nodes.

Table 1: Dataset statistics, as reported in Yang et al. (2016).

| Dataset | Type | Nodes | Edges | Classes | Features | Label rate |
|---|---|---|---|---|---|---|
| Citeseer | Citation network | 3,327 | 4,732 | 6 | 3,703 | 0.036 |
| Cora | Citation network | 2,708 | 5,429 | 7 | 1,433 | 0.052 |
| Pubmed | Citation network | 19,717 | 44,338 | 3 | 500 | 0.003 |
| NELL | Knowledge graph | 65,755 | 266,144 | 210 | 5,414 | 0.001 |

Citation networks. We consider three citation network datasets: Citeseer, Cora and Pubmed (Sen et al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a list of citation links between documents. We treat the citation links as (undirected) edges and construct a binary, symmetric adjacency matrix $A$. Each document has a class label. For training, we only use 20 labels per class, but all feature vectors.

NELL. NELL is a dataset extracted from the knowledge graph introduced in Carlson et al. (2010). A knowledge graph is a set of entities connected with directed, labeled edges (relations). We follow the pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodes $r_1$ and $r_2$ for each entity pair $(e_1, r, e_2)$ as $(e_1, r_1)$ and $(e_2, r_2)$. Entity nodes are described by sparse feature vectors. We extend the number of features in NELL by assigning a unique one-hot representation for every relation node, effectively resulting in a 61,278-dim sparse feature vector per node. The semi-supervised task here considers the extreme case of only a single labeled example per class in the training set. We construct a binary, symmetric adjacency matrix from this graph by setting entries $A_{ij} = 1$ if one or more edges are present between nodes $i$ and $j$.

Random graphs. We simulate random graph datasets of various sizes for experiments where we measure training time per epoch. For a dataset with $N$ nodes we create a random graph assigning $2N$ edges uniformly at random. We take the identity matrix $I_N$ as input feature matrix $X$, thereby implicitly taking a featureless approach where the model is only informed about the identity of each node, specified by a unique one-hot vector. We add dummy labels $Y_i = 1$ for every node.
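The random-graph benchmark just described can be reproduced with a few lines. This sketch is ours (function name and seed are illustrative); it samples $2N$ endpoint pairs uniformly at random, which may include duplicates and self-loops, as a simple uniform sampling would:

```python
import numpy as np
import scipy.sparse as sp

def random_graph_dataset(N, seed=42):
    """N-node random graph with 2N edges, identity features, dummy labels."""
    rng = np.random.default_rng(seed)
    src = rng.integers(0, N, size=2 * N)
    dst = rng.integers(0, N, size=2 * N)
    A = sp.coo_matrix((np.ones(2 * N), (src, dst)), shape=(N, N))
    A = ((A + A.T) > 0).astype(np.float32)  # symmetrize and binarize
    X = sp.eye(N, format="csr")             # featureless: one-hot node ids
    y = np.ones(N, dtype=np.int64)          # dummy labels Y_i = 1
    return A, X, y
```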
5.2 EXPERIMENTAL SET-UP

Unless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate prediction accuracy on a test set of 1,000 labeled examples. We provide additional experiments using deeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yang et al. (2016) with an additional validation set of 500 labeled examples for hyperparameter optimization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number of hidden units). We do not use the validation set labels for training.

For the citation network datasets, we optimize hyperparameters on Cora only and use the same set of parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutive epochs. We initialize weights using the initialization described in Glorot & Bengio (2010) and accordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hidden layer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).

5.3 BASELINES

We compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number of classes in one of our datasets.

We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor (2003) in conjunction with two logistic regression classifiers, one for local node features alone and one for relational classification using local features and an aggregation operator as described in Sen et al. (2008). We first train the local classifier using all labeled training set nodes and use it to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterative classification (relational classifier) with a random node ordering for 10 iterations on all unlabeled nodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator (count vs. prop, see Sen et al. (2008)) are chosen based on validation set performance for each dataset separately.

Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.

6 RESULTS

6.1 SEMI-SUPERVISED NODE CLASSIFICATION

Results are summarized in Table 2. Reported numbers denote classification accuracy in percent. For ICA, we report the mean accuracy of 100 runs with random node orderings. Results for all other baseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the best model for the respective dataset out of the variants presented in their paper.

Table 2: Summary of results in terms of classification accuracy (in percent).

| Method | Citeseer | Cora | Pubmed | NELL |
|---|---|---|---|---|
| ManiReg [3] | 60.1 | 59.5 | 70.7 | 21.8 |
| SemiEmb [28] | 59.6 | 59.0 | 71.1 | 26.7 |
| LP [32] | 45.3 | 68.0 | 63.0 | 26.5 |
| DeepWalk [22] | 43.2 | 67.2 | 65.3 | 58.1 |
| ICA [18] | 69.1 | 75.1 | 73.9 | 23.1 |
| Planetoid* [29] | 64.7 (26s) | 75.7 (13s) | 77.2 (25s) | 61.9 (185s) |
| GCN (this paper) | 70.3 (7s) | 81.5 (4s) | 79.0 (38s) | 66.0 (48s) |
| GCN (rand. splits) | 67.9 ± 0.5 | 80.1 ± 0.5 | 78.9 ± 0.7 | 58.4 ± 1.7 |

We further report wall-clock training time in seconds until convergence (in brackets) for our method (incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation provided by the authors[3] and trained on the same hardware (with GPU) as our GCN model. We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations. We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), $5 \cdot 10^{-4}$ (L2 regularization) and 16 (number of hidden units); and for NELL: 0.1 (dropout rate), $1 \cdot 10^{-5}$ (L2 regularization) and 64 (number of hidden units).

[3] https://github.com/kimiyoung/planetoid

In addition, we report performance of our model on 10 randomly drawn dataset splits of the same size as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standard error of prediction accuracy on the test set split in percent.

6.2 EVALUATION OF PROPAGATION MODEL

We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow the experimental set-up described in the previous section. Results are summarized in Table 3. The propagation model of our original GCN model is denoted by renormalization trick (in bold).
In all other cases, the propagation model of both neural network layers is replaced with the model specified under propagation model. Reported numbers denote mean classification accuracy for 100 repeated runs with random weight matrix initializations. In case of multiple variables $\Theta_i$ per layer, we impose L2 regularization on all weight matrices of the first layer.

Table 3: Comparison of propagation models.

| Description | Propagation model | Citeseer | Cora | Pubmed |
|---|---|---|---|---|
| Chebyshev filter (Eq. 5), $K=3$ | $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$ | 69.8 | 79.5 | 74.4 |
| Chebyshev filter (Eq. 5), $K=2$ | (same as above) | 69.6 | 81.2 | 73.8 |
| 1st-order model (Eq. 6) | $X \Theta_0 + D^{-1/2} A D^{-1/2} X \Theta_1$ | 68.3 | 80.0 | 77.5 |
| Single parameter (Eq. 7) | $(I_N + D^{-1/2} A D^{-1/2}) X \Theta$ | 69.3 | 79.2 | 77.4 |
| **Renormalization trick (Eq. 8)** | $\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta$ | 70.3 | 81.5 | 79.0 |
| 1st-order term only | $D^{-1/2} A D^{-1/2} X \Theta$ | 68.7 | 80.5 | 77.8 |
| Multi-layer perceptron | $X \Theta$ | 46.5 | 55.1 | 71.4 |

6.3 TRAINING TIME PER EPOCH

[Figure 2: Wall-clock time per epoch (sec./epoch, log scale) for random graphs with 1k to 10M edges, for GPU and CPU implementations. (*) indicates an out-of-memory error.]

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation[4] in TensorFlow (Abadi et al., 2015). Figure 2 summarizes the results.

[4] Hardware used: 16-core Intel Xeon CPU E5-2640 v3 @ 2.60GHz, GeForce GTX TITAN X.

7 DISCUSSION

7.1 SEMI-SUPERVISED MODEL

In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related methods by a significant margin. Methods based on graph-Laplacian regularization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to their assumption that edges encode mere similarity of nodes. Skip-gram based methods on the other hand are limited by the fact that they are based on a multi-step pipeline which is difficult to optimize. Our proposed model can overcome both limitations, while still comparing favorably in terms of efficiency (measured in wall-clock time) to related methods. Propagation of feature information from neighboring nodes in every layer improves classification performance in comparison to methods like ICA (Lu & Getoor, 2003), where only label information is aggregated.

We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers both improved efficiency (fewer parameters and operations, such as multiplication or addition) and better predictive performance on a number of datasets compared to a naïve 1st-order model (Eq. 6) or higher-order graph convolutional models using Chebyshev polynomials (Eq. 5).

7.2 LIMITATIONS AND FUTURE WORK

Here, we describe several limitations of our current model and outline how these might be overcome in future work.

Memory requirement. In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPU memory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent can alleviate this issue. The procedure of generating mini-batches, however, should take into account the number of layers in the GCN model, as the $K$-th-order neighborhood for a GCN with $K$ layers has to be stored in memory for an exact procedure.
For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features. Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL, however, show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions. Through the approximations introduced in Section 2, we implicitly assume locality (dependence on the $K$-th-order neighborhood for a GCN with $K$ layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter $\lambda$ in the definition of $\tilde{A}$:

$\tilde{A} = A + \lambda I_N$. (11)

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.

8 CONCLUSION

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs. Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification. In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.

review_id: HJ3LKSSEg
review_rating: 7: Good paper, accept
review_text:

The paper introduces a method for semi-supervised learning in graphs that exploits the spectral structure of the graph in a convolutional NN implementation. The proposed algorithm has limited complexity and is shown to scale well on a large dataset. The comparison with baselines on different datasets shows a clear jump of performance with the proposed method.
The paper is technically fine and clear, the algorithm seems to scale well, and the results on the different datasets compare very favorably with the different baselines. The algorithm is simple and training seems easy. Concerning originality, the proposed algorithm is a simple adaptation of graph convolutional networks (ref. Defferrard 2016 in the paper) to a semi-supervised transductive setting. This is clearly mentioned in the paper, but the authors could better highlight the differences and novelty w.r.t. this reference paper. Also, there is no comparison with the family of iterative classifiers, which usually compare favorably, both in performance and training time, with regularization-based approaches, although they are mostly used in inductive settings. Below are some references for this family of methods.
The authors mention that more complex filters could be learned by stacking layers, but they limit their architecture to one hidden layer. They should comment on the interest of using more layers for graph classification.
Some references on iterative classification:

Qing Lu and Lise Getoor. 2003. Link-based classification. In ICML, Vol. 3. 496–503.
Gideon S Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. The Journal of Machine Learning Research 11 (2010), 955–984.
David Jensen, Jennifer Neville, and Brian Gallagher. 2004. Why collective inference improves relational classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 593–598.
Joseph J. Pfeiffer III, Jennifer Neville, and Paul N. Bennett. 2015. Overcoming Relational Learning Biases to Accurately Predict Preferences in Large Scale Networks. In Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 853–863.
Stephane Peters, Ludovic Denoyer, and Patrick Gallinari. 2010. Iterative annotation of multi-relational social networks. In Advances in Social Networks Analysis and Mining (ASONAM), 2010 International Conference on. IEEE, 96–103.
review_confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
paper_id: SJU4ayYgl
venue: ICLR.cc/2017/conference
year: 2017
paper_title: Semi-Supervised Classification with Graph Convolutional Networks
paper_authors: ["Thomas N. Kipf", "Max Welling"]
paper_abstract, paper_keywords, paper_content: identical to the row above
For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features: Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL, however, show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions: Through the approximations introduced in Section 2, we implicitly assume locality (dependence on the K-th-order neighborhood for a GCN with K layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter \lambda in the definition of \tilde{A}:

\tilde{A} = A + \lambda I_N.   (11)

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.

8 CONCLUSION

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs. Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification. In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP. | HJMB4vQ4e | Solid results. | 7: Good paper, accept | This paper proposes graph convolutional networks, motivated by an approximation of graph convolutions. What the model does in one propagation step can be simplified as follows: first, linearly transform the representation of each node; then multiply the transformed node representations by the normalized affinity matrix (with self-connections added); and finally pass the result through a nonlinearity.
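That one-step description maps directly onto a couple of lines of NumPy. A minimal sketch for illustration only (the renormalized adjacency a_hat, feature matrix h and weight matrix w are hypothetical placeholders, not the paper's implementation):

import numpy as np

def gcn_step(a_hat, h, w):
    # Transform node representations (h @ w), mix them over the normalized
    # affinity matrix with self-connections (a_hat @ ...), then apply ReLU.
    return np.maximum(a_hat @ h @ w, 0.0)

# Toy usage: 4 nodes, 3 input features, 2 hidden units; the identity
# matrix stands in for a precomputed renormalized adjacency.
rng = np.random.default_rng(0)
h1 = gcn_step(np.eye(4), rng.normal(size=(4, 3)), rng.normal(size=(3, 2)))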
This model is used for semi-supervised learning on graphs, and in the experiments it demonstrated quite impressive results compared to the other baselines, outperforming them by a significant margin. The evaluation of the propagation model is also interesting: different variants of the model and design decisions are evaluated and compared.
It is surprising that such a simple model works so much better than all the baselines. Considering that the model used in most experiments has just two layers, this is especially striking: a two-layer model is very local, the output of a node can only be affected by nodes in its 2-hop neighborhood, and no longer-range interactions can play any role. Since computation is quite efficient (Sec. 6.3), I wonder whether adding more layers would help.
Even though it is motivated by graph convolutions, when simplified as the paper suggests, the operations the model performs are quite simple. Compared to Duvenaud et al. (2015) and Li et al. (2016), the proposed method is simpler and does almost strictly less. So how would the proposed GCN compare against these methods?
Overall I think this model is simple, but the connection to graph convolutions is interesting, and the experimental results are quite good. A few questions remain, but I feel this paper can be accepted. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SJU4ayYgl | ICLR.cc/2017/conference | 2017 | Semi-Supervised Classification with Graph Convolutional Networks | ["Thomas N. Kipf", "Max Welling"] | We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin. | ["Deep learning", "Semi-Supervised Learning"] |
S1eLrWQBg | Simple and reasonable approach | 7: Good paper, accept | The paper develops a simple and reasonable algorithm for graph node prediction/classification. The formulations are very intuitive and lead to simple CNN-based training that can easily leverage existing GPU speedups.
Experiments are thorough and compare against many reasonable baselines on large, real benchmark datasets. That said, I am not closely familiar with the literature on other methods, and there may be similar alternatives, since link and node prediction is an old problem. I still think the approach is quite simple and reasonably supported by good evaluations. | 3: The reviewer is fairly confident that the evaluation is correct |
H1Gq5Q9el | ICLR.cc/2017/conference | 2017 | Unsupervised Pretraining for Sequence to Sequence Learning | ["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"] | This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English->German. On summarization, our method beats the supervised learning baseline. | ["Natural language processing", "Deep learning", "Semi-Supervised Learning", "Transfer Learning"] | ABSTRACT

This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English→German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English→German. On summarization, our method beats the supervised learning baseline.

1 INTRODUCTION

Sequence to sequence (seq2seq) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner & Blunsom, 2013; Allen, 1987; Ñeco & Forcada, 1997) are extremely effective on a variety of tasks that require a mapping between a variable-length input sequence and a variable-length output sequence. The main weakness of sequence to sequence models, and deep networks in general, lies in the fact that they can easily overfit when the amount of supervised training data is small.

In this work, we propose a simple and effective technique for using unsupervised pretraining to improve seq2seq models. Our proposal is to initialize both encoder and decoder networks with pretrained weights of two language models. These pretrained weights are then fine-tuned with the labeled corpus.

We benchmark this method on machine translation for English→German and abstractive summarization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining, exceeds the strongest possible baseline in both neural machine translation and phrase-based machine translation. Our model obtains an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English→German. On abstractive summarization, our method achieves competitive results to the strongest baselines.

We also perform an ablation study to understand the behaviors of the pretraining method. Our study confirms that among many other possible choices of using a language model in seq2seq with attention, the above proposal works best. Our study also shows that, for translation, the main gains come from the improved generalization due to the pretrained features, whereas for summarization the gains come from the improved optimization due to pretraining the encoder, which has been unrolled for hundreds of timesteps. On both tasks, our proposed method always improves generalization on the test sets.

(Work done as an intern on Google Brain.)

2 UNSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNING

In the following section, we will describe our basic unsupervised pretraining procedure for sequence to sequence learning and how to modify sequence to sequence learning to effectively make use of the pretrained weights.
We then show several extensions to improve the basic model.

2.1 BASIC PROCEDURE

Given an input sequence x_1, x_2, ..., x_m and an output sequence y_n, y_{n-1}, ..., y_1, the objective of sequence to sequence learning is to maximize the likelihood p(y_n, y_{n-1}, ..., y_1 | x_1, x_2, ..., x_m). Common sequence to sequence learning methods decompose this objective as p(y_n, y_{n-1}, ..., y_1 | x_1, x_2, ..., x_m) = \prod_{t=1}^{n} p(y_t | y_{t-1}, ..., y_1, x_1, x_2, ..., x_m).

In sequence to sequence learning, an RNN encoder is used to represent x_1, ..., x_m as a hidden vector, which is given to an RNN decoder to produce the output sequence. Our method is based on the observation that without the encoder, the decoder essentially acts like a language model on the y's. Similarly, the encoder with an additional output layer also acts like a language model. Thus it is natural to use trained language models to initialize the encoder and decoder.

Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decoder networks with language models, which can be trained on large amounts of unlabeled text data. This can be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the following we will describe the method in detail using machine translation as an example application.

Figure 1: Pretrained sequence to sequence model. The red parameters are the encoder and the blue parameters are the decoder. All parameters in a shaded box are pretrained, either from the source side (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.

First, two monolingual datasets are collected, one for the source side language, and one for the target side language. A language model (LM) is trained on each dataset independently, giving an LM trained on the source side corpus and an LM trained on the target side corpus.

After the two language models are trained, a multi-layer seq2seq model M is constructed. The embedding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights. To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrained target side LM.

2.2 IMPROVING THE MODEL

We also employ three additional methods to further improve the model above. The three methods are: a) monolingual language modeling losses, b) residual connections, and c) attention over multiple layers (see Figure 2).

Monolingual language modeling losses: After the seq2seq model M is initialized with the two LMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeled data, we regularize the parameters that were pretrained by continuing to train with the monolingual language modeling losses. The seq2seq and language modeling losses are weighted equally.
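As a concrete illustration of the two ideas just described (initializing from pretrained LMs, and fine-tuning with equally weighted LM losses), here is a schematic sketch. It treats models as plain dictionaries of NumPy arrays; all parameter names are hypothetical and not taken from the paper's code:

import numpy as np

def init_from_lms(seq2seq, src_lm, tgt_lm):
    # Encoder: embedding and first LSTM layer come from the source-side LM.
    seq2seq["enc_embed"] = src_lm["embed"].copy()
    seq2seq["enc_lstm0"] = src_lm["lstm0"].copy()
    # Decoder: embedding, first LSTM layer, and softmax come from the
    # target-side LM; all higher layers stay randomly initialized.
    seq2seq["dec_embed"] = tgt_lm["embed"].copy()
    seq2seq["dec_lstm0"] = tgt_lm["lstm0"].copy()
    seq2seq["dec_softmax"] = tgt_lm["softmax"].copy()
    return seq2seq

def fine_tune_loss(seq2seq_nll, src_lm_nll, tgt_lm_nll):
    # Equal weighting of the supervised seq2seq loss and the two
    # monolingual language modeling losses used as regularizers.
    return seq2seq_nll + src_lm_nll + tgt_lm_nll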
Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention over multiple layers.

Residual connections: As described, the input vector to the decoder softmax layer is a random vector because the high-level (non-first) layers of the LSTM are randomly initialized. This slows down training and introduces random gradients to the pretrained parameters, reducing the effectiveness of pretraining. To circumvent this issue, we use a residual connection from the output of the first LSTM layer directly to the input of the softmax (see Figure 2a).

Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanau et al., 2015), where the model attends over both the top and the first layer (see Figure 2b). More concretely, given a query vector q_t from the decoder, encoder states from the first layer h^1_1, ..., h^1_T, and encoder states from the top layer h^N_1, ..., h^N_T, we compute the attention context vector c_t as follows:

\alpha_i = \frac{\exp(q_t \cdot h^N_i)}{\sum_{j=1}^{T} \exp(q_t \cdot h^N_j)}, \qquad c^1_t = \sum_{i=1}^{T} \alpha_i h^1_i, \qquad c^N_t = \sum_{i=1}^{T} \alpha_i h^N_i, \qquad c_t = [c^1_t; c^N_t]

Note that the attention weights \alpha_i are only computed once, using the top-level encoder states.

We also experiment with passing the attention vector c_t as input into the next timestep (Luong et al., 2015b). Instead of passing c_t into the first LSTM layer, we pass it as input to the second LSTM layer by concatenating it with the output of the first LSTM layer.

We use all three improvements in our experiments. However, in general we notice that the benefits of the attention modifications are minor in comparison with the benefits of the additional language modeling objectives and residual connections.
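The attention equations above translate almost line for line into NumPy. A small illustrative sketch (the names are mine, and the softmax uses the usual max-subtraction for numerical stability, which the equations leave implicit):

import numpy as np

def multi_layer_attention(q_t, h_first, h_top):
    # q_t: (d,) decoder query; h_first, h_top: (T, d) encoder states
    # from the first and top layers.
    scores = h_top @ q_t                     # weights use the top layer only
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # attention weights alpha_i
    c_first = alpha @ h_first                # c^1_t: context over first layer
    c_top = alpha @ h_top                    # c^N_t: context over top layer
    return np.concatenate([c_first, c_top])  # c_t = [c^1_t; c^N_t]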
3 EXPERIMENTS

In the following section, we apply our approach to two important tasks in seq2seq learning: machine translation and abstractive summarization. On each task, we compare against the previous best systems. We also perform ablation experiments to understand the behavior of each component of our method.

3.1 MACHINE TRANSLATION

Dataset and Evaluation: For machine translation, we evaluate our method on the WMT English→German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightly smaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a language detection system to filter the training examples. Sentence pairs where either the source was not English or the target was not German were thrown away. This resulted in around 4 million training examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a) with 89500 merge operations, giving a vocabulary size around 90000. The validation set is the concatenated newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015. Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenized text using multi-bleu.perl. Evaluation on the test sets was with case-sensitive BLEU on detokenized text using mteval-v13a.pl. The monolingual training datasets are the News Crawl English and German corpora, each of which has more than a billion tokens.

Experimental settings: The language models were trained in the same fashion as Jozefowicz et al. (2016). We used a 1-layer 4096-dimensional LSTM with the hidden state projected down to 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seq model was a 3-layer model, where the second and third layers each have 1000 hidden units. The monolingual objectives, residual connection, and the modified attention were all used. We used the Adam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed. We used a learning rate of 5e-5 which is multiplied by 0.8 every 50K steps after an initial 400K steps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrent connections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beam size of 10 was used for decoding. Our ensemble is constructed with the 5 best performing models on the validation set, which are trained with different hyperparameters.

Results: Table 1 shows the results of our method in comparison with other baselines. Our method achieves a new state-of-the-art for single-model performance on both newstest2014 and newstest2015, significantly outperforming the competitive semi-supervised backtranslation technique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperforms the previous state-of-the-art ensemble of 4 models. Our ensemble of 5 models matches or exceeds the previous best ensemble of 12 models.

Table 1: English→German performance (BLEU) on WMT test sets. Our pretrained model outperforms all other models. Note that the model without pretraining uses the LM objective.

System                                                 ensemble?    newstest2014  newstest2015
Phrase Based MT (Williams et al., 2016)                -            21.9          23.7
Supervised NMT (Jean et al., 2015)                     single       -             22.4
Edit Distance Transducer NMT (Stahlberg et al., 2016)  single       21.7          24.1
Edit Distance Transducer NMT (Stahlberg et al., 2016)  ensemble 8   22.9          25.7
Backtranslation (Sennrich et al., 2015b)               single       22.7          25.7
Backtranslation (Sennrich et al., 2015b)               ensemble 4   23.8          26.5
Backtranslation (Sennrich et al., 2015b)               ensemble 12  24.7          27.6
No pretraining                                         single       21.3          24.3
Pretrained seq2seq                                     single       24.0          27.0
Pretrained seq2seq                                     ensemble 5   24.7          28.1

Ablation study: In order to better understand the effects of pretraining, we conducted an ablation study by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of various ablations compared with the full model. The full model uses LMs trained with monolingual data to initialize the encoder and decoder, in addition to the language modeling objective. In the following, we interpret the findings of the study. Note that some findings are specific to the translation task. Given the results from the ablation study, we can make the following observations:

- Pretraining the decoder is better than pretraining the encoder: only pretraining the encoder leads to a 1.6 BLEU point drop, while only pretraining the decoder leads to a 1.0 BLEU point drop.
- Pretrain as much as possible because the benefits compound: given the drops of no pretraining at all (-2.0) and only pretraining the encoder (-1.6), the additive estimate of the drop of only pretraining the decoder side is -2.0 - (-1.6) = -0.4; however the actual drop is -1.0, which is a much larger drop than the additive estimate.
- Pretraining the softmax is important: pretraining only the embeddings and first LSTM layer gives a large drop of 1.6 BLEU points.
- The language modeling objective is a strong regularizer: the drop in BLEU points from pretraining the entire model and not using the LM objective is as bad as using the LM objective without pretraining.
- Pretraining on a lot of unlabeled data is essential for learning to extract powerful features: if the model is initialized with LMs that are pretrained on the source part and target part of the parallel corpus, the drop in performance is as large as not pretraining at all. However, performance remains strong when pretrained on the large, non-news Wikipedia corpus.

Figure 3: English→German ablation study measuring the difference in validation BLEU between various ablations and the full model. More negative is worse. The full model uses LMs trained with monolingual data to initialize the encoder and decoder, plus the language modeling objective. (Measured drops: Pretrain on parallel corpus -2.1; No pretraining -2.0; Only pretrain embeddings -2.0; No LM objective -2.0; Only pretrain encoder -1.6; Only pretrain embeddings & LSTM -1.6; Only pretrain decoder -1.0; Pretrain on Wikipedia -0.3.)
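For a concrete reading of the learning-rate schedule from the experimental settings above, here is one plausible interpretation of the written description (this is my reading, not the authors' code):

def learning_rate(step, base_lr=5e-5, hold_steps=400_000,
                  decay_every=50_000, decay=0.8):
    # Constant for an initial 400K steps, then multiplied by 0.8
    # every further 50K steps.
    if step <= hold_steps:
        return base_lr
    return base_lr * decay ** ((step - hold_steps) // decay_every)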
To understand the contributions of unsupervised pretraining vs. supervised training, we track the performance of pretraining as a function of dataset size. For this, we trained a model with and without pretraining on random subsets of the English→German corpus. Both models use the additional LM objective. The results are summarized in Figure 4. When 100% of the labeled data is used, the gap between the pretrained and no-pretrain model is 2.0 BLEU points. However, that gap grows when less data is available. When trained on 20% of the labeled data, the gap becomes 3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled dataset becomes smaller.

Figure 4: Validation performance of pretraining vs. no pretraining when trained on a subset of the entire labeled dataset for English→German translation.

3.2 ABSTRACTIVE SUMMARIZATION

Dataset and Evaluation: For a low-resource abstractive summarization task, we use the CNN/Daily Mail corpus from Hermann et al. (2015). Following Nallapati et al. (2016), we modify the data collection scripts to restore the bullet point summaries. The task is to predict the bullet point summaries from a news article. The dataset has fewer than 300K document-summary pairs. To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our ablation study, we used the non-anonymized corpus. (We encourage future researchers to use the non-anonymized version because it is a more realistic summarization setting with a larger vocabulary. Our numbers on the non-anonymized test set are 35.56 ROUGE-1, 14.60 ROUGE-2, and 25.08 ROUGE-L. We did not consider highlights as separate sentences.) We evaluate our system using full-length ROUGE (Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separate sentence, following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus (Napoles et al., 2012) as our larger, unlabeled "monolingual" corpus, although all data used in this task is in English.

Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, resulting in a vocabulary size of about 32000. We use up to the first 600 tokens of the document and predict the entire summary. Only one language model is trained, and it is used to initialize both the encoder and decoder, since the source and target languages are the same. However, the encoder and decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion to Jozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translation experiments. The only differences are that we use a 2-layer model with the second layer having 1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial 100K steps.

Results: Table 2 summarizes our results on the anonymized version of the corpus.
Our pretrained model is only able to match the previous baseline seq2seq of Nallapati et al. (2016). However, our model is a unidirectional LSTM while they use a bidirectional LSTM. They also use a longer context of 800 tokens, whereas we used a context of 600 tokens due to GPU memory issues. Furthermore, they use pretrained word2vec (Mikolov et al., 2013) vectors to initialize their word embeddings. As we show in our ablation study, just pretraining the embeddings itself gives a large improvement.

Table 2: Results on the anonymized CNN/Daily Mail dataset.

System                                                    ROUGE-1  ROUGE-2  ROUGE-L
Seq2seq + pretrained embeddings (Nallapati et al., 2016)  32.49    11.84    29.47
+ temporal attention (Nallapati et al., 2016)             35.46    13.30    32.65
Pretrained seq2seq                                        32.56    11.89    29.44

Ablation study: We performed an ablation study similar to the one performed on the machine translation model. The results are reported in Figure 5. Here we report the drops in ROUGE-1, ROUGE-2, and ROUGE-L on the non-anonymized validation set. Given the results from our ablation study, we can make the following observations:

- Pretraining improves optimization: in contrast with the machine translation model, it is more beneficial to only pretrain the encoder than only the decoder of the summarization model. One interpretation is that pretraining enables the gradient to flow much further back in time than randomly initialized weights. This may also explain why pretraining on the parallel corpus is no worse than pretraining on a larger monolingual corpus.
- The language modeling objective is a strong regularizer: a model without the LM objective has a significant drop in ROUGE scores.

Figure 5: Summarization ablation study measuring the difference in validation ROUGE between various ablations and the full model. More negative is worse. The full model uses LMs trained with unlabeled data to initialize the encoder and decoder, plus the language modeling objective. (Ablations compared: No pretraining, Only pretrain decoder, No LM objective, Only pretrain embeddings, Only pretrain embeddings & LSTM, Only pretrain encoder, Pretrain on parallel corpus; bars report ROUGE-1, ROUGE-2 and ROUGE-L.)

Human evaluation: As ROUGE may not be able to capture the quality of summarization, we also performed a small qualitative study to understand the human impression of the summaries produced by different models. We took 200 random documents and compared the performance of a pretrained and a non-pretrained system. The document, gold summary, and the two system outputs were presented to a human evaluator, who was asked to rate each system output on a scale of 1-5, with 5 being the best score. The system outputs were presented in random order and the evaluator did not know the identity of either output. The evaluator noted if there were repetitive phrases or sentences in either system output. Unwanted repetition was also noticed by Nallapati et al. (2016). Tables 3 and 4 show the results of the study.
In both cases, the pretrained system outperforms the system without pretraining in a statistically significant manner. The better optimization enabled by pretraining improves the generated summaries and decreases unwanted repetition in the output.

Table 3: The count of how often the no-pretrain system (NP) achieves a higher, equal, or lower score than the pretrained system (P) in the side-by-side study, where the human evaluator gave each system a score from 1-5. The sign test gives a p-value of < 0.0001 for rejecting the null hypothesis that there is no difference in the score obtained by either system.

NP > P  NP = P  NP < P
29      88      83

Table 4: The count of how often the pretrain and no-pretrain systems contain repeated phrases or sentences in their outputs in the side-by-side study. McNemar's test gives a p-value of < 0.0001 for rejecting the null hypothesis that the two systems repeat the same proportion of times. The pretrained system clearly repeats less than the system without pretraining.

                         No pretrain:
                         No repeats   Repeats
Pretrain   No repeats    67           65
Pretrain   Repeats       24           44
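Both reported p-values can be sanity-checked from the counts above. A sketch using scipy.stats.binomtest (SciPy 1.7+), assuming ties are discarded for the sign test and using the exact binomial form of McNemar's test on the discordant pairs (65 and 24):

from scipy.stats import binomtest

# Sign test, Table 3: 29 wins for no-pretrain vs. 83 for pretrain, ties dropped.
sign_p = binomtest(29, 29 + 83, 0.5).pvalue

# Exact McNemar's test, Table 4: discordant pairs 65 vs. 24.
mcnemar_p = binomtest(24, 65 + 24, 0.5).pvalue

print(f"sign test p = {sign_p:.1e}, McNemar p = {mcnemar_p:.1e}")  # both < 0.0001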
In the seq2seq setting, we interpret the first term as a pretrained language model scoring the output sequence. In our work, we fold the pretrained language model into the decoder. We believe that using the pretrained language model only for scoring is less efficient than using all the pretrained weights. Our use of labeled examples satisfies the second term. These connections provide a theoretical grounding for our work.

In our experiments, we benchmark our method on machine translation, where other unsupervised methods have been shown to give promising results (Sennrich et al., 2015b; Cheng et al., 2016). In back-translation (Sennrich et al., 2015b), the trained model is used to decode unlabeled data to yield extra labeled data. One can argue that this method may not have a natural analogue in other tasks such as summarization. We note that their technique is complementary to ours, and may lead to additional gains in machine translation. The method of using autoencoders in Cheng et al. (2016) is promising, though it can be argued that autoencoding is an easy objective, and language modeling may force the unsupervised models to learn better features.

5 CONCLUSION

We presented a novel unsupervised pretraining method to improve sequence to sequence learning. The method can aid in both generalization and optimization. Our scheme involves pretraining two language models in the source and target domains, and initializing the embeddings, first LSTM layers, and softmax of a sequence to sequence model with the weights of the language models. Using our method, we achieved state-of-the-art machine translation results on both WMT'14 and WMT'15 English to German. A key advantage of this technique is that it is flexible and can be applied to a large variety of tasks, such as summarization, where it surpasses the supervised learning baseline.

ACKNOWLEDGMENTS

We thank George Dahl, Andrew Dai, Laurent Dinh, Stephan Gouws, Geoffrey Hinton, Rafal Jozefowicz, Pooya Khorrami, Phillip Louis, Ramesh Nallapati, Arvind Neelakantan, Xin Pan, Abi See, Rico Sennrich, Luke Vilnis, Yuan Yu and the Google Brain team for their help with the project. | HyfU5MFSg | good paper with strong experiments | 7: Good paper, accept | In this paper, the authors propose to pretrain the encoder/decoder of seq2seq models on a large amount of unlabeled data using an LM objective. They obtain improvements using this technique on machine translation and abstractive summarization.
While the effectiveness of pretraining seq2seq models has been known among researchers and explored in a few papers (e.g., Zoph et al., 2016; Dai and Le, 2015), I believe this is the first paper to pretrain using an LM for both the encoder and decoder. The technique is simple, but the gains are large (e.g., +2.7 BLEU on NMT). In addition, the authors perform extensive ablation studies to analyze where the performance is coming from. Hence, I think this paper should be accepted.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
H1Gq5Q9el | ICLR.cc/2017/conference | 2017 | Unsupervised Pretraining for Sequence to Sequence Learning | ["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"] | This work presents a general unsupervised learning method to improve
the accuracy of sequence to sequence (seq2seq) models. In our method, the
weights of the encoder and decoder of a seq2seq model are initialized
with the pretrained weights of two language models and then
fine-tuned with labeled data. We apply this method to
challenging benchmarks in machine translation and abstractive
summarization and find that it significantly improves the subsequent
supervised models. Our main result is that the pretraining
accelerates training and improves generalization of seq2seq models,
achieving state-of-the-art results on the WMT
English->German task, surpassing a range of methods using
both phrase-based machine translation and neural machine
translation. Our method achieves an improvement of 1.3 BLEU from the
previous best models on both WMT'14 and WMT'15
English->German. On summarization, our method beats
the supervised learning baseline. | ["Natural language processing", "Deep learning", "Semi-Supervised Learning", "Transfer Learning"] | ABSTRACTThis work presents a general unsupervised learning method to improve the accu-racy of sequence to sequence (seq2seq) models. In our method, the weights ofthe encoder and decoder of a seq2seq model are initialized with the pretrainedweights of two language models and then fine-tuned with labeled data. We ap-ply this method to challenging benchmarks in machine translation and abstractivesummarization and find that it significantly improves the subsequent supervisedmodels. Our main result is that the pretraining accelerates training and improvesgeneralization of seq2seq models, achieving state-of-the-art results on the WMTEnglish!German task, surpassing a range of methods using both phrase-basedmachine translation and neural machine translation. Our method achieves an im-provement of 1.3 BLEU from the previous best models on both WMT’14 andWMT’15 English!German. On summarization, our method beats the supervisedlearning baseline.1 I NTRODUCTIONSequence to sequence ( seq2seq ) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner& Blunsom, 2013; Allen, 1987; ̃Neco & Forcada, 1997) are extremely effective on a variety oftasks that require a mapping between a variable-length input sequence to a variable-length outputsequence. The main weakness of sequence to sequence models, and deep networks in general, liesin the fact that they can easily overfit when the amount of supervised training data is small.In this work, we propose a simple and effective technique for using unsupervised pretraining toimprove seq2seq models. Our proposal is to initialize both encoder and decoder networks withpretrained weights of two language models. These pretrained weights are then fine-tuned with thelabeled corpus.We benchmark this method on machine translation for English !German and abstractive summa-rization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining,exceeds the strongest possible baseline in both neural machine translation and phrase-based machinetranslation. Our model obtains an improvement of 1.3 BLEU from the previous best models on bothWMT’14 and WMT’15 English !German. On abstractive summarization, our method achievescompetitive results to the strongest baselines.We also perform ablation study to understand the behaviors of the pretraining method. Our studyconfirms that among many other possible choices of using a language model in seq2seq with atten-tion, the above proposal works best. Our study also shows that, for translation, the main gains comefrom the improved generalization due to the pretrained features, whereas for summarization thegains come from the improved optimization due to pretraining the encoder which has been unrolledfor hundreds of timesteps. On both tasks, our proposed method always improves generalization onthe test sets.Work done as an intern on Google Brain.1Under review as a conference paper at ICLR 20172 U NSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNINGIn the following section, we will describe our basic unsupervised pretraining procedure for sequenceto sequence learning and how to modify sequence to sequence learning to effectively make use ofthe pretrained weights. 
In sequence to sequence learning, an RNN encoder is used to represent $x_1, \ldots, x_m$ as a hidden vector, which is given to an RNN decoder to produce the output sequence. Our method is based on the observation that without the encoder, the decoder essentially acts like a language model on the $y$'s. Similarly, the encoder with an additional output layer also acts like a language model. Thus it is natural to use trained language models to initialize the encoder and decoder.

Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decoder networks with language models, which can be trained on large amounts of unlabeled text data. This can be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the following we will describe the method in detail using machine translation as an example application.

[Figure 1: Pretrained sequence to sequence model, drawn as an encoder reading "A B C <EOS>" and a decoder emitting "W X Y Z <EOS>", with an embedding layer, a first RNN layer, a second RNN layer, and a softmax. The red parameters are the encoder and the blue parameters are the decoder. All parameters in a shaded box are pretrained, either from the source side (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.]

First, two monolingual datasets are collected, one for the source side language and one for the target side language. A language model (LM) is trained on each dataset independently, giving an LM trained on the source side corpus and an LM trained on the target side corpus.

After the two language models are trained, a multi-layer seq2seq model M is constructed. The embedding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights. To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrained target side LM.
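A minimal sketch of this initialization step in PyTorch style, assuming hypothetical `src_lm`, `tgt_lm`, and `seq2seq` modules whose submodule names (`embedding`, `lstm1`, `softmax`) are illustrative rather than taken from the paper's code:

```python
def init_from_language_models(seq2seq, src_lm, tgt_lm):
    # Encoder embedding and first LSTM layer come from the source-side LM.
    seq2seq.encoder.embedding.load_state_dict(src_lm.embedding.state_dict())
    seq2seq.encoder.lstm1.load_state_dict(src_lm.lstm.state_dict())
    # Decoder embedding, first LSTM layer, and softmax come from the
    # target-side LM; all higher layers keep their random initialization.
    seq2seq.decoder.embedding.load_state_dict(tgt_lm.embedding.state_dict())
    seq2seq.decoder.lstm1.load_state_dict(tgt_lm.lstm.state_dict())
    seq2seq.decoder.softmax.load_state_dict(tgt_lm.softmax.state_dict())
```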
2.2 IMPROVING THE MODEL

We also employ three additional methods to further improve the model above. The three methods are: (a) monolingual language modeling losses, (b) residual connections, and (c) attention over multiple layers (see Figure 2).

Monolingual language modeling losses: After the seq2seq model M is initialized with the two LMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeled data, we regularize the parameters that were pretrained by continuing to train with the monolingual language modeling losses. The seq2seq and language modeling losses are weighted equally.

[Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention over multiple layers.]

Residual connections: As described, the input vector to the decoder softmax layer is a random vector because the high-level (non-first) layers of the LSTM are randomly initialized. This slows down training and introduces random gradients to the pretrained parameters, reducing the effectiveness of pretraining. To circumvent this issue, we use a residual connection from the output of the first LSTM layer directly to the input of the softmax (see Figure 2-a).

Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanau et al., 2015), where the model attends over both the top and first layers (see Figure 2-b). More concretely, given a query vector $q_t$ from the decoder, encoder states from the first layer $h_1^1, \ldots, h_T^1$, and encoder states from the last layer $h_1^L, \ldots, h_T^L$, we compute the attention context vector $c_t$ as follows:

$\alpha_i = \frac{\exp(q_t \cdot h_i^L)}{\sum_{j=1}^{T} \exp(q_t \cdot h_j^L)}, \qquad c_t^1 = \sum_{i=1}^{T} \alpha_i h_i^1, \qquad c_t^L = \sum_{i=1}^{T} \alpha_i h_i^L, \qquad c_t = [c_t^1; c_t^L]$

Note that the attention weights $\alpha_i$ are computed only once, using the top-level encoder states. We also experiment with passing the attention vector $c_t$ as input into the next timestep (Luong et al., 2015b). Instead of passing $c_t$ into the first LSTM layer, we pass it as input to the second LSTM layer by concatenating it with the output of the first LSTM layer.

We use all three improvements in our experiments. However, in general we notice that the benefits of the attention modifications are minor in comparison with the benefits of the additional language modeling objectives and residual connections.

3 EXPERIMENTS

In the following section, we apply our approach to two important tasks in seq2seq learning: machine translation and abstractive summarization. On each task, we compare against the previous best systems. We also perform ablation experiments to understand the behavior of each component of our method.

3.1 MACHINE TRANSLATION

Dataset and Evaluation: For machine translation, we evaluate our method on the WMT English->German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightly smaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a language detection system to filter the training examples. Sentence pairs where either the source was not English or the target was not German were thrown away. This resulted in around 4 million training examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a) with 89500 merge operations, giving a vocabulary size around 90000. The validation set is the concatenation of newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015. Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenized text using multi-bleu.perl. Evaluation on the test sets was with case-sensitive BLEU on detokenized text using mteval-v13a.pl. The monolingual training datasets are the News Crawl English and German corpora, each of which has more than a billion tokens.

Experimental settings: The language models were trained in the same fashion as Jozefowicz et al. (2016). We used a 1-layer 4096-dimensional LSTM with the hidden state projected down to 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seq model was a 3-layer model, where the second and third layers each have 1000 hidden units. The monolingual objectives, residual connection, and the modified attention were all used. We used the Adam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed. We used a learning rate of 5e-5, which is multiplied by 0.8 every 50K steps after an initial 400K steps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrent connections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beam size of 10 was used for decoding. Our ensemble is constructed from the 5 best-performing models on the validation set, which are trained with different hyperparameters.
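The learning-rate schedule in the settings above is easy to misread, so here is a small sketch of one reading of it (an illustrative reimplementation, not the paper's code):

```python
def learning_rate(step, base_lr=5e-5, constant_steps=400_000,
                  decay_every=50_000, decay=0.8):
    """Hold base_lr for 400K steps, then multiply by 0.8 every 50K steps."""
    if step <= constant_steps:
        return base_lr
    num_decays = (step - constant_steps) // decay_every
    return base_lr * (decay ** num_decays)

# e.g. at step 500K the rate has decayed twice: 5e-5 * 0.8 ** 2
assert learning_rate(500_000) == 5e-5 * 0.8 ** 2
```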
Results: Table 1 shows the results of our method in comparison with other baselines. Our method achieves a new state of the art for single-model performance on both newstest2014 and newstest2015, significantly outperforming the competitive semi-supervised backtranslation technique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperforms the previous state-of-the-art ensemble of 4 models. Our ensemble of 5 models matches or exceeds the previous best ensemble of 12 models.

System                                                  ensemble?     newstest2014   newstest2015
Phrase Based MT (Williams et al., 2016)                 -             21.9           23.7
Supervised NMT (Jean et al., 2015)                      single        -              22.4
Edit Distance Transducer NMT (Stahlberg et al., 2016)   single        21.7           24.1
Edit Distance Transducer NMT (Stahlberg et al., 2016)   ensemble 8    22.9           25.7
Backtranslation (Sennrich et al., 2015b)                single        22.7           25.7
Backtranslation (Sennrich et al., 2015b)                ensemble 4    23.8           26.5
Backtranslation (Sennrich et al., 2015b)                ensemble 12   24.7           27.6
No pretraining                                          single        21.3           24.3
Pretrained seq2seq                                      single        24.0           27.0
Pretrained seq2seq                                      ensemble 5    24.7           28.1

Table 1: English->German performance on WMT test sets. Our pretrained model outperforms all other models. Note that the model without pretraining uses the LM objective.

Ablation study: In order to better understand the effects of pretraining, we conducted an ablation study by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of various ablations compared with the full model. The full model uses LMs trained with monolingual data to initialize the encoder and decoder, in addition to the language modeling objective. In the following, we interpret the findings of the study. Note that some findings are specific to the translation task.

[Figure 3: English->German ablation study measuring the difference in validation BLEU between various ablations and the full model. More negative is worse. The full model uses LMs trained with monolingual data to initialize the encoder and decoder, plus the language modeling objective. The measured drops are: pretrain on parallel corpus -2.1; no pretraining -2.0; only pretrain embeddings -2.0; no LM objective -2.0; only pretrain encoder -1.6; only pretrain embeddings & LSTM -1.6; only pretrain decoder -1.0; pretrain on Wikipedia -0.3.]

Given the results from the ablation study, we can make the following observations:

- Pretraining the decoder is better than pretraining the encoder: only pretraining the encoder leads to a 1.6 BLEU point drop, while only pretraining the decoder leads to a 1.0 BLEU point drop.
- Pretrain as much as possible because the benefits compound: given the drops of no pretraining at all (-2.0) and only pretraining the encoder (-1.6), the additive estimate of the drop of only pretraining the decoder side is -2.0 - (-1.6) = -0.4; however, the actual drop is -1.0, which is a much larger drop than the additive estimate (see the check after this list).
- Pretraining the softmax is important: pretraining only the embeddings and first LSTM layer gives a large drop of 1.6 BLEU points.
- The language modeling objective is a strong regularizer: the drop in BLEU points from pretraining the entire model and not using the LM objective is as bad as using the LM objective without pretraining.
- Pretraining on a lot of unlabeled data is essential for learning to extract powerful features: if the model is initialized with LMs that are pretrained on the source part and target part of the parallel corpus, the drop in performance is as large as not pretraining at all. However, performance remains strong when pretrained on the large, non-news Wikipedia corpus.
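As promised in the second bullet, a quick recomputation of the additive estimate from the reported drops:

```python
# Reported validation-BLEU drops relative to the full model.
drop_no_pretraining = -2.0
drop_encoder_only = -1.6

# If pretraining effects were purely additive, decoder-only pretraining
# would recover exactly what encoder-only pretraining does not.
additive_estimate = drop_no_pretraining - drop_encoder_only  # -0.4
actual_decoder_only = -1.0
print(f"additive estimate: {additive_estimate:+.1f} BLEU; "
      f"actual: {actual_decoder_only:+.1f} BLEU")
```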
To understand the contributions of unsupervised pretraining vs. supervised training, we track the performance of pretraining as a function of dataset size. For this, we trained a model with and without pretraining on random subsets of the English->German corpus. Both models use the additional LM objective. The results are summarized in Figure 4. When 100% of the labeled data is used, the gap between the pretrained and no-pretrain models is 2.0 BLEU points. However, that gap grows when less data is available. When trained on 20% of the labeled data, the gap becomes 3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled dataset becomes smaller.

[Figure 4: Validation BLEU (roughly 15 to 22) of pretraining vs. no pretraining when trained on a subset (20% to 100%) of the entire labeled dataset for English->German translation.]

3.2 ABSTRACTIVE SUMMARIZATION

Dataset and Evaluation: For a low-resource abstractive summarization task, we use the CNN/Daily Mail corpus from Hermann et al. (2015). Following Nallapati et al. (2016), we modify the data collection scripts to restore the bullet point summaries. The task is to predict the bullet point summaries from a news article. The dataset has fewer than 300K document-summary pairs. To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our ablation study, we used the non-anonymized corpus. We evaluate our system using full-length ROUGE (Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separate sentence, following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus (Napoles et al., 2012) as our larger, unlabeled "monolingual" corpus, although all data used in this task is in English.

Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, resulting in a vocabulary size of about 32000. We use up to the first 600 tokens of the document and predict the entire summary. Only one language model is trained, and it is used to initialize both the encoder and decoder, since the source and target languages are the same. However, the encoder and decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion to Jozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translation experiments. The only differences are that we use a 2-layer model with the second layer having 1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial 100K steps.

Results: Table 2 summarizes our results on the anonymized version of the corpus.
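A minimal sketch of the full-length ROUGE evaluation described above, using Google's rouge-score package (the choice of package is an assumption; the paper does not name its ROUGE implementation):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score(
    "police killed the gunman",            # reference summary
    "the gunman was shot down by police",  # system summary
)
for name, s in scores.items():
    print(f"{name}: F1 = {s.fmeasure:.3f}")
```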
| r1L2IyIVe | review | 6: Marginally above acceptance threshold | Authors propose the use of layer-wise, language-model-like pretraining for encoder-decoder models. This allows them to leverage separate source and target corpora (in an unsupervised manner) without the necessity of large amounts of parallel training corpora. The idea is in principle fairly simple, and relies on initially optimising both the encoder and decoder with LSTMs tasked to perform language modelling.
The ideas are not new, and the paper is more a successful compilation of several approaches that have been around for some time. The experimental validation, though, offers some interesting insights into the importance of initialisation and the effectiveness of different initialisation approaches in the enc-dec setting.
The regulariser you propose to use on page 3 looks like a typical multi-task objective function, especially as it is used in an alternating manner; it would be interesting to see whether similar performance might have been obtained starting with this objective from random initialisation.
You should probably give credit to the encoder-decoder-like RNN models published in the 1990s.
Minor comments:
Pg. 2, Sec 2.1 2nd paragraph: can be different sizes -> can be of different sizes | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1Gq5Q9el | ICLR.cc/2017/conference | 2017 | Unsupervised Pretraining for Sequence to Sequence Learning | ["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"] | This work presents a general unsupervised learning method to improve
the accuracy of sequence to sequence (seq2seq) models. In our method, the
weights of the encoder and decoder of a seq2seq model are initialized
with the pretrained weights of two language models and then
fine-tuned with labeled data. We apply this method to
challenging benchmarks in machine translation and abstractive
summarization and find that it significantly improves the subsequent
supervised models. Our main result is that the pretraining
accelerates training and improves generalization of seq2seq models,
achieving state-of-the-art results on the WMT
English->German task, surpassing a range of methods using
both phrase-based machine translation and neural machine
translation. Our method achieves an improvement of 1.3 BLEU from the
previous best models on both WMT'14 and WMT'15
English->German. On summarization, our method beats
the supervised learning baseline. | ["Natural language processing", "Deep learning", "Semi-Supervised Learning", "Transfer Learning"] | ABSTRACTThis work presents a general unsupervised learning method to improve the accu-racy of sequence to sequence (seq2seq) models. In our method, the weights ofthe encoder and decoder of a seq2seq model are initialized with the pretrainedweights of two language models and then fine-tuned with labeled data. We ap-ply this method to challenging benchmarks in machine translation and abstractivesummarization and find that it significantly improves the subsequent supervisedmodels. Our main result is that the pretraining accelerates training and improvesgeneralization of seq2seq models, achieving state-of-the-art results on the WMTEnglish!German task, surpassing a range of methods using both phrase-basedmachine translation and neural machine translation. Our method achieves an im-provement of 1.3 BLEU from the previous best models on both WMT’14 andWMT’15 English!German. On summarization, our method beats the supervisedlearning baseline.1 I NTRODUCTIONSequence to sequence ( seq2seq ) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner& Blunsom, 2013; Allen, 1987; ̃Neco & Forcada, 1997) are extremely effective on a variety oftasks that require a mapping between a variable-length input sequence to a variable-length outputsequence. The main weakness of sequence to sequence models, and deep networks in general, liesin the fact that they can easily overfit when the amount of supervised training data is small.In this work, we propose a simple and effective technique for using unsupervised pretraining toimprove seq2seq models. Our proposal is to initialize both encoder and decoder networks withpretrained weights of two language models. These pretrained weights are then fine-tuned with thelabeled corpus.We benchmark this method on machine translation for English !German and abstractive summa-rization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining,exceeds the strongest possible baseline in both neural machine translation and phrase-based machinetranslation. Our model obtains an improvement of 1.3 BLEU from the previous best models on bothWMT’14 and WMT’15 English !German. On abstractive summarization, our method achievescompetitive results to the strongest baselines.We also perform ablation study to understand the behaviors of the pretraining method. Our studyconfirms that among many other possible choices of using a language model in seq2seq with atten-tion, the above proposal works best. Our study also shows that, for translation, the main gains comefrom the improved generalization due to the pretrained features, whereas for summarization thegains come from the improved optimization due to pretraining the encoder which has been unrolledfor hundreds of timesteps. On both tasks, our proposed method always improves generalization onthe test sets.Work done as an intern on Google Brain.1Under review as a conference paper at ICLR 20172 U NSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNINGIn the following section, we will describe our basic unsupervised pretraining procedure for sequenceto sequence learning and how to modify sequence to sequence learning to effectively make use ofthe pretrained weights. 
| S1iDqXoVl | the paper addresses a very important issue of exploiting non-parallel training data, but it should add detailed discussion comparing with two pieces of prior art detailed in the review below | 5: Marginally below acceptance threshold | Strengths:
A method is proposed in this paper to initialize the encoder and decoder of the seq2seq model using the trained weights of language models, requiring no parallel data. After such pretraining, all weights are jointly fine-tuned on parallel labeled data with an additional language modeling loss.
It is shown that pretraining accelerates training and improves generalization of seq2seq models.
The main value of the proposed method is to leverage separate source and target corpora, contrasting the common methods of using large amounts of parallel training corpora.
weaknesses:
The objective function shown in the middle of page 3 is highly empirical and not directly linked to how non-parallel data helps to improve the final prediction results. The paper should compare with and discuss the objective function based on the expectation of cross entropy, which is directly linked to improving prediction results, as proposed in arXiv:1606.04646, Chen et al.: Unsupervised Learning of Predictors from Unpaired Input-Output Samples, 2016.
The pre-training procedure proposed in this paper is also closely connected with the DNN pretraining method presented in Dahl et al. 2011, 2012. Comparisons should be made in the paper, highlighting why the proposed one is conceptually superior if the authors believe so.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
Sys6GJqxl | ICLR.cc/2017/conference | 2017 | Delving into Transferable Adversarial Examples and Black-box Attacks | ["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"] | An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system. | ["Computer vision", "Deep learning", "Applications"] | ABSTRACT

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.

1 INTRODUCTION

Recent research has demonstrated that for a deep architecture, it is easy to generate adversarial examples, which are close to the original ones but are misclassified by the deep architecture (Szegedy et al. (2013); Goodfellow et al. (2014)). The existence of such adversarial examples may have severe consequences, which hinders vision-understanding-based applications, such as autonomous driving. Most of these studies require explicit knowledge of the underlying models.
It remains an open question how to efficiently find adversarial examples for a black-box model.

Several works have demonstrated that some adversarial examples generated for one model may also be misclassified by another model. Such a property is referred to as transferability, which can be leveraged to perform black-box attacks. This property has been exploited by constructing a substitute of the black-box model, and generating adversarial instances against the substitute to attack the black-box system (Papernot et al. (2016a;b)). However, so far, transferability is mostly examined over small datasets, such as MNIST (LeCun et al. (1998)) and CIFAR-10 (Krizhevsky & Hinton (2009)). Transferability over large scale datasets, such as ImageNet (Russakovsky et al. (2015)), has yet to be better understood.

In this work, we are the first to conduct an extensive study of the transferability of different adversarial instance generation strategies applied to different state-of-the-art models trained over a large scale dataset. In particular, we study two types of adversarial examples: (1) non-targeted adversarial examples, which can be misclassified by a network, regardless of what the misclassified labels may be; and (2) targeted adversarial examples, which can be classified by a network as a target label. We examine several existing approaches searching for adversarial examples based on a single model. While non-targeted adversarial examples are more likely to transfer, we observe few targeted adversarial examples that are able to transfer with their target labels.

(* Work is done while visiting UC Berkeley.)

We further propose a novel strategy to generate transferable adversarial images using an ensemble of multiple models. In our evaluation, we observe that this new strategy can generate non-targeted adversarial instances with better transferability than other methods examined in this work. Also, for the first time, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels.

We study geometric properties of the models in our evaluation. In particular, we show that the gradient directions of different models are orthogonal to each other. We also show that decision boundaries of different models align well with each other, which partially illustrates why adversarial examples can transfer.

Last, we study whether generated adversarial images can attack Clarifai.com, a commercial company providing state-of-the-art image classification services. We have no knowledge about the training dataset and the types of models used by Clarifai.com; meanwhile, the label set of Clarifai.com is quite different from ImageNet's. We show that even in this case, both non-targeted and targeted adversarial images transfer to Clarifai.com. This is the first work documenting the success of generating both non-targeted and targeted adversarial examples for a black-box state-of-the-art online image classification system, whose model and training dataset are unknown to the attacker.

Contributions and organization. We summarize our main contributions as follows:
- For ImageNet models, we show that while existing approaches are effective to generate non-targeted transferable adversarial examples (Section 3), only few targeted adversarial examples generated by existing methods can transfer (Section 4).
- We propose novel ensemble-based approaches to generate adversarial examples (Section 5). Our approaches enable a large portion of targeted adversarial examples to transfer among multiple models for the first time.
- We are the first to present that targeted adversarial examples generated for models trained on ImageNet can transfer to a black-box system, i.e., Clarifai.com, whose model, training data, and label set are unknown to us (Section 7). In particular, Clarifai.com's label set is very different from ImageNet's.
- We conduct the first analysis of geometric properties for large models trained over ImageNet (Section 6), and the results reveal several interesting findings, such as that the gradient directions of different models are orthogonal to each other.

In the following, we first discuss related work, and then present the background knowledge and experiment setup in Section 2. Then we present each of our experiments and conclusions in the corresponding section as mentioned above.

Related work. Transferability of adversarial examples was first examined by Szegedy et al. (2013), which studied the transferability (1) between different models trained over the same dataset, and (2) between the same or different models trained over disjoint subsets of a dataset. However, Szegedy et al. (2013) only studied MNIST.

The study of transferability was followed by Goodfellow et al. (2014), which attributed the phenomenon of transferability to the reason that the adversarial perturbation is highly aligned with the weight vector of the model. Again, this hypothesis was tested using MNIST and CIFAR-10 datasets. We show that this is not the case for models trained over ImageNet.

Papernot et al. (2016a;b) examined constructing a substitute model to attack a black-box target model. To train the substitute model, they developed a technique that synthesizes a training set and annotates it by querying the target model for labels. They demonstrate that using this approach, black-box attacks are feasible towards machine learning services hosted by Amazon, Google, and MetaMind. Further, Papernot et al. (2016a) studied the transferability between deep neural networks and other models such as decision trees, kNN, etc.

Our work differs from Papernot et al. (2016a;b) in three aspects. First, in these works, only the model and the training process are a black box, but the training set and the test set are controlled by the attacker; in contrast, we attack Clarifai.com, whose model, training data, training process, and even the test label set are unknown to the attacker. Second, the datasets studied in these works are small scale, i.e., MNIST and GTSRB (Stallkamp et al. (2012)); in our work, we study the transferability over larger models and a larger dataset, i.e., ImageNet. Third, to attack black-box machine learning systems, we do not query the systems for constructing the substitute model ourselves.

In a concurrent and independent work, Moosavi-Dezfooli et al. (2016) showed the existence of a universal perturbation for each model, which can transfer across different images. They also show that the adversarial images generated using these universal perturbations can transfer across different models on ImageNet. However, they only examine the non-targeted transferability, while our work studies both non-targeted and targeted transferability over ImageNet.

2 ADVERSARIAL DEEP LEARNING AND TRANSFERABILITY

2.1 THE ADVERSARIAL DEEP LEARNING PROBLEM

We assume a classifier f(x) outputs a category (or a label) as the prediction.
Given an original image x, with ground truth label y, the adversarial deep learning problem is to seek adversarial examples for the classifier f(x). Specifically, we consider two classes of adversarial examples. A non-targeted adversarial example x* is an instance that is close to x, in which case x* should have the same ground truth as x, while f(x*) ≠ y. For the problem to be non-trivial, we assume f(x) = y without loss of generality. A targeted adversarial example x* is close to x and satisfies f(x*) = y*, where y* is a target label specified by the adversary, and y* ≠ y.

2.2 APPROACHES FOR GENERATING ADVERSARIAL EXAMPLES

In this work, we consider three classes of approaches for generating adversarial examples: optimization-based approaches, fast gradient approaches, and fast gradient sign approaches. Each class has non-targeted and targeted versions respectively.

2.2.1 APPROACHES FOR GENERATING NON-TARGETED ADVERSARIAL EXAMPLES

Formally, given an image x with ground truth y = f(x), searching for a non-targeted adversarial example can be modeled as searching for an instance x* to satisfy the following constraints:

$f(x^\star) \neq y$   (1)
$d(x, x^\star) \leq B$   (2)

where d(·,·) is a metric to quantify the distance between an original image and its adversarial counterpart, and B, called distortion, is an upper bound placed on this distance. Without loss of generality, we consider that model f is composed of a network J(x), which outputs the probability for each category, so that f outputs the category with the highest probability.

Optimization-based approach. One approach is to approximate the solution to the following optimization problem:

$\mathrm{argmin}_{x^\star} \; \lambda\, d(x, x^\star) - \ell(\mathbf{1}_y, J(x^\star))$   (3)

where 1_y is the one-hot encoding of the ground truth label y, ℓ is a loss function to measure the distance between the prediction and the ground truth, and λ is a constant to balance constraints (2) and (1), which is empirically determined. Here, loss function ℓ is used to approximate constraint (1), and its choice can affect the effectiveness of searching for an adversarial example. In this work, we choose ℓ(u, v) = log(1 − u · v), which is shown to be effective by Carlini & Wagner (2016).

Fast gradient sign (FGS). Goodfellow et al. (2014) proposed the fast gradient sign (FGS) method so that the gradient needs to be computed only once to generate an adversarial example. FGS can be used to generate adversarial images that meet the L∞ norm bound. Formally, non-targeted adversarial examples are constructed as

$x^\star \leftarrow \mathrm{clip}(x + B\, \mathrm{sgn}(\nabla_x \ell(\mathbf{1}_y, J(x))))$

Here, clip(x) is used to clip each dimension of x to the range of pixel values, i.e., [0, 255] in this work. We make a slight variation to choose ℓ(u, v) = log(1 − u · v), which is the same as used in the optimization-based approach.

Fast gradient (FG). The fast gradient approach (FG) is similar to FGS, but instead of moving along the gradient sign direction, FG moves along the gradient direction. In particular, we have

$x^\star \leftarrow \mathrm{clip}\!\left(x + B\, \frac{\nabla_x \ell(\mathbf{1}_y, J(x))}{\lVert \nabla_x \ell(\mathbf{1}_y, J(x)) \rVert}\right)$

Here, we assume the distance metric in constraint (2), d(x, x*) = ||x − x*||, is a norm of x − x*. The term sgn(∇_x ℓ) in FGS is replaced by ∇_x ℓ / ||∇_x ℓ|| to meet this distance constraint.

We call both FGS and FG fast gradient-based approaches.

2.2.2 APPROACHES FOR GENERATING TARGETED ADVERSARIAL EXAMPLES

A targeted adversarial image x* is similar to a non-targeted one, but constraint (1) is replaced by

$f(x^\star) = y^\star$   (4)

where y* is the target label given by the adversary. For the optimization-based approach, we approximate the solution by solving the following dual objective:

$\mathrm{argmin}_{x^\star} \; \lambda\, d(x, x^\star) + \ell'(\mathbf{1}_{y^\star}, J(x^\star))$   (5)

In this work, we choose the standard cross entropy loss ℓ′(u, v) = −Σ_i u_i log v_i.

For FGS and FG, we construct adversarial examples as follows:

$x^\star \leftarrow \mathrm{clip}(x - B\, \mathrm{sgn}(\nabla_x \ell'(\mathbf{1}_{y^\star}, J(x))))$   (FGS)
$x^\star \leftarrow \mathrm{clip}\!\left(x - B\, \frac{\nabla_x \ell'(\mathbf{1}_{y^\star}, J(x))}{\lVert \nabla_x \ell'(\mathbf{1}_{y^\star}, J(x)) \rVert}\right)$   (FG)

where ℓ′ is the same as the one used for the optimization-based approach.
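As a concrete illustration of the four fast gradient-based constructions above, a minimal PyTorch-style sketch follows; it assumes a differentiable `model` mapping a [0, 255]-valued image tensor of shape (1, C, H, W) to class probabilities, and it is not the authors' released code.

```python
import torch

def fast_gradient_attack(model, x, label, B, targeted=False, sign=False):
    # label: ground truth y (non-targeted) or target y* (targeted).
    x = x.clone().detach().requires_grad_(True)
    probs = model(x)                              # J(x): class probabilities
    if targeted:
        loss = -torch.log(probs[0, label])        # l'(1_{y*}, J(x)), cross entropy
    else:
        loss = torch.log(1.0 - probs[0, label])   # l(1_y, J(x)) = log(1 - J(x)_y)
    grad = torch.autograd.grad(loss, x)[0]
    step = grad.sign() if sign else grad / grad.norm()   # FGS vs. FG
    direction = -1.0 if targeted else 1.0         # descend l' toward the target
    return torch.clamp(x + direction * B * step, 0.0, 255.0).detach()
```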
2.3 EVALUATION METHODOLOGY

For the rest of the paper, we focus on examining the transferability among state-of-the-art models trained over ImageNet (Russakovsky et al. (2015)). In this section, we detail the models to be examined, the dataset to be evaluated, and the measurements to be used.

Models. We examine five networks: ResNet-50, ResNet-101, ResNet-152 (He et al. (2015)) [1], GoogLeNet (Szegedy et al. (2014)) [2], and VGG-16 (Simonyan & Zisserman (2014)) [3]. We retrieve the pre-trained models for each network online. The performance of these models on the ILSVRC 2012 (Russakovsky et al. (2015)) validation set can be found in our online technical report: Liu et al. (2016). We choose these models to study the transferability between homogeneous architectures (i.e., ResNet models) and heterogeneous architectures.

Dataset. It is less meaningful to examine the transferability of an adversarial image between two models which cannot classify the original image correctly. Therefore, from the ILSVRC 2012 validation set, we randomly choose 100 images, which can be classified correctly by all five models in our examination. These 100 images form our test set. To perform targeted attacks, we manually choose a target label for each image, so that its semantics is far from the ground truth. The images and target labels in our evaluation can be found on our website [4].

Measuring transferability. Given two models, we measure the non-targeted transferability by computing the percentage of the adversarial examples generated for one model that can be classified correctly by the other. We refer to this percentage as accuracy. A lower accuracy means better non-targeted transferability. We measure the targeted transferability by computing the percentage of the adversarial examples generated for one model that are classified as the target label by the other model. We refer to this percentage as matching rate. A higher matching rate means better targeted transferability. For clarity, the reported results are only based on top-1 accuracy. Top-5 accuracy's counterparts can be found in our online technical report: Liu et al. (2016).

[1] https://github.com/KaimingHe/deep-residual-networks
[2] https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
[3] https://gist.github.com/ksimonyan/211839e770f7b538e2d8
[4] https://github.com/sunblaze-ucb/transferability-advdnn-pub

Distortion. Besides transferability, another important factor is the distortion between adversarial images and the original ones. We measure the distortion by root mean square deviation, i.e., RMSD, which is computed as $d(x^\star, x) = \sqrt{\sum_i (x^\star_i - x_i)^2 / N}$, where x* and x are the vector representations of an adversarial image and the original one respectively, N is the dimensionality of x and x*, and x_i denotes the pixel value of the i-th dimension of x, within range [0, 255], and similarly for x*_i.

3 NON-TARGETED ADVERSARIAL EXAMPLES

In this section, we examine different approaches for generating non-targeted adversarial images.

3.1 OPTIMIZATION-BASED APPROACH

To apply the optimization-based approach for a single model, we initialize x* to be x and use Adam Optimizer (Kingma & Ba (2014)) to optimize Objective (3).
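The attack loop can be sketched as follows, assuming a differentiable PyTorch `model` that outputs class probabilities; it mirrors the settings reported below (λ = 0, learning rate 4, 100 iterations) but is an illustration rather than the authors' implementation.

```python
import torch

def optimization_attack(model, x, y, lr=4.0, iters=100):
    # Non-targeted version of Objective (3) with lambda = 0: minimize
    # -l(1_y, J(x*)) = -log(1 - J(x*)_y), starting from x* = x.
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = -torch.log(1.0 - model(x_adv)[0, y])
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0.0, 255.0)          # keep valid pixel values
    rmsd = ((x_adv.detach() - x) ** 2).mean().sqrt().item()
    return x_adv.detach(), rmsd
```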
We find that we can tune the RMSD by adjusting the learning rate of Adam and λ. We find that, for each model, we can use a small learning rate to generate adversarial images with small RMSD, i.e., < 2, with any λ. In fact, we find that when initializing x* with x, Adam Optimizer will search for an adversarial example around x, even when we set λ to be 0, i.e., not restricting the distance between x* and x. Therefore, we set λ to be 0 for all experiments using optimization-based approaches throughout the paper. Although these adversarial examples with small distortions can successfully fool the target model, they cannot transfer well to other models (details can be found in our online technical report: Liu et al. (2016)).

We increase the learning rate to allow the optimization algorithm to search for adversarial images with larger distortion. In particular, we set the learning rate to be 4. We run Adam Optimizer for 100 iterations to generate the adversarial images. We observe that the loss converges after 100 iterations. An alternative optimization-based approach leading to similar results can be found in our online technical report: Liu et al. (2016).

Non-targeted adversarial examples transfer. We generate non-targeted adversarial examples on one network, but evaluate them on another, and Table 1 Panel A presents the results. From the table, we can observe that:
- The diagonal contains all 0 values. This says that all adversarial images generated for one model can mislead the same model.
- A large proportion of non-targeted adversarial images generated for one model using the optimization-based approach can transfer to another.
- Although the three ResNet models share similar architectures which differ only in the hyperparameters, adversarial examples generated against a ResNet model do not necessarily transfer to another ResNet model better than to other non-ResNet models. For example, the adversarial examples generated for VGG-16 have lower accuracy on ResNet-50 than those generated for ResNet-152 or ResNet-101.

3.2 FAST GRADIENT-BASED APPROACHES

We then examine the effectiveness of fast gradient-based approaches. A good property of fast gradient-based approaches is that all generated adversarial examples lie in a 1-D subspace. Therefore, we can easily approximate the minimal distortion in this subspace of transferable adversarial examples between two models. In the following, we first control the RMSD to study fast gradient-based approaches' effectiveness. Second, we study the transferable minimal distortions of fast gradient-based approaches.

3.2.1 EFFECTIVENESS AND TRANSFERABILITY OF THE FAST GRADIENT-BASED APPROACHES

Since the distortion B and the RMSD of the generated adversarial images are highly correlated, we can choose this hyperparameter B to generate adversarial images with a given RMSD.

Panel A: Optimization-based approach
              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152    22.83       0%        13%         18%       19%      11%
ResNet-101    23.81      19%         0%         21%       21%      12%
ResNet-50     22.86      23%        20%          0%       21%      18%
VGG-16        22.51      22%        17%         17%        0%       5%
GoogLeNet     22.58      39%        38%         34%       19%       0%

Panel B: Fast gradient approach
              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152    23.45       4%        13%         13%       20%      12%
ResNet-101    23.49      19%         4%         11%       23%      13%
ResNet-50     23.49      25%        19%          5%       25%      14%
VGG-16        23.73      20%        16%         15%        1%       7%
GoogLeNet     23.45      25%        25%         17%       19%       1%

Table 1: Transferability of non-targeted adversarial images generated between pairs of models. The first column indicates the average RMSD of all adversarial images generated for the model in the corresponding row. The cell (i, j) indicates the accuracy of the adversarial images generated for model i (row) evaluated over model j (column). Results of top-5 accuracy can be found in our online technical report: Liu et al. (2016).
In Table 1 Panel B, we generate adversarial images using FG such that the average RMSD is almost the same as that of those generated using the optimization-based approach. We observe that the diagonal values in the table are all positive, which means that FG cannot fully mislead the models. A potential reason is that FG can be viewed as approximating the optimization, but is tailored for speed over accuracy. On the other hand, the values of non-diagonal cells in the table, which correspond to the accuracies of adversarial images generated for one model but evaluated on another, are comparable with or less than their counterparts in the optimization-based approach. This shows that non-targeted adversarial examples generated by FG exhibit transferability as well.

We also evaluate FGS, but the transferability of the generated images is worse than that of the ones generated using either FG or optimization-based approaches. The results can be found in our online technical report: Liu et al. (2016). They show that when the RMSD is around 23, the accuracies of the adversarial images generated by FGS are greater than their counterparts for FG. We hypothesize that this fact is the reason why the transferability of FGS is worse.

3.2.2 ADVERSARIAL IMAGES WITH MINIMAL TRANSFERABLE RMSD

For an image x and two models M1, M2, we can approximate the minimal distortion B along a direction δ, such that x_B = x + Bδ generated for M1 is adversarial for both M1 and M2. Here δ is the direction, i.e., sgn(∇_x ℓ) for FGS, and ∇_x ℓ / ||∇_x ℓ|| for FG.

We refer to the minimal transferable RMSD from M1 to M2 using FG (or FGS) as the RMSD of a transferable adversarial example x_B with the minimal transferable distortion B from M1 to M2 using FG (or FGS). The minimal transferable RMSD can illustrate the tradeoff between distortion and transferability.

In the following, we approximate the minimal transferable RMSD through a linear search by sampling B every 0.1 step. We choose the linear-search method rather than a binary-search method to determine the minimal transferable RMSD because the adversarial images generated from an original image may come from multiple intervals. The experiment can be found in our online technical report: Liu et al. (2016).

Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56 respectively. Further, the FG method can generate transferable attacks with smaller RMSD than FGS. A potential reason is that while FGS minimizes the distortion's L∞ norm, FG minimizes its L2 norm, which is proportional to RMSD.

Figure 1: The CDF of the minimal transferable RMSD from VGG-16 to ResNet-152 using FG (a) and FGS (b). The green line labels the median minimal transferable RMSD, while the red line labels the minimal transferable RMSD needed to reach the 90th percentile.
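The linear search in Section 3.2.2 can be sketched as follows, assuming NumPy arrays and two top-1 classifiers `predict1`/`predict2`; the helper names and the scan bound are illustrative.

```python
import numpy as np

def minimal_transferable_rmsd(x, y, delta, predict1, predict2,
                              step=0.1, b_max=200.0):
    # delta: fixed attack direction computed once on the source model,
    # e.g. sign(grad) for FGS or grad / ||grad|| for FG. A linear scan is
    # used because the adversarial region along one direction may consist
    # of multiple disjoint intervals, so binary search could miss it.
    for B in np.arange(step, b_max, step):
        x_b = np.clip(x + B * delta, 0.0, 255.0)
        if predict1(x_b) != y and predict2(x_b) != y:
            return np.sqrt(np.mean((x_b - x) ** 2))   # RMSD at minimal B
    return None   # nothing transferable within the scanned range
```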
3.3 COMPARISON WITH RANDOM PERTURBATIONS

We also evaluate the test accuracy when we add Gaussian noise to the 100 images in our test set. The concrete results can be found in our online technical report: Liu et al. (2016), where we show the conclusion that the "transferability" of this approach is significantly worse than that of either optimization-based approaches or fast gradient-based approaches.

             RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152   23.13     100%         2%          1%        1%       1%
ResNet-101   23.16       3%       100%          3%        2%       1%
ResNet-50    23.06       4%         2%        100%        1%       1%
VGG-16       23.59       2%         1%          2%      100%       1%
GoogLeNet    22.87       1%         1%          0%        1%     100%

Table 2: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the matching rate of the targeted adversarial images generated for model i (row) when evaluated on model j (column). The top-5 results can be found in our online technical report: Liu et al. (2016).

4 TARGETED ADVERSARIAL EXAMPLES

In this section, we examine the transferability of targeted adversarial images. Table 2 presents the results for the optimization-based approach. We observe that (1) the prediction of targeted adversarial images can match the target labels when evaluated on the same model that is used to generate the adversarial examples; but (2) the targeted adversarial images can rarely be predicted as the target labels by a different model. We say in the latter case that the target labels do not transfer. Even when we increase the distortion, we still do not observe improvements in making target labels transfer. Some results can be found in our online technical report: Liu et al. (2016). Even if we compute the matching rate based on top-5 accuracy, the highest matching rate is only 10%. The results can be found in our online technical report: Liu et al. (2016).

We also examine the targeted adversarial images generated by fast gradient-based approaches, and we observe that the target labels do not transfer either. The results can be found in our online technical report: Liu et al. (2016). In fact, most targeted adversarial images cannot even mislead the model for which they are generated to predict the target labels, regardless of how large a distortion is used. We attribute this to the fact that the fast gradient-based approaches only search for attacks in a 1-D subspace. In this subspace, the total possible predictions may contain a small subset of all labels, which usually does not contain the target label. In Section 6, we study decision boundaries regarding this issue.

We also evaluate the matching rate of images with added Gaussian noise, as described in Section 3.3. However, we observe that the matching rate on any of the 5 models is 0%. Therefore, we conclude that by adding Gaussian noise, the attacker cannot generate successful targeted adversarial examples at all, let alone achieve targeted transferability.

5 ENSEMBLE-BASED APPROACHES

We hypothesize that if an adversarial image remains adversarial for multiple models, then it is more likely to transfer to other models as well. We develop techniques to generate adversarial images for multiple models. The basic idea is to generate adversarial images for the ensemble of the models. Formally, given k white-box models with softmax outputs being J_1, ..., J_k, an original image x, and its ground truth y, the ensemble-based approach solves the following optimization problem (for a targeted attack):

$\mathrm{argmin}_{x^\star} \; -\log\Big(\big(\textstyle\sum_{i=1}^{k} \alpha_i J_i(x^\star)\big) \cdot \mathbf{1}_{y^\star}\Big) + \lambda\, d(x, x^\star)$   (6)

where y* is the target label specified by the adversary, Σ_i α_i J_i(x*) is the ensemble model, and the α_i are the ensemble weights, with Σ_{i=1}^k α_i = 1. Note that (6) is the targeted objective. The non-targeted counterpart can be derived similarly. In doing so, we hope the generated adversarial images remain adversarial for an additional black-box model J_{k+1}.
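The ensemble loss of Eq. (6) is straightforward to express in code. A minimal PyTorch-style sketch, assuming models that output probabilities and the equal weights α_i = 1/k used in the experiments below (with λ = 0 the distance term drops out):

```python
import torch

def ensemble_targeted_loss(models, x_adv, target, alphas=None):
    # (sum_i alpha_i J_i(x*)) . 1_{y*}: the ensemble's probability of the
    # target label; minimizing its negative log realizes Eq. (6).
    k = len(models)
    alphas = alphas if alphas is not None else [1.0 / k] * k  # sum to 1
    p_target = sum(a * m(x_adv)[0, target] for a, m in zip(alphas, models))
    return -torch.log(p_target)
```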
We evaluate the effectiveness of the ensemble-based approach. For each of the five models, we treat it as the black-box model to attack, and generate adversarial images for the ensemble of the remaining four, which is considered as white-box. We evaluate the generated adversarial images over all five models. Throughout the rest of the paper, we refer to the approaches evaluated in Sections 3 and 4 as the approaches using a single model, and to the ensemble-based approaches discussed in this section as the approaches using an ensemble model.

Optimization-based approach. We use Adam to optimize the objective (6) with equal ensemble weights across all models in the ensemble to generate targeted adversarial examples. In particular, we set the learning rate of Adam to be 8 for each model. In each iteration, we compute the Adam update for each model, sum up the four updates, and add the aggregation onto the image. We run 100 iterations of updates, and we observe that the loss converges after 100 iterations. By doing so, for the first time, we observe a large proportion of targeted adversarial images whose target labels can transfer. The results are presented in Table 3. We observe that not all targeted adversarial images can be misclassified to the target labels by the models used in the ensemble. This suggests that while searching for an adversarial example for the ensemble model, there is no direct supervision to mislead any individual model in the ensemble to predict the target label. Further, from the diagonal numbers of the table, we observe that the transferability to ResNet models is better than to VGG-16 or GoogLeNet, when adversarial examples are generated against all models except the target model.

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
-ResNet-152   30.68      38%        76%         70%       97%      76%
-ResNet-101   30.76      75%        43%         69%       98%      73%
-ResNet-50    30.26      84%        81%         46%       99%      77%
-VGG-16       31.13      74%        78%         68%       24%      63%
-GoogLeNet    29.70      90%        87%         83%       99%      11%

Table 3: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the percentage of the targeted adversarial images generated for the ensemble of the four models excluding model i (row) that are predicted as the target label by model j (column). In each row, the minus sign "−" indicates that the model of the row is not used when generating the attacks. Results of top-5 matching rate can be found in our online technical report: Liu et al. (2016).

We also evaluate non-targeted adversarial images generated by the ensemble-based approach. We observe that the generated adversarial images have almost perfect transferability. We use the same procedure as for the targeted version, except for the objective used to generate the adversarial images. We evaluate the generated adversarial images over all models. The results are presented in Table 4. The generated adversarial images all have RMSDs around 17, which is lower than the 22 to 23 of the optimization-based approach using a single model (see Table 1 for comparison). When the adversarial images are evaluated over models which are not used to generate the attack, the accuracy is no greater than 6%. For reference, the corresponding accuracies for all approaches evaluated in Section 3 using one single model are at least 12%. Our experiments demonstrate that the ensemble-based approaches can generate almost perfectly transferable adversarial images.

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
-ResNet-152   17.17       0%         0%          0%        0%       0%
-ResNet-101   17.25       0%         1%          0%        0%       0%
-ResNet-50    17.25       0%         0%          2%        0%       0%
-VGG-16       17.80       0%         0%          0%        6%       0%
-GoogLeNet    17.41       0%         0%          0%        0%       5%

Table 4: Accuracy of non-targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) corresponds to the accuracy of the attack generated using the four models excluding model i (row) when evaluated over model j (column). In each row, the minus sign "−" indicates that the model of the row is not used when generating the attacks. Results of top-5 accuracy can be found in our online technical report: Liu et al. (2016).

Fast gradient-based approach. The results for non-targeted fast gradient-based approaches applied to the ensemble can be found in our online technical report: Liu et al. (2016).
We observe that the diagonal values are not zero, which is the same as what we observed in the results for FG and FGS applied to a single model. We hypothesize that a potential reason is that the gradient directions of different models in the ensemble are orthogonal to each other, as we will illustrate in Section 6. In this case, the gradient direction of the ensemble is almost orthogonal to that of each model in the ensemble. Therefore, searching along this direction may require large distortion to reach adversarial examples.

For targeted adversarial examples generated using FG and FGS based on an ensemble model, their transferability is no better than that of the ones generated using a single model. The results can be found in our online technical report: Liu et al. (2016). We hypothesize the same reason to explain this: there are only few possible target labels in total in the 1-D subspace.

6 GEOMETRIC PROPERTIES OF DIFFERENT MODELS

In this section, we show some geometric properties of the models to try to better understand transferable adversarial examples. Prior works also try to understand the geometric properties of adversarial examples theoretically (Fawzi et al. (2016)) or empirically (Goodfellow et al. (2014)). In this work, we examine large models trained over a large dataset with 1000 labels, whose geometric properties have never been examined before. This allows us to make new observations to better understand the models and their adversarial examples.

The gradient directions of different models in our evaluation are almost orthogonal to each other. We study whether the adversarial directions of different models align with each other. We calculate the cosine of the angle between the gradient directions of different models (a sketch of this computation follows below), and the results can be found in our online technical report: Liu et al. (2016).
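A minimal sketch of that cosine computation, assuming PyTorch models and the non-targeted loss of Section 2.2; it is illustrative rather than the authors' code.

```python
import torch

def gradient_cosine(model_a, model_b, x, y):
    # Cosine of the angle between the two models' gradient directions at
    # the same image x with ground truth label y; a value near 0 means
    # the two directions are (almost) orthogonal.
    units = []
    for model in (model_a, model_b):
        xg = x.clone().detach().requires_grad_(True)
        loss = torch.log(1.0 - model(xg)[0, y])     # l(1_y, J(x))
        g = torch.autograd.grad(loss, xg)[0].flatten()
        units.append(g / g.norm())
    return torch.dot(units[0], units[1]).item()
```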
We observe that all non-diagonal values are close to 0, which indicates that for most images, their gradient directions with respect to different models are orthogonal to each other.

Decision boundaries of the non-targeted approaches using a single model. We study the decision boundary of different models to understand why adversarial examples transfer. We choose two normalized orthogonal directions δ1, δ2, one being the gradient direction of VGG-16 and the other being randomly chosen. Each point (u, v) in this 2-D plane corresponds to the image x + uδ1 + vδ2, where x is the pixel value vector of the original image. For each model, we plot the label of the image corresponding to each point, and obtain Figure 3 using the image in Figure 2.

Figure 2: The example image used to study the decision boundary. Its ID in the ILSVRC 2012 validation set is 49443, and its ground truth label is "anemone fish."

Figure 3: Decision regions of different models (VGG-16, ResNet-50, ResNet-101, ResNet-152, GoogLeNet; zoom-in and zoom-out views). We pick the same two directions for all plots: one is the gradient direction of VGG-16 (x-axis), and the other is a random orthogonal direction (y-axis). Each point in the spanned plane shows the predicted label of the image generated by adding the corresponding noise to the original image (e.g., the origin corresponds to the predicted label of the original image). The units of both axes are 1 pixel values. All sub-figures plot the regions on the spanned plane using the same color for the same label. The image is shown in Figure 2.

We can observe that for all models, the region within which each model can predict the image correctly is limited to the central area. Also, along the gradient direction, the classifiers are soon misled. One interesting finding is that along this gradient direction, the first misclassified label for the three ResNet models (corresponding to the light green region) is the label "orange". A more detailed study can be found in our online technical report: Liu et al. (2016). When we look at the zoom-out figures, however, the labels of images that are far away from the original one are different for different models, even among ResNet models.

On the other hand, in Table 5, we show the total number of regions in each plane. In fact, there are at most 21 different regions in any of the planes. Compared with the 1,000 total categories in ImageNet, this is only 2.1% of all categories. That means that for all the other 97.9% of labels, no targeted adversarial example exists in the plane. Such a phenomenon partially explains why fast gradient-based approaches can hardly find targeted adversarial images.

Model         VGG-16   ResNet-50   ResNet-101   ResNet-152   GoogLeNet
# of labels     10          9           21           10           21

Table 5: The number of all possible predicted labels for each model in the same plane described in Figure 3.

Further, in Figure 4, we draw the decision boundaries of all models on the same plane as described above (a sketch of how such decision-region plots can be produced follows below).

Figure 4: The decision boundary separating the region within which all points are classified as the ground truth label (encircled by each closed curve) from the rest. The plane is the same one described in Figure 3. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel values.
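As referenced above, a minimal sketch of how the label map over the spanned plane can be produced, assuming NumPy, a top-1 `predict` function, and two orthonormal direction vectors d1 (the normalized VGG-16 gradient) and d2 (a random direction orthogonalized against d1); the grid range is illustrative.

```python
import numpy as np

def decision_region(predict, x, d1, d2, radius=20, step=1.0):
    # Scan the plane x + u*d1 + v*d2 (units: pixel values) and record the
    # top-1 label at every grid point, as in the zoom-in view of Figure 3.
    us = np.arange(-radius, radius + step, step)
    labels = np.empty((len(us), len(us)), dtype=int)
    for i, u in enumerate(us):
        for j, v in enumerate(us):
            point = np.clip(x + u * d1 + v * d2, 0.0, 255.0)
            labels[i, j] = predict(point)
    return labels
```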
We can observe that:
- The boundaries align with each other very well. This partially explains why non-targeted adversarial images can transfer among models.
- The boundary diameters along the gradient direction are smaller than the ones along the random direction. A potential reason is that moving a variable along its gradient direction can change the loss function (i.e., the probability of the ground truth label) significantly. Therefore, along the gradient direction it takes fewer steps to move out of the ground truth region than along a random direction.
- An interesting finding is that even though we move left along the x-axis, which is equivalent to maximizing the ground truth's prediction probability, we also reach the boundary much sooner than by moving along a random direction. We attribute this to the non-linearity of the loss function: when the distortion is larger, the gradient direction also changes dramatically. In this case, moving along the original gradient direction no longer increases the probability of predicting the ground truth label (details can be found in our online technical report: Liu et al. (2016)).
- As for the VGG-16 model, there is a small hole within the region corresponding to the ground truth. This may partially explain why non-targeted adversarial images with small distortion exist, but do not transfer well. This hole does not exist in the other models' decision planes. In this case, non-targeted adversarial images in this hole do not transfer.

Decision boundaries of the targeted ensemble-based approaches. In addition, we choose the targeted adversarial direction of the ensemble of all models except ResNet-101 and a random orthogonal direction, and we plot decision boundaries on the plane spanned by these two direction vectors in Figure 5. We observe that the regions of images which are predicted as the target label align well for the four models in the ensemble. However, the model not used to generate the adversarial image, i.e., ResNet-101, also has a non-empty region within which the prediction is successfully misled to the target label, although the area is much smaller. Meanwhile, the regions within the closed curves of the models almost share the same center.

Figure 5: The decision boundary separating the region within which all points are classified as the target label (encircled by each closed curve) from the rest. The plane is spanned by the targeted adversarial direction and a random orthogonal direction. The targeted adversarial direction is computed as the difference between the original image in Figure 2 and the adversarial image generated by the optimization-based approach for an ensemble. The ensemble contains all models except ResNet-101. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel values.

7 REAL WORLD EXAMPLE: ADVERSARIAL EXAMPLES FOR CLARIFAI.COM

Clarifai.com is a commercial company providing state-of-the-art image classification services. We have no knowledge about the dataset and types of models used behind Clarifai.com, except that we have black-box access to the services. The labels returned from Clarifai.com are also different from the categories in ILSVRC 2012.
We submit all 100 original images to Clarifai.com, and the returned labels are correct based on a subjective measure.

We also submit 400 adversarial images in total, of which 200 are targeted adversarial examples and the remaining 200 are non-targeted ones. Of the 200 targeted adversarial images, 100 are generated using the optimization-based approach based on VGG-16 (the same ones evaluated in Table 2), and the other 100 are generated using the optimization-based approach based on an ensemble of all models except ResNet-152 (the same ones evaluated in Table 3). The 200 non-targeted adversarial examples are generated similarly (the same ones evaluated in Tables 1 and 4).

For non-targeted adversarial examples, we observe that for both the ones generated using VGG-16 and those generated using the ensemble, most of them can transfer to Clarifai.com.

More importantly, a large proportion of our targeted adversarial examples are misclassified by Clarifai.com as well. We observe that 57% of the targeted adversarial examples generated using VGG-16, and 76% of the ones generated using the ensemble, can mislead Clarifai.com into predicting labels irrelevant to the ground truth.

Further, our experiment shows that for targeted adversarial examples, 18% of those generated using the ensemble model can be predicted as labels close to the target label by Clarifai.com. The corresponding number for the targeted adversarial examples generated using VGG-16 is 2%. Considering that in the case of attacking Clarifai.com the labels given by the target model are different from those given by our models, it is fairly surprising to see that when using the ensemble-based approach, there is still a considerable proportion of our targeted adversarial examples that can mislead this black-box model into making predictions semantically similar to our target labels. All these numbers are computed based on a subjective measure, and we include some examples in Table 6. More examples can be found in our online technical report: Liu et al. (2016).

Table 6: Original images and adversarial images evaluated over Clarifai.com. Each entry lists the true label, the Clarifai.com results for the original image, the target label, and the Clarifai.com results for the targeted adversarial example. For labels returned from Clarifai.com, we sort the labels firstly by rareness (how many times a label appears in the Clarifai.com results for all adversarial images and original images) and secondly by confidence. Only the top 5 labels are provided.

- True label: viaduct. Original image results: bridge, sight, arch, river, sky. Target label: window screen. Adversarial example results: window, wall, old, decoration, design.
- True label: hip, rose hip, rosehip. Original image results: fruit, fall, food, little, wildlife. Target label: stupa, tope. Adversarial example results: Buddha, gold, temple, celebration, artistic.
- True label: dogsled, dog sled, dog sleigh. Original image results: group together, four, sledge, sled, enjoyment. Target label: hip, rose hip, rosehip. Adversarial example results: cherry, branch, fruit, food, season.
- True label: pug, pug-dog. Original image results: pug, friendship, adorable, purebred, sit. Target label: sea lion. Adversarial example results: sea seal, ocean, head, sea, cute.
- True label: Old English sheepdog, bobtail. Original image results: poodle, retriever, loyalty, sit, two. Target label: abaya. Adversarial example results: veil, spirituality, religion, people, illustration.
- True label: maillot, tank suit. Original image results: beach, woman, adult, wear, portrait. Target label: amphibian, amphibious vehicle. Adversarial example results: transportation system, vehicle, man, print, retro.
- True label: patas, hussar monkey, Erythrocebus patas. Original image results: primate, monkey, safari, sit, looking. Target label: bee eater. Adversarial example results: ornithology, avian, beak, wing, feather.

8 CONCLUSION

In this work, we are the first to conduct an extensive study of the transferability of both non-targeted and targeted adversarial examples generated using different approaches over large models and a large scale dataset.
Our results confirm that the transferability of non-targeted adversarial examples is prominent even for large models and a large scale dataset. On the other hand, we find that it is hard to use existing approaches to generate targeted adversarial examples whose target labels can transfer. We develop novel ensemble-based approaches, and demonstrate that they can generate transferable targeted adversarial examples with a high success rate. Meanwhile, these new approaches exhibit better performance on generating non-targeted transferable adversarial examples than previous work. We also show that both non-targeted and targeted adversarial examples generated using our new approaches can successfully attack Clarifai.com, which is a black-box image classification system. Furthermore, we study some geometric properties to better understand the transferable adversarial examples.

ACKNOWLEDGMENTS

This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. | Syhdnc0Qx | interesting and insightful work on adversarial examples for deep CNNs for image classification | 7: Good paper, accept | This paper presents an experimental study of the robustness of state-of-the-art CNNs to different types of "attacks" in the context of image classification. Specifically, an attack aims to fool the classification system with a specially corrupted image, i.e. making it misclassify the image as (1) any wrong class (non-targeted attack) or (2) a target class chosen in advance by the attacker (targeted attack). For instance, the attacker could corrupt an image of an ostrich in such a way that it would be classified as a megalith. Even though the attacker's agenda is not so clear in this example, it is still interesting to study the weaknesses of current systems in view of (1) improving them in general and (2) actual risks with e.g. autonomous vehicles.
The paper is mostly experimental. In short, it compares different strategies (already published in previous papers) for all popular networks (VGG, GoogLeNet, ResNet-50/101/152) and the two aforementioned types of attacks. The experiments are well conducted and clearly exposed. A convincing point is that attacks are also conducted on "clarifai.com", which is a black-box classification system. Some analysis and insightful explanations are also provided to help understand why CNNs are prone to such attacks (Section 6).
To sum up, the main findings are that non-targeted attacks are easy to perform, even on a black-box system. Targeted attacks are more difficult to realize with existing schemes, but the authors propose a new approach for this that vastly improves over existing attacks (even though it is still far from perfect: ~20% success rate on clarifai.com versus 2% with previous schemes).
Arguably, the paper still has some weaknesses:
- The authors are treating the 3 ResNet-based networks as different, yet they are clearly correlated. See Table 7 for instance. This is naturally expected because their architectures are similar (only their depth varies). Hence, it does not sound very fair to state that "One interesting finding is that [...] the first misclassified label (non-targeted) is the same for all models except VGG-16 and GoogLeNet.", i.e., the three ResNet-based networks.
- A subjective measure is employed to evaluate the effectiveness of the attacks on the black box system. While this is for a good reason (clarifai.com returns image labels that are different from ImageNet), it is not certain that the reported numbers are fair (even though the qualitative results look convincing).
- The novelty of the proposed approach (optimizing an ensemble of networks instead of a single network) is limited. However, this was not really the point of the paper, and it is effective, so it seems ok overall.
- The paper is quite long. This is expected because it is an extensive evaluation study, but still. I suggest the authors prune some near-duplicate content (e.g. Section 2.3 has a high overlap with Section 1, etc.).
- The paper would benefit from an additional discussion of the recent and related work of Fawzi et al. (NIPS'16) in Section 6. Indeed, the work of Fawzi et al. is mostly theoretical and well aligned with the experimental findings and observations (in particular in Section 6).
To conclude, I think that this paper is somewhat useful for the community and could help to further improve existing architectures, as well as better assess their flaws and weaknesses. | 3: The reviewer is fairly confident that the evaluation is correct |
Sys6GJqxl | ICLR.cc/2017/conference | 2017 | Delving into Transferable Adversarial Examples and Black-box Attacks | ["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"] | An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system. | ["Computer vision", "Deep learning", "Applications"] | ABSTRACT

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.

1 INTRODUCTION

Recent research has demonstrated that for a deep architecture, it is easy to generate adversarial examples, which are close to the original ones but are misclassified by the deep architecture (Szegedy et al. (2013); Goodfellow et al. (2014)). The existence of such adversarial examples may have severe consequences, which hinders vision-understanding-based applications, such as autonomous driving. Most of these studies require explicit knowledge of the underlying models.
It remains an open question how to efficiently find adversarial examples for a black-box model.

Several works have demonstrated that some adversarial examples generated for one model may also be misclassified by another model. Such a property is referred to as transferability, which can be leveraged to perform black-box attacks. This property has been exploited by constructing a substitute of the black-box model, and generating adversarial instances against the substitute to attack the black-box system (Papernot et al. (2016a;b)). However, so far, transferability is mostly examined over small datasets, such as MNIST (LeCun et al. (1998)) and CIFAR-10 (Krizhevsky & Hinton (2009)). Transferability over large scale datasets, such as ImageNet (Russakovsky et al. (2015)), has yet to be better understood.

In this work, we are the first to conduct an extensive study of the transferability of different adversarial instance generation strategies applied to different state-of-the-art models trained over a large scale dataset. In particular, we study two types of adversarial examples: (1) non-targeted adversarial examples, which can be misclassified by a network, regardless of what the misclassified labels may be; and (2) targeted adversarial examples, which can be classified by a network as a target label. We examine several existing approaches searching for adversarial examples based on a single model. While non-targeted adversarial examples are more likely to transfer, we observe few targeted adversarial examples that are able to transfer with their target labels.

(* Work is done while visiting UC Berkeley.)

We further propose a novel strategy to generate transferable adversarial images using an ensemble of multiple models. In our evaluation, we observe that this new strategy can generate non-targeted adversarial instances with better transferability than other methods examined in this work. Also, for the first time, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels.

We study geometric properties of the models in our evaluation. In particular, we show that the gradient directions of different models are orthogonal to each other. We also show that decision boundaries of different models align well with each other, which partially illustrates why adversarial examples can transfer.

Last, we study whether generated adversarial images can attack Clarifai.com, a commercial company providing state-of-the-art image classification services. We have no knowledge about the training dataset and the types of models used by Clarifai.com; meanwhile, the label set of Clarifai.com is quite different from ImageNet's. We show that even in this case, both non-targeted and targeted adversarial images transfer to Clarifai.com. This is the first work documenting the success of generating both non-targeted and targeted adversarial examples for a black-box state-of-the-art online image classification system, whose model and training dataset are unknown to the attacker.

Contributions and organization. We summarize our main contributions as follows:
- For ImageNet models, we show that while existing approaches are effective to generate non-targeted transferable adversarial examples (Section 3), only few targeted adversarial examples generated by existing methods can transfer (Section 4).
- We propose novel ensemble-based approaches to generate adversarial examples (Section 5). Our approaches enable a large portion of targeted adversarial examples to transfer among multiple models for the first time.
  Our approaches enable a large portion of targeted adversarial examples to transfer among multiple models for the first time.
- We are the first to show that targeted adversarial examples generated for models trained on ImageNet can transfer to a black-box system, i.e., Clarifai.com, whose model, training data, and label set are unknown to us (Section 7). In particular, Clarifai.com's label set is very different from ImageNet's.
- We conduct the first analysis of geometric properties for large models trained over ImageNet (Section 6), and the results reveal several interesting findings, such as that the gradient directions of different models are orthogonal to each other.
In the following, we first discuss related work, and then present the background knowledge and experiment setup in Section 2. We then present each of our experiments and conclusions in the corresponding section as mentioned above.
Related work. Transferability of adversarial examples was first examined by Szegedy et al. (2013), which studied the transferability (1) between different models trained over the same dataset, and (2) between the same or a different model trained over disjoint subsets of a dataset. However, Szegedy et al. (2013) only studied MNIST.
The study of transferability was followed by Goodfellow et al. (2014), which attributed the phenomenon of transferability to the adversarial perturbation being highly aligned with the weight vector of the model. Again, this hypothesis was tested using the MNIST and CIFAR-10 datasets. We show that this is not the case for models trained over ImageNet.
Papernot et al. (2016a;b) examined constructing a substitute model to attack a black-box target model. To train the substitute model, they developed a technique that synthesizes a training set and annotates it by querying the target model for labels. They demonstrate that using this approach, black-box attacks are feasible towards machine learning services hosted by Amazon, Google, and MetaMind. Further, Papernot et al. (2016a) studied the transferability between deep neural networks and other models such as decision trees, kNN, etc.
Our work differs from Papernot et al. (2016a;b) in three aspects. First, in these works, only the model and the training process are a black box, but the training set and the test set are controlled by the attacker; in contrast, we attack Clarifai.com, whose model, training data, training process, and even the test label set are unknown to the attacker. Second, the datasets studied in these works are small scale, i.e., MNIST and GTSRB (Stallkamp et al. (2012)); in our work, we study the transferability over larger models and a larger dataset, i.e., ImageNet. Third, to attack black-box machine learning systems, we do not query the systems for constructing the substitute model ourselves.
In a concurrent and independent work, Moosavi-Dezfooli et al. (2016) showed the existence of a universal perturbation for each model, which can transfer across different images. They also show that the adversarial images generated using these universal perturbations can transfer across different models on ImageNet. However, they only examine non-targeted transferability, while our work studies both non-targeted and targeted transferability over ImageNet.
2 ADVERSARIAL DEEP LEARNING AND TRANSFERABILITY
2.1 THE ADVERSARIAL DEEP LEARNING PROBLEM
We assume a classifier f(x) outputs a category (or a label) as the prediction.
Given an original image x with ground truth label y, the adversarial deep learning problem is to seek adversarial examples for the classifier f(x). Specifically, we consider two classes of adversarial examples. A non-targeted adversarial example x^* is an instance that is close to x, in which case x^* should have the same ground truth as x, while f(x^*) \neq y. For the problem to be non-trivial, we assume f(x) = y without loss of generality. A targeted adversarial example x^* is close to x and satisfies f(x^*) = y^*, where y^* is a target label specified by the adversary, and y^* \neq y.
2.2 APPROACHES FOR GENERATING ADVERSARIAL EXAMPLES
In this work, we consider three classes of approaches for generating adversarial examples: optimization-based approaches, fast gradient approaches, and fast gradient sign approaches. Each class has a non-targeted and a targeted version.
2.2.1 APPROACHES FOR GENERATING NON-TARGETED ADVERSARIAL EXAMPLES
Formally, given an image x with ground truth y = f(x), searching for a non-targeted adversarial example can be modeled as searching for an instance x^* that satisfies the following constraints:
    f(x^*) \neq y                              (1)
    d(x, x^*) \leq B                           (2)
where d(\cdot, \cdot) is a metric to quantify the distance between an original image and its adversarial counterpart, and B, called the distortion, is an upper bound placed on this distance. Without loss of generality, we consider model f to be composed of a network J(x), which outputs the probability for each category, so that f outputs the category with the highest probability.
Optimization-based approach. One approach is to approximate the solution to the following optimization problem:
    \argmin_{x^*} \; \lambda d(x, x^*) - \ell(1_y, J(x^*))        (3)
where 1_y is the one-hot encoding of the ground truth label y, \ell is a loss function to measure the distance between the prediction and the ground truth, and \lambda is a constant to balance constraints (2) and (1), which is empirically determined. Here, the loss function \ell is used to approximate constraint (1), and its choice can affect the effectiveness of searching for an adversarial example. In this work, we choose \ell(u, v) = \log(1 - u \cdot v), which is shown to be effective by Carlini & Wagner (2016).
Fast gradient sign (FGS). Goodfellow et al. (2014) proposed the fast gradient sign (FGS) method so that the gradient needs to be computed only once to generate an adversarial example. FGS can be used to generate adversarial images that meet an L_\infty norm bound. Formally, non-targeted adversarial examples are constructed as
    x^* \leftarrow \mathrm{clip}(x + B \, \mathrm{sgn}(\nabla_x \ell(1_y, J(x))))
Here, \mathrm{clip}(x) is used to clip each dimension of x to the range of pixel values, i.e., [0, 255] in this work. We make a slight variation to choose \ell(u, v) = \log(1 - u \cdot v), which is the same as used in the optimization-based approach.
Fast gradient (FG). The fast gradient approach (FG) is similar to FGS, but instead of moving along the gradient sign direction, FG moves along the gradient direction. In particular, we have
    x^* \leftarrow \mathrm{clip}\big(x + B \, \nabla_x \ell(1_y, J(x)) / \|\nabla_x \ell(1_y, J(x))\|\big)
Here, we assume the distance metric in constraint (2), d(x, x^*) = \|x - x^*\|, is a norm of x - x^*. The term \mathrm{sgn}(\nabla_x \ell) in FGS is replaced by \nabla_x \ell / \|\nabla_x \ell\| to meet this distance constraint.
We call both FGS and FG fast gradient-based approaches.
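To make the two update rules concrete, below is a minimal PyTorch sketch of the non-targeted FGS and FG steps using the loss \ell(u, v) = \log(1 - u \cdot v) defined above. The sketch is our own illustration, not the authors' released implementation; it assumes the model returns softmax probabilities over classes.

```python
import torch

def nontargeted_fast_gradient(model, x, y, B, mode="fgs"):
    # model: network J(.) assumed to return softmax probabilities, shape (1, K)
    # x: image tensor with pixel values in [0, 255], shape (1, C, H, W)
    # y: ground-truth label index; B: distortion bound
    x = x.clone().detach().requires_grad_(True)
    probs = model(x)
    # l(1_y, J(x)) = log(1 - J(x)[y]); small epsilon keeps the log finite
    loss = torch.log(1.0 - probs[0, y] + 1e-12)
    loss.backward()
    g = x.grad
    if mode == "fgs":
        step = g.sign()           # FGS: move along the gradient sign direction
    else:
        step = g / g.norm(p=2)    # FG: move along the L2-normalized gradient
    return torch.clamp(x + B * step, 0.0, 255.0).detach()
```

Moving along +grad of log(1 - p_y) decreases the probability of the ground truth label, which is exactly the non-targeted objective.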
2.2.2 APPROACHES FOR GENERATING TARGETED ADVERSARIAL EXAMPLES
A targeted adversarial image x^* is similar to a non-targeted one, but constraint (1) is replaced by
    f(x^*) = y^*                               (4)
where y^* is the target label given by the adversary. For the optimization-based approach, we approximate the solution by solving the following dual objective:
    \argmin_{x^*} \; \lambda d(x, x^*) + \ell'(1_{y^*}, J(x^*))        (5)
In this work, we choose the standard cross entropy loss \ell'(u, v) = -\sum_i u_i \log v_i. For FGS and FG, we construct adversarial examples as follows:
    x^* \leftarrow \mathrm{clip}(x - B \, \mathrm{sgn}(\nabla_x \ell'(1_{y^*}, J(x))))      (FGS)
    x^* \leftarrow \mathrm{clip}\big(x - B \, \nabla_x \ell'(1_{y^*}, J(x)) / \|\nabla_x \ell'(1_{y^*}, J(x))\|\big)      (FG)
where \ell' is the same as the one used for the optimization-based approach.
2.3 EVALUATION METHODOLOGY
For the rest of the paper, we focus on examining the transferability among state-of-the-art models trained over ImageNet (Russakovsky et al. (2015)). In this section, we detail the models to be examined, the dataset to be evaluated, and the measurements to be used.
Models. We examine five networks: ResNet-50, ResNet-101, ResNet-152 (He et al. (2015)) [1], GoogLeNet (Szegedy et al. (2014)) [2], and VGG-16 (Simonyan & Zisserman (2014)) [3]. We retrieve the pre-trained models for each network online. The performance of these models on the ILSVRC 2012 (Russakovsky et al. (2015)) validation set can be found in our online technical report: Liu et al. (2016). We choose these models to study the transferability between homogeneous architectures (i.e., ResNet models) and heterogeneous architectures.
Dataset. It is less meaningful to examine the transferability of an adversarial image between two models which cannot classify the original image correctly. Therefore, from the ILSVRC 2012 validation set, we randomly choose 100 images that can be classified correctly by all five models in our examination. These 100 images form our test set. To perform targeted attacks, we manually choose a target label for each image, so that its semantics is far from the ground truth. The images and target labels in our evaluation can be found on our website [4].
Measuring transferability. Given two models, we measure the non-targeted transferability by computing the percentage of the adversarial examples generated for one model that are classified correctly by the other. We refer to this percentage as accuracy. A lower accuracy means better non-targeted transferability. We measure the targeted transferability by computing the percentage of the adversarial examples generated for one model that are classified as the target label by the other model. We refer to this percentage as matching rate. A higher matching rate means better targeted transferability. For clarity, the reported results are only based on top-1 accuracy. The top-5 counterparts can be found in our online technical report: Liu et al. (2016).
[1] https://github.com/KaimingHe/deep-residual-networks
[2] https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
[3] https://gist.github.com/ksimonyan/211839e770f7b538e2d8
[4] https://github.com/sunblaze-ucb/transferability-advdnn-pub
Distortion. Besides transferability, another important factor is the distortion between adversarial images and the original ones. We measure the distortion by root mean square deviation (RMSD), which is computed as d(x^*, x) = \sqrt{\sum_i (x^*_i - x_i)^2 / N}, where x^* and x are the vector representations of an adversarial image and the original one respectively, N is the dimensionality of x and x^*, and x_i denotes the pixel value of the i-th dimension of x, within range [0, 255], and similarly for x^*_i.
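The RMSD is a direct transcription of the formula above; the following NumPy helper is our own sketch, assuming x and x_adv are arrays of pixel values in [0, 255].

```python
import numpy as np

def rmsd(x_adv, x):
    # d(x*, x) = sqrt( sum_i (x*_i - x_i)^2 / N ), with N the number of pixels
    diff = np.asarray(x_adv, dtype=np.float64) - np.asarray(x, dtype=np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```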
3 NON-TARGETED ADVERSARIAL EXAMPLES
In this section, we examine different approaches for generating non-targeted adversarial images.
3.1 OPTIMIZATION-BASED APPROACH
To apply the optimization-based approach to a single model, we initialize x^* to be x and use Adam Optimizer (Kingma & Ba (2014)) to optimize Objective (3). We find that we can tune the RMSD by adjusting the learning rate of Adam and \lambda. We find that, for each model, we can use a small learning rate to generate adversarial images with small RMSD, i.e., < 2, with any \lambda. In fact, we find that when initializing x^* with x, Adam Optimizer will search for an adversarial example around x, even when we set \lambda to be 0, i.e., not restricting the distance between x^* and x. Therefore, we set \lambda to be 0 for all experiments using optimization-based approaches throughout the paper. Although these adversarial examples with small distortions can successfully fool the target model, they cannot transfer well to other models (details can be found in our online technical report: Liu et al. (2016)).
We increase the learning rate to allow the optimization algorithm to search for adversarial images with larger distortion. In particular, we set the learning rate to be 4. We run Adam Optimizer for 100 iterations to generate the adversarial images, and we observe that the loss converges after 100 iterations. An alternative optimization-based approach leading to similar results can be found in our online technical report: Liu et al. (2016).
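For illustration, a minimal PyTorch sketch of this procedure might look as follows. The hyperparameters (\lambda = 0, learning rate 4, 100 iterations) come from the text above; the code structure, and the assumption that the model returns softmax probabilities, are ours.

```python
import torch

def optimization_attack(model, x, y, lr=4.0, steps=100):
    # Non-targeted optimization-based attack with lambda = 0:
    # starting from x* = x, minimize -l(1_y, J(x*)) = -log(1 - J(x*)[y]).
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        probs = model(x_adv)
        loss = -torch.log(1.0 - probs[0, y] + 1e-12)
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 255.0)   # stay within valid pixel values
    return x_adv.detach()
```

Because Adam starts at x and the loss gradient is local, the iterates stay near the original image even without an explicit distance penalty, which is consistent with the observation above that \lambda = 0 suffices.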
Non-targeted adversarial examples transfer. We generate non-targeted adversarial examples on one network, but evaluate them on another; Table 1 Panel A presents the results. From the table, we can observe that
- The diagonal contains all 0 values. This says that all adversarial images generated for one model can mislead the same model.
- A large proportion of non-targeted adversarial images generated for one model using the optimization-based approach can transfer to another.
- Although the three ResNet models share similar architectures which differ only in the hyperparameters, adversarial examples generated against a ResNet model do not necessarily transfer to another ResNet model better than to non-ResNet models. For example, the adversarial examples generated for VGG-16 have lower accuracy on ResNet-50 than those generated for ResNet-152 or ResNet-101.

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
  ResNet-152  22.83      0%         13%         18%       19%      11%
  ResNet-101  23.81     19%          0%         21%       21%      12%
  ResNet-50   22.86     23%         20%          0%       21%      18%
  VGG-16      22.51     22%         17%         17%        0%       5%
  GoogLeNet   22.58     39%         38%         34%       19%       0%
  Panel A: Optimization-based approach

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
  ResNet-152  23.45      4%         13%         13%       20%      12%
  ResNet-101  23.49     19%          4%         11%       23%      13%
  ResNet-50   23.49     25%         19%          5%       25%      14%
  VGG-16      23.73     20%         16%         15%        1%       7%
  GoogLeNet   23.45     25%         25%         17%       19%       1%
  Panel B: Fast gradient approach

Table 1: Transferability of non-targeted adversarial images generated between pairs of models. The first column indicates the average RMSD of all adversarial images generated for the model in the corresponding row. The cell (i, j) indicates the accuracy of the adversarial images generated for model i (row) evaluated over model j (column). Results of top-5 accuracy can be found in our online technical report: Liu et al. (2016).

3.2 FAST GRADIENT-BASED APPROACHES
We then examine the effectiveness of fast gradient-based approaches. A good property of fast gradient-based approaches is that all generated adversarial examples lie in a 1-D subspace. Therefore, we can easily approximate the minimal distortion in this subspace of transferable adversarial examples between two models. In the following, we first control the RMSD to study the effectiveness of fast gradient-based approaches. Second, we study the minimal transferable distortions of fast gradient-based approaches.
3.2.1 EFFECTIVENESS AND TRANSFERABILITY OF THE FAST GRADIENT-BASED APPROACHES
Since the distortion B and the RMSD of the generated adversarial images are highly correlated, we can choose this hyperparameter B to generate adversarial images with a given RMSD. In Table 1 Panel B, we generate adversarial images using FG such that the average RMSD is almost the same as those generated using the optimization-based approach. We observe that the diagonal values in the table are all positive, which means that FG cannot fully mislead the models. A potential reason is that FG can be viewed as approximating the optimization, but is tailored for speed over accuracy. On the other hand, the values of the non-diagonal cells in the table, which correspond to the accuracies of adversarial images generated for one model but evaluated on another, are comparable with or less than their counterparts in the optimization-based approach. This shows that non-targeted adversarial examples generated by FG exhibit transferability as well.
We also evaluate FGS, but the transferability of the generated images is worse than that of the ones generated using either FG or optimization-based approaches. The results can be found in our online technical report: Liu et al. (2016). They show that when the RMSD is around 23, the accuracies of the adversarial images generated by FGS are greater than their counterparts for FG. We hypothesize that this fact is the reason why the transferability of FGS is worse.
3.2.2 ADVERSARIAL IMAGES WITH MINIMAL TRANSFERABLE RMSD
For an image x and two models M_1, M_2, we can approximate the minimal distortion B along a direction \delta, such that x_B = x + B\delta generated for M_1 is adversarial for both M_1 and M_2. Here \delta is the direction, i.e., \mathrm{sgn}(\nabla_x \ell) for FGS, and \nabla_x \ell / \|\nabla_x \ell\| for FG.
We refer to the minimal transferable RMSD from M_1 to M_2 using FG (or FGS) as the RMSD of a transferable adversarial example x_B with the minimal transferable distortion B from M_1 to M_2 using FG (or FGS). The minimal transferable RMSD can illustrate the tradeoff between distortion and transferability.
In the following, we approximate the minimal transferable RMSD through a linear search, sampling B at every 0.1 step. We choose the linear-search method rather than a binary-search method to determine the minimal transferable RMSD because the adversarial images generated from an original image may come from multiple intervals. The experiment can be found in our online technical report: Liu et al. (2016).
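A sketch of this linear scan is given below. It is our own illustration: model1_fooled and model2_fooled are assumed helper callables that report whether a given image is misclassified by the respective model, rmsd is the helper defined after Section 2.3, and the scan ceiling max_B is an assumption chosen to comfortably cover the distortions observed in our experiments.

```python
import numpy as np

def minimal_transferable_rmsd(model1_fooled, model2_fooled, x, direction,
                              step=0.1, max_B=200.0):
    # Scan B = 0, 0.1, 0.2, ... and return the first distortion for which
    # x_B = clip(x + B * direction) fools both models, with its RMSD.
    # direction: sgn(grad) for FGS, or grad / ||grad|| for FG.
    # A linear scan is used rather than binary search because the adversarial
    # set along a direction may consist of multiple disjoint intervals.
    B = 0.0
    while B <= max_B:
        x_B = np.clip(x + B * direction, 0.0, 255.0)
        if model1_fooled(x_B) and model2_fooled(x_B):
            return B, rmsd(x_B, x)
        B += step
    return None, None   # no transferable example found along this direction
```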
Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56 respectively. Further, the FG method can generate transferable attacks with smaller RMSD than FGS. A potential reason is that while FGS minimizes the distortion's L_\infty norm, FG minimizes its L_2 norm, which is proportional to RMSD.

[Figure 1: The CDF of the minimal transferable RMSD from VGG-16 to ResNet-152 using (a) Fast Gradient and (b) Fast Gradient Sign. The green line labels the median minimal transferable RMSD, while the red line labels the minimal transferable RMSD needed to reach the 90% percentage.]

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
  ResNet-152  23.13    100%          2%          1%        1%       1%
  ResNet-101  23.16      3%        100%          3%        2%       1%
  ResNet-50   23.06      4%          2%        100%        1%       1%
  VGG-16      23.59      2%          1%          2%      100%       1%
  GoogLeNet   22.87      1%          1%          0%        1%     100%

Table 2: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the matching rate of the targeted adversarial images generated for model i (row) when evaluated on model j (column). The top-5 results can be found in our online technical report: Liu et al. (2016).

3.3 COMPARISON WITH RANDOM PERTURBATIONS
We also evaluate the test accuracy when we add Gaussian noise to the 100 images in our test set. The concrete results can be found in our online technical report: Liu et al. (2016), where we show that the "transferability" of this approach is significantly worse than that of either the optimization-based approaches or the fast gradient-based approaches.
4 TARGETED ADVERSARIAL EXAMPLES
In this section, we examine the transferability of targeted adversarial images. Table 2 presents the results for the optimization-based approach. We observe that (1) the prediction of targeted adversarial images can match the target labels when evaluated on the same model that is used to generate the adversarial examples; but (2) the targeted adversarial images can rarely be predicted as the target labels by a different model. In the latter case, we say that the target labels do not transfer. Even when we increase the distortion, we still do not observe improvements in making the target labels transfer. Some results can be found in our online technical report: Liu et al. (2016). Even if we compute the matching rate based on top-5 accuracy, the highest matching rate is only 10%. The results can be found in our online technical report: Liu et al. (2016).
We also examine the targeted adversarial images generated by fast gradient-based approaches, and we observe that the target labels do not transfer either. The results can be found in our online technical report: Liu et al. (2016). In fact, most targeted adversarial images cannot mislead even the model for which they were generated into predicting the target labels, regardless of how large a distortion is used. We attribute this to the fact that the fast gradient-based approaches only search for attacks in a 1-D subspace. In this subspace, the set of possible predictions may contain only a small subset of all labels, which usually does not contain the target label. In Section 6, we study decision boundaries regarding this issue.
We also evaluate the matching rate of images with added Gaussian noise, as described in Section 3.3. However, we observe that the matching rate on any of the 5 models is 0%. Therefore, we conclude that by adding Gaussian noise, the attacker cannot generate successful targeted adversarial examples at all, let alone achieve targeted transferability.
5 ENSEMBLE-BASED APPROACHES
We hypothesize that if an adversarial image remains adversarial for multiple models, then it is more likely to transfer to other models as well. We develop techniques to generate adversarial images for multiple models. The basic idea is to generate adversarial images for the ensemble of the models. Formally, given k white-box models with softmax outputs J_1, ..., J_k, an original image x, and its ground truth y, the ensemble-based approach solves the following optimization problem (for a targeted attack):
    \argmin_{x^*} \; -\log\big(\big(\textstyle\sum_{i=1}^{k} \alpha_i J_i(x^*)\big) \cdot 1_{y^*}\big) + \lambda d(x, x^*)        (6)
where y^* is the target label specified by the adversary, \sum_i \alpha_i J_i(x^*) is the ensemble model, and the \alpha_i are the ensemble weights, with \sum_{i=1}^{k} \alpha_i = 1. Note that (6) is the targeted objective; the non-targeted counterpart can be derived similarly. In doing so, we hope the generated adversarial images remain adversarial for an additional black-box model J_{k+1}.
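A minimal PyTorch sketch of objective (6) is shown below. It is our own illustration under two assumptions: the models are assumed to output softmax probabilities, and, for brevity, the sketch optimizes the joint objective with a single Adam instance, whereas the procedure described next aggregates per-model Adam updates.

```python
import torch

def ensemble_targeted_loss(models, alphas, x_adv, x, y_target, lam=0.0):
    # Objective (6): -log( (sum_i alpha_i * J_i(x*)) . 1_{y*} ) + lambda * d(x, x*)
    # models: list of networks assumed to output softmax probabilities
    # alphas: ensemble weights summing to 1
    ens = sum(a * m(x_adv) for a, m in zip(alphas, models))
    target_term = -torch.log(ens[0, y_target] + 1e-12)
    dist_term = lam * torch.sqrt(torch.mean((x_adv - x) ** 2))  # RMSD as d(x, x*)
    return target_term + dist_term

def ensemble_attack(models, x, y_target, lr=8.0, steps=100):
    alphas = [1.0 / len(models)] * len(models)   # equal ensemble weights
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ensemble_targeted_loss(models, alphas, x_adv, x, y_target).backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 255.0)
    return x_adv.detach()
```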
We evaluate the effectiveness of the ensemble-based approach. For each of the five models, we treat it as the black-box model to attack, and generate adversarial images for the ensemble of the remaining four, which is considered white-box. We evaluate the generated adversarial images over all five models. Throughout the rest of the paper, we refer to the approaches evaluated in Sections 3 and 4 as the approaches using a single model, and to the ensemble-based approaches discussed in this section as the approaches using an ensemble model.
Optimization-based approach. We use Adam to optimize objective (6) with equal ensemble weights across all models in the ensemble to generate targeted adversarial examples. In particular, we set the learning rate of Adam to be 8 for each model. In each iteration, we compute the Adam update for each model, sum up the four updates, and add the aggregation onto the image. We run 100 iterations of updates, and we observe that the loss converges after 100 iterations. By doing so, for the first time, we observe a large proportion of targeted adversarial images whose target labels can transfer. The results are presented in Table 3. We observe that not all targeted adversarial images can be misclassified as the target labels by the models used in the ensemble. This suggests that while searching for an adversarial example for the ensemble model, there is no direct supervision to mislead any individual model in the ensemble to predict the target label. Further, from the diagonal numbers of the table, we observe that the transferability to ResNet models is better than to VGG-16 or GoogLeNet, when adversarial examples are generated against all models except the target model.
We also evaluate non-targeted adversarial images generated by the ensemble-based approach, and observe that they have almost perfect transferability. We use the same procedure as for the targeted version, except for the objective used to generate the adversarial images. We evaluate the generated adversarial images over all models. The results are presented in Table 4. The generated adversarial images all have RMSDs around 17, which is lower than the 22 to 23 of the optimization-based approach using a single model (see Table 1 for comparison). When the adversarial images are evaluated over models that were not used to generate the attack, the accuracy is no greater than 6%. For reference, the corresponding accuracies for all approaches evaluated in Section 3 using a single model are at least 12%. Our experiments demonstrate that the ensemble-based approaches can generate almost perfectly transferable adversarial images.
Fast gradient-based approach. The results for non-targeted fast gradient-based approaches applied to the ensemble can be found in our online technical report: Liu et al. (2016).
We observe that the diagonal values are not zero, which is the same as what we observed in the results for FG and FGS applied to a single model. We hypothesize that a potential reason is that the gradient directions of different models in the ensemble are orthogonal to each other, as we will illustrate in Section 6. In this case, the gradient direction of the ensemble is almost orthogonal to that of each model in the ensemble. Therefore, searching along this direction may require a large distortion to reach adversarial examples.

                RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
  -ResNet-152  30.68      38%         76%         70%       97%      76%
  -ResNet-101  30.76      75%         43%         69%       98%      73%
  -ResNet-50   30.26      84%         81%         46%       99%      77%
  -VGG-16      31.13      74%         78%         68%       24%      63%
  -GoogLeNet   29.70      90%         87%         83%       99%      11%

Table 3: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the percentage of the targeted adversarial images generated for the ensemble of the four models excluding model i (row) that are predicted as the target label by model j (column). In each row, the minus sign "-" indicates that the model of the row is not used when generating the attacks. Results of top-5 matching rate can be found in our online technical report: Liu et al. (2016).

                RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
  -ResNet-152  17.17       0%          0%          0%        0%       0%
  -ResNet-101  17.25       0%          1%          0%        0%       0%
  -ResNet-50   17.25       0%          0%          2%        0%       0%
  -VGG-16      17.80       0%          0%          0%        6%       0%
  -GoogLeNet   17.41       0%          0%          0%        0%       5%

Table 4: Accuracy of non-targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) corresponds to the accuracy of the attack generated using the four models excluding model i (row) when evaluated over model j (column). In each row, the minus sign "-" indicates that the model of the row is not used when generating the attacks. Results of top-5 accuracy can be found in our online technical report: Liu et al. (2016).

For targeted adversarial examples generated using FG and FGS based on an ensemble model, the transferability is no better than for the ones generated using a single model. The results can be found in our online technical report: Liu et al. (2016). We hypothesize the same reason to explain this: there are only a few possible target labels in total in the 1-D subspace.
6 GEOMETRIC PROPERTIES OF DIFFERENT MODELS
In this section, we show some geometric properties of the models to try to better understand transferable adversarial examples. Prior works have also tried to understand the geometric properties of adversarial examples theoretically (Fawzi et al. (2016)) or empirically (Goodfellow et al. (2014)). In this work, we examine large models trained over a large dataset with 1000 labels, whose geometric properties have never been examined before. This allows us to make new observations to better understand the models and their adversarial examples.
The gradient directions of different models in our evaluation are almost orthogonal to each other. We study whether the adversarial directions of different models align with each other. We calculate the cosine value of the angle between the gradient directions of different models, and the results can be found in our online technical report: Liu et al. (2016). We observe that all non-diagonal values are close to 0, which indicates that for most images, their gradient directions with respect to different models are orthogonal to each other.
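This pairwise measurement is simple to reproduce in spirit; the sketch below is our own illustration, reusing the loss from Section 2 and assuming, as before, models that output softmax probabilities.

```python
import torch
import torch.nn.functional as F

def gradient_cosine(model_a, model_b, x, y):
    # Cosine of the angle between the input-gradient directions of two models
    # at the same image x; values near 0 indicate near-orthogonal directions.
    def input_grad(model):
        xi = x.clone().detach().requires_grad_(True)
        loss = torch.log(1.0 - model(xi)[0, y] + 1e-12)  # same l as Section 2
        loss.backward()
        return xi.grad.flatten()
    ga, gb = input_grad(model_a), input_grad(model_b)
    return F.cosine_similarity(ga, gb, dim=0).item()
```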
Decision boundaries of the non-targeted approaches using a single model. We study the decision boundaries of different models to understand why adversarial examples transfer. We choose two normalized orthogonal directions \delta_1, \delta_2: one is the gradient direction of VGG-16, and the other is randomly chosen. Each point (u, v) in this 2-D plane corresponds to the image x + u\delta_1 + v\delta_2, where x is the pixel value vector of the original image. For each model, we plot the label of the image corresponding to each point, and get Figure 3 using the image in Figure 2.

[Figure 2: The example image used to study the decision boundary. Its ID in the ILSVRC 2012 validation set is 49443, and its ground truth label is "anemone fish."]

[Figure 3: Decision regions of different models (VGG-16, ResNet-50, ResNet-101, ResNet-152, GoogLeNet), each shown as a zoom-in view (axes roughly from -20 to 20) and a zoom-out view (axes roughly from -100 to 100). We pick the same two directions for all plots: one is the gradient direction of VGG-16 (x-axis), and the other is a random orthogonal direction (y-axis). Each point in the spanned plane shows the predicted label of the image generated by adding the corresponding noise to the original image (e.g., the origin corresponds to the predicted label of the original image). The units of both axes are 1 pixel value. Each sub-figure plots the regions on the spanned plane using the same color for the same label. The image is the one in Figure 2.]
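Decision planes of this kind can be produced with a simple grid sweep. The sketch below is our illustrative version: predict is an assumed helper mapping an image to its top-1 label, and d1, d2 are the normalized directions described above.

```python
import numpy as np

def decision_plane(predict, x, d1, d2, extent=20.0, step=1.0):
    # Predicted label at every point x + u*d1 + v*d2 of the 2-D plane,
    # with u, v ranging over [-extent, extent] in pixel-value units.
    coords = np.arange(-extent, extent + step, step)
    labels = np.empty((len(coords), len(coords)), dtype=object)
    for i, v in enumerate(coords):
        for j, u in enumerate(coords):
            labels[i, j] = predict(np.clip(x + u * d1 + v * d2, 0.0, 255.0))
    return labels
```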
We can observe that for all models, the region within which each model predicts the image correctly is limited to the central area. Also, along the gradient direction, the classifiers are soon misled. One interesting finding is that along this gradient direction, the first misclassified label for the three ResNet models (corresponding to the light green region) is the label "orange". A more detailed study can be found in our online technical report: Liu et al. (2016). When we look at the zoom-out figures, however, the labels of images that are far away from the original one are different for different models, even among the ResNet models.
On the other hand, in Table 5, we show the total number of regions in each plane. In fact, there are at most 21 different regions in any plane. Compared with the 1,000 total categories in ImageNet, this is only 2.1% of all categories. That means that for the other 97.9% of labels, no targeted adversarial example exists in the plane. Such a phenomenon partially explains why fast gradient-based approaches can hardly find targeted adversarial images.

  Model        VGG-16  ResNet-50  ResNet-101  ResNet-152  GoogLeNet
  # of labels    10        9          21          10          21

Table 5: The number of all possible predicted labels for each model in the same plane described in Figure 3.

Further, in Figure 4, we draw the decision boundaries of all models on the same plane as described above.

[Figure 4: The decision boundary separating the region within which all points are classified as the ground truth label (encircled by each closed curve) from the rest, with one curve per model (VGG-16, ResNet-50, ResNet-101, ResNet-152, GoogLeNet). The plane is the same one described in Figure 3. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel value.]

We can observe that
- The boundaries align with each other very well. This partially explains why non-targeted adversarial images can transfer among models.
- The boundary diameters along the gradient direction are smaller than the ones along the random direction. A potential reason is that moving a variable along its gradient direction can change the loss function (i.e., the probability of the ground truth label) significantly. Therefore, along the gradient direction it takes fewer steps to move out of the ground truth region than along a random direction.
- An interesting finding is that even when we move left along the x-axis, which is equivalent to maximizing the ground truth's prediction probability, we also reach the boundary much sooner than when moving along a random direction. We attribute this to the non-linearity of the loss function: when the distortion is larger, the gradient direction also changes dramatically. In this case, moving along the original gradient direction no longer increases the probability of predicting the ground truth label (details can be found in our online technical report: Liu et al. (2016)).
- For the VGG-16 model, there is a small hole within the region corresponding to the ground truth. This may partially explain why non-targeted adversarial images with small distortion exist, but do not transfer well. This hole does not exist in the other models' decision planes. In this case, non-targeted adversarial images in this hole do not transfer.
Decision boundaries of the targeted ensemble-based approaches. In addition, we choose the targeted adversarial direction of the ensemble of all models except ResNet-101 and a random orthogonal direction, and we plot the decision boundaries on the plane spanned by these two direction vectors in Figure 5.

[Figure 5: The decision boundary separating the region within which all points are classified as the target label (encircled by each closed curve) from the rest, with one curve per model. The plane is spanned by the targeted adversarial direction and a random orthogonal direction. The targeted adversarial direction is computed as the difference between the original image in Figure 2 and the adversarial image generated by the optimization-based approach for an ensemble containing all models except ResNet-101. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel value.]

We observe that the regions of images predicted as the target label align well for the four models in the ensemble. However, the model not used to generate the adversarial image, i.e., ResNet-101, also has a non-empty region in which the prediction is successfully misled to the target label, although the area is much smaller. Meanwhile, the regions within the closed curves of the models almost share the same center.
7 REAL WORLD EXAMPLE: ADVERSARIAL EXAMPLES FOR CLARIFAI.COM
Clarifai.com is a commercial company providing state-of-the-art image classification services. We have no knowledge about the dataset and the types of models used behind Clarifai.com, except that we have black-box access to its services. The labels returned from Clarifai.com are also different from the categories in ILSVRC 2012.
We submit all 100 original images to Clarifai.com, and the returned labels are correct based on a subjective measure.
We also submit 400 adversarial images in total, of which 200 are targeted adversarial examples and the remaining 200 are non-targeted ones. Of the 200 targeted adversarial images, 100 are generated using the optimization-based approach based on VGG-16 (the same ones evaluated in Table 2), and the other 100 are generated using the optimization-based approach based on an ensemble of all models except ResNet-152 (the same ones evaluated in Table 3). The 200 non-targeted adversarial examples are generated similarly (the same ones evaluated in Tables 1 and 4).
For non-targeted adversarial examples, we observe that for both the ones generated using VGG-16 and those generated using the ensemble, most of them can transfer to Clarifai.com.
More importantly, a large proportion of our targeted adversarial examples are misclassified by Clarifai.com as well. We observe that 57% of the targeted adversarial examples generated using VGG-16, and 76% of the ones generated using the ensemble, can mislead Clarifai.com into predicting labels irrelevant to the ground truth.
Further, our experiment shows that for targeted adversarial examples, 18% of those generated using the ensemble model are predicted by Clarifai.com as labels close to the target label. The corresponding number for the targeted adversarial examples generated using VGG-16 is 2%. Considering that in the case of attacking Clarifai.com the labels given by the target model are different from those given by our models, it is fairly surprising to see that when using the ensemble-based approach, there is still a considerable proportion of our targeted adversarial examples that can mislead this black-box model into making predictions semantically similar to our target labels. All these numbers are computed based on a subjective measure, and we include some examples in Table 6. More examples can be found in our online technical report: Liu et al. (2016).

  true label -- [Clarifai.com results of original image] ; target label -- [Clarifai.com results of targeted adversarial example]
  viaduct -- [bridge, sight, arch, river, sky] ; window screen -- [window, wall, old, decoration, design]
  hip, rose hip, rosehip -- [fruit, fall, food, little, wildlife] ; stupa, tope -- [Buddha, gold, temple, celebration, artistic]
  dogsled, dog sled, dog sleigh -- [group together, four, sledge, sled, enjoyment] ; hip, rose hip, rosehip -- [cherry, branch, fruit, food, season]
  pug, pug-dog -- [pug, friendship, adorable, purebred, sit] ; sea lion -- [sea seal, ocean, head, sea, cute]
  Old English sheepdog, bobtail -- [poodle, retriever, loyalty, sit, two] ; abaya -- [veil, spirituality, religion, people, illustration]
  maillot, tank suit -- [beach, woman, adult, wear, portrait] ; amphibian, amphibious vehicle -- [transportation system, vehicle, man, print, retro]
  patas, hussar monkey, Erythrocebus patas -- [primate, monkey, safari, sit, looking] ; bee eater -- [ornithology, avian, beak, wing, feather]

Table 6: Original images and adversarial images evaluated over Clarifai.com (the original-image and adversarial-example columns are images and are omitted here). For the labels returned from Clarifai.com, we sort the labels first by rareness, i.e., how many times a label appears in the Clarifai.com results for all adversarial and original images, and second by confidence. Only the top 5 labels are provided.

8 CONCLUSION
In this work, we are the first to conduct an extensive study of the transferability of both non-targeted and targeted adversarial examples generated using different approaches over large models and a large scale dataset.
Our results confirm that the transferability of non-targeted adversarial examples is prominent even for large models and a large scale dataset. On the other hand, we find that it is hard to use existing approaches to generate targeted adversarial examples whose target labels can transfer. We develop novel ensemble-based approaches, and demonstrate that they can generate transferable targeted adversarial examples with a high success rate. Meanwhile, these new approaches exhibit better performance on generating non-targeted transferable adversarial examples than previous work. We also show that both non-targeted and targeted adversarial examples generated using our new approaches can successfully attack Clarifai.com, which is a black-box image classification system. Furthermore, we study some geometric properties to better understand transferable adversarial examples.
ACKNOWLEDGMENTS
This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. | HJeU-eaQx | Review for Liu et al | 5: Marginally below acceptance threshold | I reviewed the manuscript as of December 7th.
Summary:
The authors investigate the transferability of adversarial examples in deep networks. The authors confirm that transferability exists even in large models but demonstrate that it is difficult to manipulate the network to adversarially perturb an image into a specifically desired label. The authors additionally demonstrate real world attacks on a vision web service and explore the geometric properties of adversarial examples.
Major Comments:
1. The paper contains a long list of results, and it is not clear what single message it provides. As mentioned in the comments, this paper is effectively 15 pages, with 9 pages of results in the Appendix heavily discussed throughout the main body of the paper. Although there is no strict page limit for this conference, I do feel this pushes against the spirit of a conference publication. I do not rule out this paper for acceptance based on the length, but I do hold it as a negative because clarity of presentation is an important quality. If this paper is ultimately accepted, I would suggest that the authors make some effort to cut down the length even further beyond the 13 pages posted elsewhere. I have marked some sections to highlight areas that may be trimmed.
2. The section on geometric understanding is similar to the results of 'Adversarial Perturbations of Deep Neural Networks' by Warde-Farley and Goodfellow (2015). See Figure 1.2. I am not clear on what the authors show above-and-beyond these results. If there are additional findings, the authors should emphasize them.
3. The authors expand on observations by Goodfellow et al (2014) and Szegedy et al (2013) demonstrating that large-scale models are susceptible to adversarial perturbations (see also Kurakin et al (2016)). The authors additionally demonstrate that attempting to perform adversarial manipulation to convert an image to a particular, desired label is more difficult.
4. The authors demonstrate that they can target a real-world vision API. These results are compelling but it is not clear what these results demonstrate above-and-beyond Papernot et al (2016).
As far as I can understand, I think that the most interesting result from this paper not previously described in the literature is the note about the unique difficulty of performing adversarial manipulation to convert an image to a particular, desired label. The rest of the results appear to expand on results that have already appeared in the literature, and the authors need to better explain what makes these results unique above-and-beyond previous work.
Areas to Trim the Paper:
- Table 1 is not necessary. Just cite other results or write the Top-1 numbers in the text.
- Condense Section 2.2.1 and cite heavily.
- Figure 2 panels may be overlaid to highlight a comparison.
| 3: The reviewer is fairly confident that the evaluation is correct |
Sys6GJqxl | ICLR.cc/2017/conference | 2017 | Delving into Transferable Adversarial Examples and Black-box Attacks | ["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"] | An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system. | ["Computer vision", "Deep learning", "Applications"] | ABSTRACTAn intriguing property of deep neural networks is the existence of adversarial ex-amples, which can transfer among different architectures. These transferable ad-versarial examples may severely hinder deep neural network-based applications.Previous works mostly study the transferability using small scale datasets. In thiswork, we are the first to conduct an extensive study of the transferability overlarge models and a large scale dataset, and we are also the first to study the trans-ferability of targeted adversarial examples with their target labels. We study bothnon-targeted andtargeted adversarial examples, and show that while transferablenon-targeted adversarial examples are easy to find, targeted adversarial examplesgenerated using existing approaches almost never transfer with their target labels.Therefore, we propose novel ensemble-based approaches to generating transfer-able adversarial examples. Using such approaches, we observe a large proportionof targeted adversarial examples that are able to transfer with their target labels forthe first time. We also present some geometric studies to help understanding thetransferable adversarial examples. Finally, we show that the adversarial examplesgenerated using ensemble-based approaches can successfully attack Clarifai.com,which is a black-box image classification system.1 I NTRODUCTIONRecent research has demonstrated that for a deep architecture, it is easy to generate adversarialexamples, which are close to the original ones but are misclassified by the deep architecture (Szegedyet al. (2013); Goodfellow et al. (2014)). The existence of such adversarial examples may have severeconsequences, which hinders vision-understanding-based applications, such as autonomous driving.Most of these studies require explicit knowledge of the underlying models. 
It remains an openquestion how to efficiently find adversarial examples for a black-box model.Several works have demonstrated that some adversarial examples generated for one model mayalso be misclassified by another model. Such a property is referred to as transferability , whichcan be leveraged to perform black-box attacks. This property has been exploited by constructinga substitute of the black-box model, and generating adversarial instances against the substitute toattack the black-box system (Papernot et al. (2016a;b)). However, so far, transferability is mostlyexamined over small datasets, such as MNIST (LeCun et al. (1998)) and CIFAR-10 (Krizhevsky &Hinton (2009)). It has yet to be better understood transferability over large scale datasets, such asImageNet (Russakovsky et al. (2015)).In this work, we are the first to conduct an extensive study of the transferability of different adver-sarial instance generation strategies applied to different state-of-the-art models trained over a largescale dataset. In particular, we study two types of adversarial examples: (1) non-targeted adversar-ial examples, which can be misclassified by a network, regardless of what the misclassified labelsmay be; and (2) targeted adversarial examples, which can be classified by a network as a targetlabel. We examine several existing approaches searching for adversarial examples based on a singlemodel. While non-targeted adversarial examples are more likely to transfer, we observe few targetedadversarial examples that are able to transfer with their target labels.Work is done while visiting UC Berkeley.1Published as a conference paper at ICLR 2017We further propose a novel strategy to generate transferable adversarial images using an ensembleof multiple models. In our evaluation, we observe that this new strategy can generate non-targetedadversarial instances with better transferability than other methods examined in this work. Also, forthe first time, we observe a large proportion of targeted adversarial examples that are able to transferwith their target labels.We study geometric properties of the models in our evaluation. In particular, we show that thegradient directions of different models are orthogonal to each other. We also show that decisionboundaries of different models align well with each other, which partially illustrates why adversarialexamples can transfer.Last, we study whether generated adversarial images can attack Clarifai.com, a commercial com-pany providing state-of-the-art image classification services. We have no knowledge about the train-ing dataset and the types of models used by Clarifai.com; meanwhile, the label set of Clarifai.comis quite different from ImageNet’s. We show that even in this case, both non-targeted and targetedadversarial images transfer to Clarifai.com. This is the first work documenting the success of gen-erating both non-targeted and targeted adversarial examples for a black-box state-of-the-art onlineimage classification system, whose model and training dataset are unknown to the attacker.Contributions and organization. We summarize our main contributions as follows:For ImageNet models, we show that while existing approaches are effective to generatenon-targeted transferable adversarial examples (Section 3), only few targeted adversarialexamples generated by existing methods can transfer (Section 4).We propose novel ensemble-based approaches to generate adversarial examples (Sec-tion 5). 
Our approaches enable a large portion of targeted adversarial examples to transferamong multiple models for the first time.We are the first to present that targeted adversarial examples generated for models trainedon ImageNet can transfer to a black-box system, i.e., Clarifai.com, whose model, trainingdata, and label set is unknown to us (Section 7). In particular, Clarifai.com’s label set isvery different from ImageNet’s.We conduct the first analysis of geometric properties for large models trained over Ima-geNet (Section 6), and the results reveal several interesting findings, such as the gradientdirections of different models are orthogonal to each other.In the following, we first discuss related work, and then present the background knowledge andexperiment setup in Section 2. Then we present each of our experiments and conclusions in thecorresponding section as mentioned above.Related work. Transferability of adversarial examples was first examined by Szegedy et al.(2013), which studied the transferability (1) between different models trained over the same dataset;and (2) between the same or different model trained over disjoint subsets of a dataset; However,Szegedy et al. (2013) only studied MNIST.The study of transferability was followed by Goodfellow et al. (2014), which attributed the phe-nomenon of transferability to the reason that the adversarial perturbation is highly aligned with theweight vector of the model. Again, this hypothesis was tested using MNIST and CIFAR-10 datasets.We show that this is not the case for models trained over ImageNet.Papernot et al. (2016a;b) examined constructing a substitute model to attack a black-box targetmodel. To train the substitute model, they developed a technique that synthesizes a training set andannotates it by querying the target model for labels. They demonstrate that using this approach,black-box attacks are feasible towards machine learning services hosted by Amazon, Google, andMetaMind. Further, Papernot et al. (2016a) studied the transferability between deep neural networksand other models such as decision tree, kNN, etc.Our work differs from Papernot et al. (2016a;b) in three aspects. First, in these works, only the modeland the training process are a black box, but the training set and the test set are controlled by theattacker; in contrast, we attack Clarifai.com, whose model, training data, training process, and eventhe test label set are unknown to the attacker. Second, the datasets studied in these works are small2Published as a conference paper at ICLR 2017scale, i.e., MNIST and GTSRB (Stallkamp et al. (2012)); in our work, we study the transferabilityover larger models and a larger dataset, i.e., ImageNet. Third, to attack black-box machine learningsystems, we do not query the systems for constructing the substitute model ourselves.In a concurrent and independent work, Moosavi-Dezfooli et al. (2016) showed the existence of auniversal perturbation for each model, which can transfer across different images. They also showthat the adversarial images generated using these universal perturbations can transfer across differentmodels on ImageNet. However, they only examine the non-targeted transferability, while our workstudies both non-targeted and targeted transferability over ImageNet.2 A DVERSARIAL DEEPLEARNING AND TRANSFERABILITY2.1 T HE ADVERSARIAL DEEP LEARNING PROBLEMWe assume a classifier f(x)outputs a category (or a label) as the prediction. 
Given an originalimagex, with ground truth label y, the adversarial deep learning problem is to seek for adversarialexamples for the classifier f(x). Specifically, we consider two classes of adversarial examples.Anon-targeted adversarial example x?is an instance that is close to x, in which case x?shouldhave the same ground truth as x, whilef(x?)6=y. For the problem to be non-trivial, we assumef(x) =ywithout loss of generality. A targeted adversarial example x?is close toxand satisfiesf(x?) =y?, wherey?is a target label specified by the adversary, and y?6=y.2.2 A PPROACHES FOR GENERATING ADVERSARIAL EXAMPLESIn this work, we consider three classes of approaches for generating adversarial examples:optimization-based approaches, fast gradient approaches, and fast gradient sign approaches. Eachclass has non-targeted and targeted versions respectively.2.2.1 A PPROACHES FOR GENERATING NON -TARGETED ADVERSARIAL EXAMPLESFormally, given an image xwith ground truth y=f(x), searching for a non-targeted adversarialexample can be modeled as searching for an instance x?to satisfy the following constraints:f(x?)6=y (1)d(x;x?)B (2)whered(;)is a metric to quantify the distance between an original image and its adversarial coun-terpart, andB, called distortion , is an upper bound placed on this distance. Without loss of gener-ality, we consider model fis composed of a network J(x), which outputs the probability for eachcategory, so that foutputs the category with the highest probability.Optimization-based approach. One approach is to approximate the solution to the followingoptimization problem:argminx?d(x;x?)`(1y;J(x?)) (3)where 1yis the one-hot encoding of the ground truth label y,`is a loss function to measure thedistance between the prediction and the ground truth, and is a constant to balance constraints (2)and (1), which is empirically determined. Here, loss function `is used to approximate constraint (1),and its choice can affect the effectiveness of searching for an adversarial example. In this work, wechoose`(u;v) = log (1uv), which is shown to be effective by Carlini & Wagner (2016).Fast gradient sign (FGS). Goodfellow et al. (2014) proposed the fast gradient sign (FGS) methodso that the gradient needs be computed only once to generate an adversarial example. FGS can beused to generate adversarial images to meet the L1norm bound. Formally, non-targeted adversarialexamples are constructed asx? clip(x+Bsgn(rx`(1y;J(x))))Here, clip(x)is used to clip each dimension of xto the range of pixel values, i.e., [0;255] in thiswork. We make a slight variation to choose `(u;v) = log (1uv), which is the same as used inthe optimization-based approach.3Published as a conference paper at ICLR 2017Fast gradient (FG). The fast gradient approach (FG) is similar to FGS, but instead of movingalong the gradient sign direction, FG moves along the gradient direction. In particular, we havex? clip(x+Brx`(1y;J(x))jjrx`(1y;J(x))jj))Here, we assume the distance metric in constraint (2), d(x;x?) =jjxx?jjis a norm of xx?.The term sgn(rx`)in FGS is replaced byrx`jjrx`jjto meet this distance constraint.We call both FGS and FG fast gradient-based approaches .2.2.2 A PPROACHES FOR GENERATING TARGETED ADVERSARIAL EXAMPLESA targeted adversarial image x?is similar to a non-targeted one, but constraint (1) is replaced byf(x?) =y?(4)wherey?is the target label given by the adversary. For the optimization-based approach, we ap-proximate the solution by solving the following dual objective:argminx?d(x;x?) 
+`0(1y?;J(x?)) (5)In this work, we choose the standard cross entropy loss `0(u;v) =Piuilogvi.For FGS and FG, we construct adversarial examples as follows:x? clip(xBsgn(rx`0(1y?;J(x)))) (FGS)x? clip(xBrx`0(1y?;J(x))jjrx`0(1y?;J(x))jj) (FG)where`0is the same as the one used for the optimization-based approach.2.3 E VALUATION METHODOLOGYFor the rest of the paper, we focus on examining the transferability among state-of-the-art modelstrained over ImageNet (Russakovsky et al. (2015)). In this section, we detail the models to beexamined, the dataset to be evaluated, and the measurements to be used.Models. We examine five networks, ResNet-50, ResNet-101, ResNet-152 (He et al. (2015))1,GoogLeNet (Szegedy et al. (2014))2, and VGG-16 (Simonyan & Zisserman (2014))3. We retrievethe pre-trained models for each network online. The performance of these models on the ILSVRC2012 (Russakovsky et al. (2015)) validation set can be found in our online technical report: Liu et al.(2016). We choose these models to study the transferability between homogeneous architectures(i.e., ResNet models) and heterogeneous architectures.Dataset. It is less meaningful to examine the transferability of an adversarial image between twomodels which cannot classify the original image correctly. Therefore, from the ILSVRC 2012 val-idation set, we randomly choose 100 images, which can be classified correctly by all five modelsin our examination. These 100 images form our test set. To perform targeted attacks, we manuallychoose a target label for each image, so that its semantics is far from the ground truth. The imagesand target labels in our evaluation can be found on website4.Measuring transferability. Given two models, we measure the non-targeted transferability bycomputing the percentage of the adversarial examples generated for one model that can be classifiedcorrectly for the other. We refer to this percentage as accuracy . A lower accuracy means betternon-targeted transferability. We measure the targeted transferability by computing the percentage ofthe adversarial examples generated for one model that are classified as the target label by the othermodel. We refer to this percentage as matching rate . A higher matching rate means better targetedtransferability. For clarity, the reported results are only based on top-1 accuracy. Top-5 accuracy’scounterparts can be found in our online technical report: Liu et al. (2016).1https://github.com/KaimingHe/deep-residual-networks2https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet3https://gist.github.com/ksimonyan/211839e770f7b538e2d84https://github.com/sunblaze-ucb/transferability-advdnn-pub4Published as a conference paper at ICLR 2017Distortion. Besides transferability, another important factor is the distortion between adversarialimages and the original ones. We measure the distortion by root mean square deviation , i.e., RMSD,which is computed as d(x?;x) =pPi(x?ixi)2=N, wherex?andxare the vector representationsof an adversarial image and the original one respectively, Nis the dimensionality of xandx?, andxidenotes the pixel value of the i-th dimension of x, within range [0;255], and similar for x?i.3 N ON-TARGETED ADVERSARIAL EXAMPLESIn this section, we examine different approaches for generating non-targeted adversarial images.3.1 O PTIMIZATION -BASED APPROACHTo apply the optimization-based approach for a single model, we initialize x?to bexand use AdamOptimizer (Kingma & Ba (2014)) to optimize Objective (3) . 
3 NON-TARGETED ADVERSARIAL EXAMPLES

In this section, we examine different approaches for generating non-targeted adversarial images.

3.1 OPTIMIZATION-BASED APPROACH

To apply the optimization-based approach for a single model, we initialize x^* to be x and use the Adam optimizer (Kingma & Ba (2014)) to optimize Objective (3). We find that we can tune the RMSD by adjusting the learning rate of Adam and λ. For each model, we can use a small learning rate to generate adversarial images with small RMSD, i.e., < 2, for any λ. In fact, we find that when initializing x^* with x, the Adam optimizer will search for an adversarial example around x even when we set λ to 0, i.e., without restricting the distance between x^* and x. Therefore, we set λ to 0 for all experiments using optimization-based approaches throughout the paper. Although these adversarial examples with small distortions can successfully fool the target model, they cannot transfer well to other models (details can be found in our online technical report: Liu et al. (2016)).

We increase the learning rate to allow the optimization algorithm to search for adversarial images with larger distortion. In particular, we set the learning rate to 4 and run the Adam optimizer for 100 iterations to generate the adversarial images; we observe that the loss converges within 100 iterations. An alternative optimization-based approach leading to similar results can be found in our online technical report: Liu et al. (2016).

Non-targeted adversarial examples transfer. We generate non-targeted adversarial examples on one network but evaluate them on another; Table 1, Panel A presents the results. From the table, we can observe that:

- The diagonal contains all 0 values, i.e., all adversarial images generated for one model mislead that same model.
- A large proportion of the non-targeted adversarial images generated for one model using the optimization-based approach transfer to another.
- Although the three ResNet models share similar architectures that differ only in their hyperparameters, adversarial examples generated against one ResNet model do not necessarily transfer to another ResNet model better than to the non-ResNet models. For example, the adversarial examples generated for VGG-16 have lower accuracy on ResNet-50 than those generated for ResNet-152 or ResNet-101.

3.2 FAST GRADIENT-BASED APPROACHES

We then examine the effectiveness of fast gradient-based approaches. A good property of fast gradient-based approaches is that all generated adversarial examples lie in a 1-D subspace. Therefore, within this subspace, we can easily approximate the minimal distortion of adversarial examples that transfer between two models. In the following, we first control the RMSD to study the effectiveness of fast gradient-based approaches, and second, we study their minimal transferable distortions.

3.2.1 EFFECTIVENESS AND TRANSFERABILITY OF THE FAST GRADIENT-BASED APPROACHES

Since the distortion B and the RMSD of the generated adversarial images are highly correlated, we can choose this hyperparameter B to generate adversarial images with a given RMSD.

Panel A: Optimization-based approach
| | RMSD | ResNet-152 | ResNet-101 | ResNet-50 | VGG-16 | GoogLeNet |
| ResNet-152 | 22.83 | 0% | 13% | 18% | 19% | 11% |
| ResNet-101 | 23.81 | 19% | 0% | 21% | 21% | 12% |
| ResNet-50 | 22.86 | 23% | 20% | 0% | 21% | 18% |
| VGG-16 | 22.51 | 22% | 17% | 17% | 0% | 5% |
| GoogLeNet | 22.58 | 39% | 38% | 34% | 19% | 0% |

Panel B: Fast gradient approach
| | RMSD | ResNet-152 | ResNet-101 | ResNet-50 | VGG-16 | GoogLeNet |
| ResNet-152 | 23.45 | 4% | 13% | 13% | 20% | 12% |
| ResNet-101 | 23.49 | 19% | 4% | 11% | 23% | 13% |
| ResNet-50 | 23.49 | 25% | 19% | 5% | 25% | 14% |
| VGG-16 | 23.73 | 20% | 16% | 15% | 1% | 7% |
| GoogLeNet | 23.45 | 25% | 25% | 17% | 19% | 1% |

Table 1: Transferability of non-targeted adversarial images generated between pairs of models.
The first column indicates the average RMSD of all adversarial images generated for the model in the corresponding row. The cell (i, j) indicates the accuracy of the adversarial images generated for model i (row) evaluated over model j (column). Results for top-5 accuracy can be found in our online technical report: Liu et al. (2016).

In Table 1, Panel B, we generate adversarial images using FG such that the average RMSD is almost the same as that of the images generated using the optimization-based approach. We observe that the diagonal values in the table are all positive, meaning that FG cannot fully mislead the models. A potential reason is that FG can be viewed as approximating the optimization, but is tailored for speed over accuracy. On the other hand, the values of the non-diagonal cells in the table, which correspond to the accuracies of adversarial images generated for one model but evaluated on another, are comparable with or lower than their counterparts in the optimization-based approach. This shows that non-targeted adversarial examples generated by FG exhibit transferability as well.

We also evaluate FGS, but the transferability of the generated images is worse than that of the images generated using either FG or the optimization-based approach. The results can be found in our online technical report: Liu et al. (2016); they show that when the RMSD is around 23, the accuracies of the adversarial images generated by FGS are greater than their counterparts for FG. We hypothesize that this fact is the reason why the transferability of FGS is worse.

3.2.2 ADVERSARIAL IMAGES WITH MINIMAL TRANSFERABLE RMSD

For an image x and two models M1, M2, we can approximate the minimal distortion B along a direction δ such that x_B = x + Bδ generated for M1 is adversarial for both M1 and M2. Here δ is the direction, i.e., sgn(∇_x ℓ) for FGS and ∇_x ℓ / ||∇_x ℓ|| for FG.

We refer to the minimal transferable RMSD from M1 to M2 using FG (or FGS) as the RMSD of a transferable adversarial example x_B with the minimal transferable distortion B from M1 to M2 using FG (or FGS). The minimal transferable RMSD illustrates the tradeoff between distortion and transferability.

In the following, we approximate the minimal transferable RMSD through a linear search, sampling B every 0.1 step. We choose the linear-search method rather than a binary search to determine the minimal transferable RMSD because the adversarial images generated from an original image may come from multiple intervals. The experiment can be found in our online technical report: Liu et al. (2016).

Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56 respectively. Further,

[Figure 1, panels (a) Fast Gradient and (b) Fast Gradient Sign: The CDF of the minimal transferable RMSD from VGG-16 to ResNet-152 using FG (a) and FGS (b). The green line marks the median minimal transferable RMSD, while the red line marks the minimal transferable RMSD needed to reach 90%.]

| | RMSD | ResNet-152 | ResNet-101 | ResNet-50 | VGG-16 | GoogLeNet |
| ResNet-152 | 23.13 | 100% | 2% | 1% | 1% | 1% |
| ResNet-101 | 23.16 | 3% | 100% | 3% | 2% | 1% |
| ResNet-50 | 23.06 | 4% | 2% | 100% | 1% | 1% |
| VGG-16 | 23.59 | 2% | 1% | 2% | 100% | 1% |
| GoogLeNet | 22.87 | 1% | 1% | 0% | 1% | 100% |

Table 2: The matching rate of targeted adversarial images generated using the optimization-based approach.
The first column indicates the average RMSD of the generated adversarial images. Cell(i;j)indicates that matching rate of the targeted adversarial images generated for model i(row)when evaluated on model j(column). The top-5 results can be found in our online technical re-port: Liu et al. (2016).the FG method can generate transferable attacks with smaller RMSD than FGS. A potential rea-son is that while FGS minimizes the distortion’s L1norm, FG minimizes its L2norm, which isproportional to RMSD.3.3 C OMPARISON WITH RANDOM PERTURBATIONSWe also evaluate the test accuracy when we add a Gaussian noise to the 100 images in our testset. The concrete results can be found in our online technical report: Liu et al. (2016), where weshow the conclusion that the “transferability” of this approach is significantly worse than eitheroptimization-based approaches or fast gradient-based approaches.4 T ARGETED ADVERSARIAL EXAMPLESIn this section, we examine the transferability of targeted adversarial images. Table 2 presentsthe results for using optimization-based approach. We observe that (1) the prediction of targetedadversarial images can match the target labels when evaluated on the same model that is used togenerate the adversarial examples; but (2) the targeted adversarial images can be rarely predictedas the target labels by a different model. We call the latter that the target labels do not transfer .Even when we increase the distortion, we still do not observe improvements on making target labeltransfer. Some results can be found in our online technical report: Liu et al. (2016). Even if wecompute the matching rate based on top-5 accuracy, the highest matching rate is only 10%. Theresults can be found in our online technical report: Liu et al. (2016).We also examine the targeted adversarial images generated by fast gradient-based approaches, andwe observe that the target labels do not transfer as well. The results can be found in our onlinetechnical report: Liu et al. (2016). In fact, most targeted adversarial images cannot mislead themodel, for which the adversarial images are generated, to predict the target labels, regardless of howlarge the distortion is used. We attribute it to the fact that the fast gradient-based approaches only7Published as a conference paper at ICLR 2017search for attacks in a 1-D subspace. In this subspace, the total possible predictions may contain asmall subset of all labels, which usually does not contain the target label. In Section 6, we studydecision boundaries regarding this issue.We also evaluate the matching rate of images added with Gaussian noise, as described in Section 3.3.However, we observe that the matching rate of any of the 5 models is 0%. Therefore, we concludethat by adding Gaussian noise, the attacker cannot generate successful targeted adversarial examplesat all, let alone targeted transferability.5 E NSEMBLE -BASED APPROACHESWe hypothesize that if an adversarial image remains adversarial for multiple models, then it is morelikely to transfer to other models as well. We develop techniques to generate adversarial images formultiple models. The basic idea is to generate adversarial images for the ensemble of the models .Formally, given kwhite-box models with softmax outputs being J1;:::;Jk, an original image x,and its ground truth y,the ensemble-based approach solves the following optimization problem (fortargeted attack):argminx?log(kXi=1iJi(x?))1y?+d(x;x?) 
(6)wherey?is the target label specified by the adversary,PiJi(x?)is the ensemble model, and iare the ensemble weights,Pki=1i= 1. Note that (6) is the targeted objective. The non-targetedcounterpart can be derived similarly. In doing so, we hope the generated adversarial images remainadversarial for an additional black-box model Jk+1.We evaluate the effectiveness of the ensemble-based approach. For each of the five models, we treatit as the black-box model to attack, and generate adversarial images for the ensemble of the restfour, which is considered as white-box. We evaluate the generated adversarial images over all fivemodels. Throughout the rest of the paper, we refer to the approaches evaluated in Section 3 and 4 asthe approaches using a single model, and to the ensemble-based approaches discussed in this sectionas the approaches using an ensemble model.Optimization-based approach. We use Adam to optimize the objective (6) with equal ensembleweights across all models in the ensemble to generate targeted adversarial examples. In particular,we set the learning rate of Adam to be 8for each model. In each iteration, we compute the Adamupdate for each model, sum up the four updates, and add the aggregation onto the image. We run 100iterations of updates, and we observe that the loss converges after 100 iterations. By doing so, for thefirst time, we observe a large proportion of the targeted adversarial images whose target labels cantransfer. The results are presented in Table 3. We observe that not all targeted adversarial imagescan be misclassified to the target labels by the models used in the ensemble. This suggests thatwhile searching for an adversarial example for the ensemble model, there is no direct supervision tomislead any individual model in the ensemble to predict the target label. Further, from the diagonalnumbers of the table, we observe that the transferability to ResNet models is better than to VGG-16or GoogLeNet, when adversarial examples are generated against all models except the target model.We also evaluate non-targeted adversarial images generated by the ensemble-based approach. Weobserve that the generated adversarial images have almost perfect transferability. We use the sameprocedure as for the targeted version, except the objective to generate the adversarial images. Weevaluate the generated adversarial images over all models. The results are presented in Table 4.The generated adversarial images all have RMSDs around 17, which are lower than 22 to 23 ofthe optimization-based approach using a single model (See Table 1 for comparison). When theadversarial images are evaluated over models which are not used to generate the attack, the accuracyis no greater than 6%. For a reference, the corresponding accuracies for all approaches evaluated inSection 3 using one single model are at least 12%. Our experiments demonstrate that the ensemble-based approaches can generate almost perfectly transferable adversarial images.Fast gradient-based approach. The results for non-targeted fast gradient-based approaches ap-plied to the ensemble can be found in our online technical report: Liu et al. (2016). 
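For reference, the ensemble objective in Eq. (6) amounts to the following sketch (an illustration, not the authors' code). Each entry of `softmax_models` is assumed to be a callable returning the softmax output J_i(x^*) of one white-box model; the distance term is written explicitly, although λ = 0 mirrors the paper's optimization setting.

```python
import numpy as np

def ensemble_targeted_loss(x_adv, x, target_onehot, softmax_models, alphas, lam=0.0):
    """Eq. (6): negative log of the ensemble probability of the target label,
    plus an optional distance penalty lam * d(x, x_adv)."""
    probs = sum(a * J(x_adv) for a, J in zip(alphas, softmax_models))
    nll = -np.log(float(np.dot(probs, target_onehot)) + 1e-12)
    return nll + lam * np.linalg.norm(x_adv - x)
```

The non-targeted counterpart can be derived similarly, as noted above. Returning to the fast gradient-based ensemble results referenced just above: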
We observethat the diagonal values are not zero, which is the same as we observed in the results for FG and8Published as a conference paper at ICLR 2017RMSD ResNet-152 ResNet-101 ResNet-50 VGG-16 GoogLeNet-ResNet-152 30.68 38% 76% 70% 97% 76%-ResNet-101 30.76 75% 43% 69% 98% 73%-ResNet-50 30.26 84% 81% 46% 99% 77%-VGG-16 31.13 74% 78% 68% 24% 63%-GoogLeNet 29.70 90% 87% 83% 99% 11%Table 3: The matching rate of targeted adversarial images generated using the optimization-basedapproach. The first column indicates the average RMSD of the generated adversarial images. Cell(i;j)indicates that percentage of the targeted adversarial images generated for the ensemble of thefour models except model i(row) is predicted as the target label by model j(column). In each row,the minus sign “” indicates that the model of the row is not used when generating the attacks.Results of top-5 matching rate can be found in our online technical report: Liu et al. (2016).RMSD ResNet-152 ResNet-101 ResNet-50 VGG-16 GoogLeNet-ResNet-152 17.17 0% 0% 0% 0% 0%-ResNet-101 17.25 0% 1% 0% 0% 0%-ResNet-50 17.25 0% 0% 2% 0% 0%-VGG-16 17.80 0% 0% 0% 6% 0%-GoogLeNet 17.41 0% 0% 0% 0% 5%Table 4: Accuracy of non-targeted adversarial images generated using the optimization-based ap-proach. The first column indicates the average RMSD of the generated adversarial images. Cell(i;j)corresponds to the accuracy of the attack generated using four models except model i(row)when evaluated over model j(column). In each row, the minus sign “ ” indicates that the modelof the row is not used when generating the attacks. Results of top-5 accuracy can be found in ouronline technical report: Liu et al. (2016).FGS applied to a single model. We hypothesize a potential reason is that the gradient directions ofdifferent models in the ensemble are orthogonal to each other, as we will illustrate in Section 6. Inthis case, the gradient direction of the ensemble is almost orthogonal to the one of each model in theensemble. Therefore searching along this direction may require large distortion to reach adversarialexamples.For targeted adversarial examples generated using FG and FGS based on an ensemble model, theirtransferability is no better than the ones generated using a single model. The results can be found inour online technical report: Liu et al. (2016). We hypothesize the same reason to explain this: thereare only few possible target labels in total in the 1-D subspace.6 G EOMETRIC PROPERTIES OF DIFFERENT MODELSIn this section, we show some geometric properties of the models to try to better understand transfer-able adversarial examples. Prior works also try to understand the geometic properties of adversarialexamples theoretically (Fawzi et al. (2016)) or empirically (Goodfellow et al. (2014)). In this work,we examine large models trained over a large dataset with 1000 labels, whose geometric propertiesare never examined before. This allows us to make new observations to better understand the modelsand their adversarial examples.The gradient directions of different models in our evaluation are almost orthogonal to eachother. We study whether the adversarial directions of different models align with each other. Wecalculate cosine value of the angle between gradient directions of different models, and the resultscan be found in our online technical report: Liu et al. (2016). 
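This measurement is straightforward to reproduce; a minimal sketch, where `g1` and `g2` are the two models' loss gradients evaluated at the same image:

```python
import numpy as np

def gradient_cosine(g1, g2, eps=1e-12):
    """Cosine of the angle between two models' gradient directions."""
    g1, g2 = np.ravel(g1), np.ravel(g2)
    return float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2) + eps))
```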
We observe that all non-diagonal values are close to 0, which indicates that for most images, their gradient directions with respect to different models are orthogonal to each other.

Decision boundaries of the non-targeted approaches using a single model. We study the decision boundaries of different models to understand why adversarial examples transfer. We choose two normalized orthogonal directions δ1, δ2, one being the gradient direction of VGG-16 and the other being randomly chosen. Each point (u, v) in this 2-D plane corresponds to the image x + uδ1 + vδ2, where x is the pixel value vector of the original image. For each model, we plot the label of the image corresponding to each point, and obtain Figure 3 using the image in Figure 2.

[Figure 2: The example image used to study the decision boundary. Its ID in the ILSVRC 2012 validation set is 49443, and its ground truth label is "anemone fish."]

[Figure 3: Decision regions of different models, shown as a grid with one column per model (VGG-16, ResNet-50, ResNet-101, ResNet-152, GoogLeNet) and two rows (zoomed in to roughly ±20 pixel values per axis, zoomed out to roughly ±100); axis-tick residue omitted. We pick the same two directions for all plots: one is the gradient direction of VGG-16 (x-axis), and the other is a random orthogonal direction (y-axis). Each point in the spanned plane shows the predicted label of the image generated by adding noise to the original image (e.g., the origin corresponds to the predicted label of the original image). The units of both axes are 1 pixel value. Each sub-figure plots the regions on the spanned plane using the same color for the same label. The image is the one in Figure 2.]

We can observe that for all models, the region within which each model predicts the image correctly is limited to the central area. Also, along the gradient direction, the classifiers are soon misled. One interesting finding is that along this gradient direction, the first misclassified label for the three ResNet models (corresponding to the light green region) is the label "orange". A more detailed study can be found in our online technical report: Liu et al. (2016). When we look at the zoomed-out figures, however, the labels of images that are far away from the original one differ across models, even among the ResNet models.

On the other hand, in Table 5, we show the total number of regions in each plane. In fact, each plane contains at most 21 different regions. Compared with the 1,000 total categories in ImageNet, this is only 2.1% of all categories. That means that for the remaining 97.9% of the labels, no targeted adversarial example exists in the plane. Such a phenomenon partially explains why fast gradient-based approaches can hardly find targeted adversarial images.

Further, in Figure 4, we draw the decision boundaries of all models on the same plane as described above. We can observe the following.

| Model | VGG-16 | ResNet-50 | ResNet-101 | ResNet-152 | GoogLeNet |
| # of labels | 10 | 9 | 21 | 10 | 21 |

Table 5: The number of all possible predicted labels for each model in the plane described in Figure 3.

[Figure 4 plot; legend: VGG-16, ResNet-101, ResNet-152, ResNet-50, GoogLeNet; axis-tick residue omitted.]

Figure 4: The decision boundary separating the region within which all points are classified as the ground truth label (encircled by each closed curve) from the others.
The plane is the same one described in Figure 3. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel value.

[Figure 5 plot; legend: ResNet-101, VGG-16, ResNet-50, ResNet-152, GoogLeNet; axis-tick residue omitted.]

Figure 5: The decision boundary separating the region within which all points are classified as the target label (encircled by each closed curve) from the others. The plane is spanned by the targeted adversarial direction and a random orthogonal direction. The targeted adversarial direction is computed as the difference between the original image in Figure 2 and the adversarial image generated by the optimization-based approach for an ensemble. The ensemble contains all models except ResNet-101. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel value.

- The boundaries align with each other very well. This partially explains why non-targeted adversarial images can transfer among models.
- The boundary diameters along the gradient direction are smaller than those along the random direction. A potential reason is that moving a variable along its gradient direction changes the loss function (i.e., the probability of the ground truth label) significantly. Therefore, along the gradient direction it takes fewer steps to move out of the ground-truth region than along a random direction.
- An interesting finding is that even when we move left along the x-axis, which is equivalent to maximizing the prediction probability of the ground truth, we also reach the boundary much sooner than when moving along a random direction. We attribute this to the non-linearity of the loss function: when the distortion is larger, the gradient direction also changes dramatically, and moving along the original gradient direction no longer increases the probability of predicting the ground truth label (details can be found in our online technical report: Liu et al. (2016)).
- As for the VGG-16 model, there is a small hole within the region corresponding to the ground truth. This may partially explain why non-targeted adversarial images with small distortion exist but do not transfer well: this hole does not exist in the other models' decision planes, so non-targeted adversarial images in this hole do not transfer.

Decision boundaries of the targeted ensemble-based approaches. In addition, we choose the targeted adversarial direction of the ensemble of all models except ResNet-101 and a random orthogonal direction, and we plot the decision boundaries on the plane spanned by these two direction vectors in Figure 5. We observe that the regions of images that are predicted as the target label align well for the four models in the ensemble. However, the model not used to generate the adversarial image, i.e., ResNet-101, also has a non-empty region in which the prediction is successfully misled to the target label, although the area is much smaller. Meanwhile, the regions within the closed curves of the different models almost share the same center.

7 REAL WORLD EXAMPLE: ADVERSARIAL EXAMPLES FOR CLARIFAI.COM

Clarifai.com is a commercial company providing state-of-the-art image classification services. We have no knowledge about the dataset and the types of models used behind Clarifai.com, except that we have black-box access to the services. The labels returned by Clarifai.com are also different from the categories in ILSVRC 2012.
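Because the returned labels are free-form tags rather than ILSVRC categories, success on this black box has to be judged semantically. The following is a minimal sketch of such an evaluation loop; both `query_tags` (standing in for a Clarifai.com client call) and `matches` (a heuristic proxy for the paper's subjective closeness judgment) are hypothetical helpers, not real APIs from the paper.

```python
def blackbox_transfer_counts(adv_images, true_labels, target_labels,
                             query_tags, matches):
    """Count adversarial images that fool the black box (tags no longer match
    the ground truth) and those whose tags match the chosen target label."""
    misled = sum(1 for img, y in zip(adv_images, true_labels)
                 if not matches(query_tags(img), y))
    hit_target = sum(1 for img, t in zip(adv_images, target_labels)
                     if matches(query_tags(img), t))
    return misled, hit_target
```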
We submit all 100 original images to Clarifai.com, and the returned labels are correct based on a subjective measure.

We also submit 400 adversarial images in total, 200 of which are targeted adversarial examples and the remaining 200 non-targeted. Of the 200 targeted adversarial images, 100 are generated using the optimization-based approach based on VGG-16 (the same ones evaluated in Table 2), and the other 100 are generated using the optimization-based approach based on an ensemble of all models except ResNet-152 (the same ones evaluated in Table 3). The 200 non-targeted adversarial examples are generated similarly (the same ones evaluated in Tables 1 and 4).

For non-targeted adversarial examples, we observe that for both the ones generated using VGG-16 and those generated using the ensemble, most of them transfer to Clarifai.com.

More importantly, a large proportion of our targeted adversarial examples are misclassified by Clarifai.com as well. We observe that 57% of the targeted adversarial examples generated using VGG-16, and 76% of the ones generated using the ensemble, mislead Clarifai.com into predicting labels irrelevant to the ground truth.

Further, our experiment shows that for targeted adversarial examples, 18% of those generated using the ensemble model are predicted as labels close to the target label by Clarifai.com. The corresponding number for the targeted adversarial examples generated using VGG-16 is 2%. Considering that in the case of attacking Clarifai.com, the labels given by the target model are different from those given by our models, it is fairly surprising that, when using the ensemble-based approach, there is still a considerable proportion of our targeted adversarial examples that can mislead this black-box model into making predictions semantically similar to our target labels. All these numbers are computed based on a subjective measure, and we include some examples in Table 6. More examples can be found in our online technical report: Liu et al. (2016).

| true label | Clarifai.com results of original image | target label | Clarifai.com results of targeted adversarial example |
| viaduct | bridge, sight, arch, river, sky | window screen | window, wall, old, decoration, design |
| hip, rose hip, rosehip | fruit, fall, food, little, wildlife | stupa, tope | Buddha, gold, temple, celebration, artistic |
| dogsled, dog sled, dog sleigh | group together, four, sledge, sled, enjoyment | hip, rose hip, rosehip | cherry, branch, fruit, food, season |
| pug, pug-dog | pug, friendship, adorable, purebred, sit | sea lion | sea seal, ocean, head, sea, cute |
| Old English sheepdog, bobtail | poodle, retriever, loyalty, sit, two | abaya | veil, spirituality, religion, people, illustration |
| maillot, tank suit | beach, woman, adult, wear, portrait | amphibian, amphibious vehicle | transportation system, vehicle, man, print, retro |
| patas, hussar monkey, Erythrocebus patas | primate, monkey, safari, sit, looking | bee eater | ornithology, avian, beak, wing, feather |

Table 6: Original images and adversarial images evaluated over Clarifai.com (the original-image and adversarial-example image columns are omitted in this text version). For the labels returned by Clarifai.com, we sort the labels first by rareness, i.e., how many times a label appears in the Clarifai.com results for all adversarial images and original images, and second by confidence. Only the top 5 labels are provided.

8 CONCLUSION

In this work, we are the first to conduct an extensive study of the transferability of both non-targeted and targeted adversarial examples generated using different approaches over large models and a large-scale dataset.
Our results confirm that the transferability for non-targeted adversarial exam-ples are prominent even for large models and a large scale dataset. On the other hand, we find thatit is hard to use existing approaches to generate targeted adversarial examples whose target labelscan transfer. We develop novel ensemble-based approaches, and demonstrate that they can gen-erate transferable targeted adversarial examples with a high success rate. Meanwhile, these newapproaches exhibit better performance on generating non-targeted transferable adversarial examplesthan previous work. We also show that both non-targeted and targeted adversarial examples gen-erated using our new approaches can successfully attack Clarifai.com, which is a black-box imageclassification system. Furthermore, we study some geometric properties to better understand thetransferable adversarial examples.ACKNOWLEDGMENTSThis material is in part based upon work supported by the National Science Foundation under GrantNo. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in thismaterial are those of the author(s) and do not necessarily reflect the views of the National ScienceFoundation. | ryLKyXLVg | good in-depth exploration but strongly recommend a rewrite | 6: Marginally above acceptance threshold | The paper presents an interesting and very detailed study of targeted and non-targeted adversarial examples in CNNs.
I’m on the fence about this paper but am leaning towards acceptance. Such detailed empirical explorations are difficult and time-consuming to construct yet can serve as important stepping stones for future work. I see the length of the paper as a strength since it allows for a very in-depth look into the effectiveness and transferability of different kinds of adversarial examples.
There are, however, some concerns:
1) While the length of the paper is a strength in my mind, the key contributions should be made much clearer. As evidenced by my earlier comment, I got confused at some point between the ensemble and non-ensemble methods, and about what the Clarifai evaluation contributes and where my focus should be. I'd strongly suggest a radical revision that more clearly focuses the story:
- First, we demonstrate that non-targeted attacks are easy while targeted attacks are hard (evidenced by a key experiment comparing the two; we refer to appendix or later sections for the extensive exploration of e.g., current Section 3)
- Thus, we propose an ensemble method that is able to handle targeted attacks much better (evidenced by experiments focusing on the comparison between ensemble and non-ensemble method, both in a controlled setting and on Clarifai)
- Also, here are all the other details and explorations.
2) Instead of using ResNet-152, Res-Net-101 and ResNet-50 as three of the five models, it would've been better to use one ResNet architecture and the other two, say, AlexNet and Network-in-Network. This would make the ensemble results a lot more compelling. | 3: The reviewer is fairly confident that the evaluation is correct |
BkSmc8qll | ICLR.cc/2017/conference | 2017 | Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes | ["Caglar Gulcehre", "Sarath Chandar", "Kyunghyun Cho", "Yoshua Bengio"] | In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects for learning to read and write to a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU-controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We also provide further experimental results on sequential MNIST, associative recall and copy tasks. | ["Deep learning", "Natural language processing", "Reinforcement Learning"] | ABSTRACTIn this paper, we extend neural Turing machine (NTM) into a dynamic neural Turingmachine (D-NTM) by introducing a trainable memory addressing scheme. Thisaddressing scheme maintains for each memory cell two separate vectors, content andaddress vectors. This allows the D-NTM to learn a wide variety of location-basedaddressing strategies including both linear and nonlinear ones. We implementthe D-NTM with both continuous, differentiable and discrete, non-differentiableread/write mechanisms. We investigate the mechanisms and effects for learning toread and write to a memory through experiments on Facebook bAbI tasks using bothafeedforward andGRU -controller. The D-NTM is evaluated on a set of FacebookbAbI tasks and shown to outperform NTM and LSTM baselines. We also providefurther experimental results on sequential MNIST, associative recall and copy tasks.1 I NTRODUCTIONDesigning general-purpose learning algorithms is one of the long-standing goals of artificial intelligence.Despite the success of deep learning in this area (see, e.g., (Goodfellow et al., 2016)) there are still a setof complex tasks that are not well addressed by conventional neural networks. Those tasks often require aneural network to be equipped with an explicit, external memory in which a larger, potentially unbounded,set of facts need to be stored. They include, but are not limited to, episodic question-answering (Westonet al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015),dialogue (Serban et al., 2016; Vinyals & Le, 2015) and video caption generation (Yao et al., 2015).Recently two promising approaches based on neural networks to this type of tasks have been proposed.Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available foreach episode in an external memory (as continuous vectors) and use the attention-based mechanismto index them when returning an output. On the other hand, neural Turing machines (NTM, (Graveset al., 2014)) read each fact in an episode and decides whether to read, write the fact or do both tothe external, differentiable memory.A crucial difference between these two models is that the memory network does not have a mechanismto modify the content of the external memory, while the NTM does. 
In practice, this leads to easierlearning in the memory network, which in turn resulted in it being used more in real tasks (Bordes et al.,2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale,carefully-crafted tasks such as copy and associative recall. The NTM, however is more expressive,precisely because it can store and modify the internal state of the network as it processes an episode.The original NTM supports two modes of addressing (which can be used simultaneously.) They arecontent-based and location-based addressing. We notice that the location-based strategy is based onlinear addressing. The distance between each pair of consecutive memory cells is fixed to a constant.We address this limitation, in this paper, by introducing a learnable address vector for each memorycell of the NTM with least recently used memory addressing mechanism, and we call this variant adynamic neural Turing machine (D-NTM).We evaluate the proposed D-NTM on the full set of Facebook bAbI task (Weston et al., 2015b)using either continuous , differentiable attention or discrete , non-differentiable attention (Zaremba &Sutskever, 2015) as an addressing strategy. Our experiments reveal that it is possible to use the discrete,non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and GRUcontroller outperforms the one with the continuous attention. After we published our paper on arXiv, anew extension of NTM called DNC (Graves et al., 2016) has also provided results on bAbI task as well.1Under review as a conference paper at ICLR 2017We also provide results on sequential-MNIST and algorithmic tasks proposed by (Graves et al., 2014)in order to investigate the ability of our model when dealing with long-term dependencies.Our Contributions1. We propose a generalization of Neural Turing Machine called a dynamic neural Turing machine(D-NTM) which employs a learnable and location-based addressing.2.We demonstrate the application of neural Turing machines on a more natural and less toyish task:episodic question-answering besides the toy tasks. We provide detailed analysis of our model onthis task.3.We propose to use the discrete attention mechanism and empirically show that, it can outperformthe continuous attention based addressing for episodic QA task.4. We propose a curriculum strategy for our model with the feedforward controller and discreteattention that improves our results significantly.2 D YNAMIC NEURAL TURING MACHINEThe proposed dynamic neural Turing machine (D-NTM) extends the neural Turing machine (NTM,(Graves et al., 2014)) which has a modular design. The NTM consists of two main modules, a controllerand, a memory. The controller, which is often implemented as a recurrent neural network, issues acommand to the memory so as to read, write to and erase a subset of memory cells. Although thememory was originally envisioned as an integrated module, it is not necessary, and the memory maybe an external, black box (Zaremba & Sutskever, 2015).2.1 C ONTROLLERAt each time step t, the controller (1) receives an input value xt, (2) addresses and reads the memory andcreates the content vector t, (3) erases/writes a portion of the memory, (4) updates its own hidden stateht, and (5) outputs a value yt(if needed.) 
In this paper, we use both a gated recurrent unit (GRU, (Choet al., 2014)) and a feedforward-controller to implement the controller such that for a GRU controllerht=GRU(xt;ht1;t) (1)or for a feedforward-controllerht=(xt;t): (2)2.2 M EMORYWe use a rectangular matrix M2RN(dc+da)to denoteNmemory cells. Unlike the original NTM,we partition each memory cell vector into two parts:M= [A;C]:The first part A2RNdais a learnable address matrix, and the second C2RNdca content matrix.In other words, each memory cell miis nowmi= [ai;ci]:The address part aiis considered a model parameter that is updated during training. During inference,the address part is not overwritten by the controller and remains constant. On the other hand, thecontent part ciis both read and written by the controller both during training and inference. At thebeginning of each episode, the content part of the memory is refreshed to be an all-zero matrix, C0=0.This introduction of the learnable address portion for each memory cell allows the model to learnsophisticated location-based addressing strategies. A similar addressing mechanism is also exploredin (Reed & de Freitas, 2015) in the context of learning program traces.2.3 M EMORY ADDRESSINGMemory addressing in the D-NTM is equivalent to computing an N-dimensional address vector. The D-NTM computes three such vectors for respectively reading wt2RN, erasing et2Rdcand writing ut2RN. Specifically for writing, the controller further computes a candidate memory content vector ct22Under review as a conference paper at ICLR 2017Address 1ContentAddress 2ContentAddress 3ContentAddress 4ContentAddress 5ContentAddress 6ContentAddress 7ContentControllerMemoryContentReaderWriterStoryFact t-1Fact tQuestionAnswer.........Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with therecurrent-controller. The controller receives the fact as a continuous vector encoded by a recurrentneural network, computes the read and write weights for addressing the memory. If the D-NTMautomatically detects that a query has been received, it returns an answer and terminates.Rdcbased on its current hidden state of the controller ht2Rdhand the input of the controller scaled witha scalar gate twhich is a function of the hidden state and the input of the controller as well, see Eqn 4.t=f(ht;xt); (3)ct=ReLU (Wmht+tWxxt+bm): (4)Reading With the read vector wt, the content vector read from the memory t2Rda+dcis retrievedbyt= (wt)>Mt1; (5)where wtis a row vector.Erasing and Writing Given the erase, write and candidate memory content vectors ( et,utj, andctrespectively) generated by a simple MLP conditioned on the hidden state of the controller ht, thememory matrix is updated by,Ct[j] = (1etutj)Ct1[j] +utjct: (6)where the subscript jinCt[j]denotes thej-th row of the content part Ctof the memory matrix Mt.No Operation (NOP) As found in (Joulin & Mikolov, 2015), an additional NOP action might bebeneficial for the controller notto access the memory once in a while. We model this situation bydesignating one memory cell as a NOP cell. Reading or writing from this memory cell is ignored.2.4 L EARNINGOnce the proposed D-NTM is executed, it returns the output distribution p(yjx1;:::;xT). As a result,we define a cost function as the negative log-likelihood:C() =1NNXn=1logp(ynjxn1;:::;xnT); (7)whereis a set of all the parameters. 
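As a compact reference for the memory operations just defined, here is a minimal NumPy sketch of Eqs. (5) and (6); it is an illustration under the shapes defined above, not the authors' code. `M` is the N×(d_a + d_c) memory, `w` and `u` are read and write weights over the N rows, `e` is an erase vector in [0, 1]^{d_c}, and `c` is the candidate content.

```python
import numpy as np

def read_memory(w, M):
    """Eq. (5): the read content is an attention-weighted sum of memory rows."""
    return w @ M                                   # (N,) @ (N, d) -> (d,)

def write_memory(C_prev, u, e, c):
    """Eq. (6): per-row erase-then-add update of the content part C."""
    u = u[:, None]                                 # (N, 1) write weights
    return (1.0 - u * e[None, :]) * C_prev + u * c[None, :]
```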
As the proposed D-NTM, just like the original NTM, is fullyend-to-end differentiable, we can compute the gradient of this cost function by using backpropagationand learn the parameters of the model with a gradient-based optimization algorithm, such as stochasticgradient descent, to train it end-to-end.3Under review as a conference paper at ICLR 20173 A DDRESSING MECHANISM3.1 A DDRESS VECTORSEach of the address vectors (both read and write) is computed in the same way. The way they arecomputed are very similar to the content based addressing in (Graves et al., 2014). First, the controllercomputes a key vector:kt=W>kht+bk;where Wk2RN(da+dc)andbk2Rda+dcif the read head is being computed, otherwiseWk2RNdcandbk2Rdcif the write head weights are being computed. They can be the parametersfor a specific head (either read or write.) Also, the sharpening factor t2R1is computed as:softplus (x) =log(exp(x) + 1) (8)t=softplus (u>ht+b) + 1: (9)uandbare the parameters of the sharpening t.The address vector is then computed by,zti=tSkt;mti(10)wti=exp(zti)Pjexp(ztj); (11)where the similarity function S2R0is defined asS(x;y) =xy(jjxjjjjyjj+):3.2 M ULTI -STEP ADDRESSINGAt each time-step, controller may require more than one-step for accessing to the memory. The originalNTM addresses this by implementing multiple sets of read, erase and write heads. In this paper, weexplore an option of allowing each head to operate more than once at each time step, similar to themulti-hop mechanism from the end-to-end memory network (Sukhbaatar et al., 2015).3.3 D YNAMIC LEAST RECENTLY USED ADDRESSINGWe introduce a memory addressing schema that can learn to put more emphasis on the least recentlyused (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we find it easierto learn the write operations with the use of LRU addressing.To learn a LRU based addressing, first we compute the exponentially moving averages of the logits ( zt)asvt,vt= 0:1vt1+ 0:9zt. We rescale the accumulated vtwitht, such that the controller adjuststhe influence of how much previously written memory locations should effect the attention weightsof a particular time-step. Next, we subtract vtfromztin order to reduce the weights of previouslyread or written memory locations. tis a shallow MLP with a scalar output and it is conditioned onthe hidden state of the controller. tis parametrized with the parameters uandb,t=sigmoid (u>ht+b); (12)wt=softmax (zttvt1): (13)This addressing method increases the weights of the least recently used rows of the memory. Themagnitude of the influence of the least-recently used memory locations is being learned and adjustedwitht. Our LRU addressing is dynamic due to the model’s ability to switch between pure content-basedaddressing and LRU. During the training, we do not backpropagate through vt. Due to the dynamicnature of this addressing mechanism, it can be used for both read and write operations. If needed,the model will automatically learn to disable LRU while reading from the memory.4 G ENERATING DISCRETE ADDRESS VECTORSIn this section, we describe the discrete attention based addressing strategy.4Under review as a conference paper at ICLR 2017Discrete Addressing Let us use wto denote an address vector (either read, write or erase) at timet. By definition in Eq. (10), every element in this address vector is positive and sums up to one. Inother words, we can treat this vector as the probabilities of a categorical distribution C(w)withdim(w)choices:p(j) =wj;wherewjis thej-th element of w. 
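Before turning to sampling, here is a minimal sketch of how the continuous address vector w of Section 3 (Eqs. 8-13) is computed; `k`, `beta`, and `gamma` are assumed to have already been produced by the controller, and `v_prev` is the running average of past logits.

```python
import numpy as np

def address(k, beta, M, gamma, v_prev, eps=1e-8):
    """Sharpened content-based weights (Eqs. 8-11) with dynamic LRU (Eqs. 12-13)."""
    sim = (M @ k) / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + eps)
    z = beta * sim                        # Eq. (10): sharpened similarities
    logits = z - gamma * v_prev           # Eq. (13): discount recently used rows
    w = np.exp(logits - logits.max())     # Eq. (11): softmax over memory rows
    w /= w.sum()
    v = 0.1 * v_prev + 0.9 * z            # moving average of logits (not backpropagated)
    return w, v
```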
We can readily sample from this categorical distribution and forman one-hot vector ~wsuch that~wk=I(k=j);wherejC(w), andIis an indicator function.Training We use this sampling-based strategy for all the heads during training. This clearly makesthe use of backpropagation infeasible to compute the gradient, as the sampling procedure is notdifferentiable. Thus, we use REINFORCE (Williams, 1992) together with the three variance reductiontechniques–global baseline, input-dependent baseline and variance normalization– suggested in (Mnih& Gregor, 2014).Let us define R(x) = logp(yjx1;:::;xT)as a reward. We first center and re-scale the reward by~R(x) =R(x)bp2+;wherebandis running average and standard deviation of R. We can further center it for each inputxseparately, i.e.,~R(x) ~R(x)b(x);whereb(x)is computed by a baseline network which takes as input xand predicts its estimated reward.The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward~R(x)and the predicted reward b(x). We use the Huber loss, which is defined byH(x) =x2forjxj;(2jxj);otherwise,due to its robustness. As a further measure to reduce the variance, we regularize the negative entropyof all those category distributions to facilitate a better exploration during training (Xu et al., 2015).Then, the cost function for each training example is approximated asCn() =logp(yjx1:T;~w1:J;~u1:J;~e1:J)JXj=1~R(xn)(logp( ~wjjx1:T) + logp(~ujjx1:T) + logp(~ejjx1:T))HJXj=1(H(wjjx1:T) +H(ujjx1:T) +H(ejjx1:T)):whereJis the number of addressing steps, His the entropy regularization coefficient, and Hdenotesthe entropy.Inference Once training is over, we switch to a deterministic strategy. We simply choose an elementofwwith the largest value to be the index of the target memory cell, such that~wk=I(k=argmax (w)):Curriculum Learning for the Discrete Attention Training discrete attention with feed-forwardcontroller and REINFORCE is challenging. We propose to use a curriculum strategy for trainingwith the discrete attention in order to tackle this problem. For each minibatch, we sample from abinomial distribution with the probability pt,tBin(pt). The model will either use the discreteor the continuous-attention based on the t. We start the training procedure with p0= 1and duringthe trainingptis annealed to 0by settingpt=p0p1+t.We can rewrite the weights wtas in Equation 14, where it is expressed as the combination of continuousattention weights wtand discrete attention weights ~wtwithtbeing a binary variable that choosesto use one of them,wt twt+ (1t)~wt: (14)5Under review as a conference paper at ICLR 2017By using this curriculum learning strategy, at the beginning of the training, the model learns to usethe memory mainly with the continuous attention. As we anneal the pt, the model will rely more onthe discrete attention.5 R EGULARIZING DYNAMIC NEURAL TURING MACHINESWhen the controller of D-NTM is a powerful recurrent neural network, it is important to regularizetraining of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memoryand works as a simple recurrent neural network.Read-Write Consistency Regularizer One such suboptimal solution we have observed in ourpreliminary experiments with the proposed D-NTM is that the D-NTM uses the address part Aofthe memory matrix simply as an additional weight matrix, rather than as a means to accessing thecontent part C. 
We found that this pathological case can be effectively avoided by encouraging the readhead to point to a memory cell which has also been pointed by the write head. This can be implementedas the following regularization term:Rrw(w;u) =TXt0=1jj1(1t0t0Xt=1ut)>wt0jj22 (15)In the equations above, utis the write and wtis the read weights.Next Input Prediction as Regularization Temporal structure is a strong signal that should beexploited by the controller based on a recurrent neural network. We exploit this structure by lettingthe controller predict the input in the future. We maximize the predictability of the next input by thecontroller during training. This is equivalent to minimizing the following regularizer:Rpred(W) =logp(ft+1jft;wt;ut;Mt;W))whereftis the current input and ft+1is the input at next timestep. We found this regularizer to beeffective in our preliminary experiments and use it for bAbI tasks.6 R ELATED WORKA recurrent neural network (RNN), which is used as a controller in the proposed D-NTM, has animplicit memory in the form of recurring hidden states. Even with this implicit memory, a vanillaRNN is however known to have difficulties in storing information for long time-spans (Bengio et al.,1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter & Schmidhuber, 1997)) andgated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However allthese models based solely on RNNs have been found to be limited when they are used to solve, e.g.,algorithmic tasks and episodic question-answering.In addition to the finite random access memory of the neural Turing machine, based on which theD-NTM is designed, other data structures have been proposed as external memory for neural networks.In (Sun et al., 1997; Grefenstette et al., 2015; Joulin & Mikolov, 2015), a continuous, differentiablestack was proposed. In (Zaremba et al., 2015; Zaremba & Sutskever, 2015), grid and tape storagesare used. These approaches differ from the NTM in that their memory is unbounded and can growindefinitely. On the other hand, they are often not randomly accessible.Memory networks (Weston et al., 2015b) form another family of neural networks with external memory.In this class of neural networks, information is stored explicitly as it is (in the form of its continuousrepresentation) in the memory, without being erased or modified during an episode. Memory networksand their variants have been applied to various tasks successfully (Sukhbaatar et al., 2015; Bordes et al.,2015; Dodge et al., 2015; Xiong et al., 2016). Miller et al. (2016) have also independently proposedthe idea of having separate key and value vectors for memory networks.Another related family of models is the attention-based neural networks. Neural networks withcontinuous or discrete attention over an input have shown promising results on a variety ofchallenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speechrecognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) andimage caption generation (Xu et al., 2015).6Under review as a conference paper at ICLR 2017The latter two, the memory network and attention-based networks, are however clearly distinguishablefrom the D-NTM by the fact that they do not modify the content of the memory.7 E XPERIMENTSWe provide experimental results to demonstrate the abilities of our model, first on Facebook bAbItask (Weston et al., 2015a). We give detailed analysis and experimental results on this task. 
We alsocompare different variations of NTM on bAbI tasks. We have performed experiments on sequentialpermuted MNIST (Le et al., 2015) and on toy tasks to compare other published models on these taskswith a recurrent controller. The details of our experiments are provided in the supplementary material.7.1 E PISODIC QUESTION -ANSWERING :BABI TASKSIn this section, we evaluate the proposed D-NTM on the recently proposed episodic question-answeringtask called Facebook bAbI. We use the dataset with 10k training examples per sub-task provided byFacebook.1For each episode, the D-NTM reads a sequence of factual sentences followed by a question,all of which are given as natural language sentences. The D-NTM is expected to store and retrieverelevant information in the memory in order to answer the question based on the presented facts. Exactimplementation details and hyper-parameter settings are provided in the appendix.7.1.1 G OALSThe goal of this experiment is three-fold. First, we present for the first time the performance of amemory-based network that can both read and write dynamically on the Facebook bAbI tasks2. We aimto understand whether a model that has to learn to write an incoming fact to the memory, rather thanstoring it as it is, is able to work well, and to do so, we compare both the original NTM and proposedD-NTM against an LSTM-RNN.Second, we investigate the effect of having to learn how to write. The fact that the NTM needs tolearn to write likely has adverse effect on the overall performance, when compared to, for instance,end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and dynamic memory network(DMN+, (Xiong et al., 2016)) both of which simply store the incoming facts as they are. We quantifythis effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme.We further explore the effect of using a feedforward controller instead of the GRU controller. In additionto the explicit memory, the GRU controller can use its own internal hidden state as the memory. Onthe other hand, the feedforward controller must solely rely on the explicit memory, as it is the onlymemory available.7.1.2 R ESULTS AND ANALYSISIn Table 1, we first observe that the NTMs are indeed capable of solving this type of episodicquestion-answering better than the vanilla LSTM-RNN. Although the availability of explicit memoryin the NTM has already suggested this result, we note that this is the first time neural Turing machineshave been used in this specific task.All the variants of NTM with the GRU controller outperform the vanilla LSTM-RNN. However, not allof them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRUcontroller outperforms the original NTM with the GRU controller (NTM, CBA only NTM vs. continuousD-NTM, Discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allowsthe controller to access the memory slots by location in a potentially nonlinear way. We expect it to helpwith tasks that have non-trivial access patterns, and as anticipated, we see a large gain with the D-NTMover the original NTM in the tasks of, for instance, 12 - Conjunction and 17 - Positional Reasoning.Among the recurrent variants of the proposed D-NTM, we notice significant improvements by usingdiscrete addressing over using continuous addressing. 
We conjecture that this is due to certain typesof tasks that require precise/sharp retrieval of a stored fact, in which case continuous addressingis in disadvantage over discrete addressing. This is evident from the observation that the D-NTMwith discrete addressing significantly outperforms that with continuous addressing in the tasks of 8 -1https://research.facebook.com/researchers/15439345391893482Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results for bAbItasks were already available in arxiv by that time.7Under review as a conference paper at ICLR 20171-step 1-step 1-step 1-step 3-steps 3-steps 3-steps 3-stepsLBACBA Soft Discrete LBACBA Soft DiscreteTask LSTM MemN2N DMN+ NTM NTM D-NTM D-NTM NTM NTM D-NTM D-NTM1 0.00 0.00 0.00 16.30 16.88 5.41 6.66 0.00 0.00 0.00 0.002 81.90 0.30 0.30 57.08 55.70 58.54 56.04 61.67 59.38 46.66 62.293 83.10 2.10 1.10 74.16 55.00 74.58 72.08 83.54 65.21 47.08 41.454 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.005 1.20 0.80 0.50 1.46 20.41 1.66 1.04 0.83 1.46 1.25 1.456 51.80 0.10 0.00 23.33 21.04 40.20 44.79 48.13 54.80 20.62 11.047 24.90 2.00 2.40 21.67 21.67 19.16 19.58 7.92 37.70 7.29 5.628 34.10 0.90 0.00 25.76 21.05 12.58 18.46 25.38 8.82 11.02 0.749 20.20 0.30 0.00 24.79 24.17 36.66 34.37 37.80 0.00 39.37 32.5010 30.10 0.00 0.00 41.46 33.13 52.29 50.83 56.25 23.75 20.00 20.8311 10.30 0.10 0.00 18.96 31.88 31.45 4.16 3.96 0.28 30.62 16.8712 23.40 0.00 0.00 25.83 30.00 7.70 6.66 28.75 23.75 5.41 4.5813 6.10 0.00 0.00 6.67 5.63 5.62 2.29 5.83 83.13 7.91 5.0014 81.00 0.10 0.20 58.54 59.17 60.00 63.75 61.88 57.71 58.12 60.2015 78.70 0.00 0.00 36.46 42.30 36.87 39.27 35.62 21.88 36.04 40.2616 51.90 51.80 45.30 71.15 71.15 49.16 51.35 46.15 50.00 46.04 45.4117 50.10 18.60 4.20 43.75 43.75 17.91 16.04 43.75 56.25 21.25 9.1618 6.80 5.30 2.10 3.96 47.50 3.95 3.54 47.50 47.50 6.87 1.6619 90.30 2.30 0.00 75.89 71.51 73.74 64.63 61.56 63.65 75.88 76.6620 2.10 0.00 0.00 1.25 0.00 2.70 3.12 0.40 0.00 3.33 0.00Avg.Err. 36.41 4.24 2.81 31.42 33.60 29.51 27.93 32.85 32.76 24.24 21.79Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples withthe GRU and feedforward controller. FF stands for the experiments that are conducted with feedforwardcontroller. Let us, note that LBArefers to NTM that uses both LBA and CBA. In this table, wecompare multi-step vs single-step addressing, original NTM with location based+content basedaddressing vs only content based addressing, and discrete vs continuous addressing on bAbI.Lists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al.,2015), where discrete addressing was found to generalize better in the task of image caption generation.In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attentionperforms worse than LSTM and D-NTM with continuous-attention. However, when the proposedcurriculum strategy from Sec. 4 is used, the average test error drops from 68.30 to 37.79.We empirically found training of the feedforward controller more difficult than that of the recurrentcontroller. We train our feedforward controller based models four times longer (in terms of the numberof updates) than the recurrent controller based ones in order to ensure that they are converged for mostof the tasks. On the other hand, the models trained with the GRU controller overfit on bAbI tasksvery quickly. 
When our results are compared to the variants of the memory network (Weston et al., 2015b), namely MemN2N and DMN+, we notice a significant performance gap. We attribute this gap to the difficulty in learning to manipulate and store a complex input.

          FF      FF        FF
          Soft    Discrete  Discrete*
Task      D-NTM   D-NTM     D-NTM
1         4.38    81.67     14.79
2         27.5    76.67     76.67
3         71.25   79.38     70.83
4         0.00    78.65     44.06
5         1.67    83.13     17.71
6         1.46    48.76     48.13
7         6.04    54.79     23.54
8         1.70    69.75     35.62
9         0.63    39.17     14.38
10        19.80   56.25     56.25
11        0.00    78.96     39.58
12        6.25    82.5      32.08
13        7.5     75.0      18.54
14        17.5    78.75     24.79
15        0.0     71.42     39.73
16        49.65   71.46     71.15
17        1.25    43.75     43.75
18        0.24    48.13     2.92
19        39.47   71.46     71.56
20        0.0     76.56     9.79
Avg.Err.  12.81   68.30     37.79

Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward (FF) controller. Discrete* denotes the discrete-attention model trained with the curriculum strategy of Sec. 4.

We also provide further experiments investigating different extensions of the D-NTM in the appendix.

7.2 SEQUENTIAL pMNIST

In the sequential MNIST task, the pixels of the MNIST digits are presented to the model in scan-line order, left to right and top to bottom (Le et al., 2015). At the end of the sequence of pixels, the model predicts the label of the digit. We evaluate the D-NTM on the variation of sequential MNIST in which the order of the pixels is randomly shuffled; we call this task permuted MNIST (pMNIST). The main purpose of this task for our paper is to measure the model's ability to deal with long-term dependencies. We report our results in Table 3 and observe improvements over the other models that we compare against. (Let us note that the current state of the art on this task is recurrent batch normalization with an LSTM (Cooijmans et al., 2016), at 95.6% accuracy; it is possible to use recurrent batch normalization in our model and potentially improve our results on this task as well.) In Table 3, "discrete addressing with MAB" refers to the D-NTM model trained with REINFORCE where the baseline is computed from moving averages of the reward, and "discrete addressing with IB" refers to the D-NTM trained with REINFORCE using an input-based baseline.

7.3 NTM TOY TASKS

We explore the possibility of using the D-NTM to solve algorithmic tasks such as the copy and associative recall tasks. We train our model on the same sequence lengths as in the experiments of (Graves et al., 2014) and report our results in Table 4. We find that the D-NTM using continuous attention can successfully learn both the Copy and the Associative Recall tasks.

In Table 4, we train our model on sequences of the same length as the experiments in (Graves et al., 2014) and test the model on sequences of the maximum length seen during training. We consider a model successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over sequences of the maximum length seen during training. We set the threshold to 0.02 because we empirically observe that models with higher validation costs generalize badly to longer sequences.
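The moving-average baseline (MAB) referenced in the pMNIST results above is defined in Sec. 4: the reward R(x) = log p(y|x) is centered and rescaled by running statistics before it scales the log-probabilities of the sampled addresses, with an entropy bonus for exploration. The sketch below is illustrative: the decay and regularization constants are placeholders, and in a real implementation the normalized reward would be treated as a constant with respect to the gradient.

```python
import numpy as np

class MovingAverageBaseline:
    """Running reward statistics for the MAB variant:
    R_hat = (R - b) / sqrt(sigma^2 + eps)."""
    def __init__(self, decay=0.9, eps=1e-4):
        self.mean, self.var = 0.0, 1.0
        self.decay, self.eps = decay, eps

    def normalize(self, reward):
        self.mean = self.decay * self.mean + (1 - self.decay) * reward
        self.var = self.decay * self.var + (1 - self.decay) * (reward - self.mean) ** 2
        return (reward - self.mean) / np.sqrt(self.var + self.eps)

def reinforce_cost(log_p_y, log_p_addresses, entropies, baseline, lam=1e-3):
    """Per-example surrogate cost: supervised NLL, the REINFORCE term for the
    sampled read/write/erase addresses, and a negative-entropy regularizer."""
    r_hat = baseline.normalize(log_p_y)          # R(x) = log p(y | x), normalized
    reinforce_term = -r_hat * sum(log_p_addresses)
    entropy_term = -lam * sum(entropies)
    return -log_p_y + reinforce_term + entropy_term

baseline = MovingAverageBaseline()
cost = reinforce_cost(log_p_y=-0.7,
                      log_p_addresses=[-1.2, -0.9],  # log-probs of sampled addresses
                      entropies=[1.1, 0.8],          # entropies of address distributions
                      baseline=baseline)
```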
The "D-NTM discrete" model in these tables is trained with REINFORCE using moving averages to estimate the baseline.

                                          Test Acc
D-NTM discrete MAB                        89.6
D-NTM discrete IB                         92.3
Soft D-NTM                                93.4
NTM                                       90.9
I-RNN (Le et al., 2015)                   82.0
Zoneout (Krueger et al., 2016)            93.1
LSTM (Krueger et al., 2016)               89.8
Unitary-RNN (Arjovsky et al., 2015)       91.4
Recurrent Dropout (Krueger et al., 2016)  92.5

Table 3: Sequential pMNIST.

                 Copy Task   Associative Recall
Soft D-NTM       Success     Success
D-NTM discrete   Success     Failure
NTM              Success     Success

Table 4: NTM Toy Tasks.

8 CONCLUSION AND FUTURE WORK

In this paper we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to perform highly nonlinear location-based addressing. This extension, which we refer to as the dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and different numbers of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model has been tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs better than the vanilla LSTM-RNN. Furthermore, the experiments revealed that discrete addressing works better than continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content.

Our experiments show that the NTM-based models can be weaker than other variants of memory networks which do not learn to write but instead have an explicit mechanism for storing incoming facts as they are. We conjecture that this is due to the difficulty in learning how to write, manipulate and delete the contents of memory. Despite this difficulty, we find the NTM-based approach, such as the proposed D-NTM, to be a better, future-proof approach, because it can scale to a much longer horizon (where it becomes impossible to explicitly store all the experiences).

On the pMNIST task, we show that our model can outperform other similar approaches proposed to deal with long-term dependencies. On the copy and associative recall tasks, we show that our model can solve the algorithmic problems that NTM-type models were proposed to solve.

The success of both the learnable addressing and the discrete addressing scheme suggests two future research directions. First, we should try both of these schemes in a wider array of memory-based models, as they are not specific to neural Turing machines. Second, the proposed D-NTM needs to be evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question-answering (Antol et al., 2015) and machine translation, in order to draw a more concrete conclusion.

| Hkg1A2IVx | Review | 6: Marginally above acceptance threshold | The authors proposed a dynamic neural Turing machine (D-NTM) model that overcomes the rigid location-based memory access used in the original NTM model. The paper has two main contributions: 1) introducing learnable addressing to the NTM, and 2) curriculum learning using hybrid discrete and continuous attention. The proposed model was empirically evaluated on the Facebook bAbI tasks and shows improvement over the original NTM.
Pros:
+ Comprehensive comparisons of feed-forward controllers vs. recurrent controllers
+ Encouraging results from curriculum learning on hybrid discrete and continuous attention
Cons:
- Very weak NTM baseline (due to some hyper-parameter engineering?) in Table 1: 31% error, compared to the 20% NTM error reported in Table 1 of (Graves et al., 2016, Hybrid computing using a neural network with dynamic external memory). In fact, the NTM baseline in (Graves et al., 2016) is better than the proposed D-NTM with the GRU controller. It may be worthwhile to reproduce their results using the hyper-parameter settings in their Table 2, which could potentially lead to better D-NTM performance.
- Section 3 of the paper is hard to follow. The overall clarity of the paper needs improvement. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkSmc8qll | ICLR.cc/2017/conference | 2017 | Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes | ["Caglar Gulcehre", "Sarath Chandar", "Kyunghyun Cho", "Yoshua Bengio"] | In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects for learning to read and write to a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU-controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We also provide further experimental results on sequential MNIST, associative recall and copy tasks. | ["Deep learning", "Natural language processing", "Reinforcement Learning"]
| HJpJEBZNl | interesting extension to NTM | 7: Good paper, accept | The paper extends the NTM by a trainable memory addressing scheme.
The paper also investigates both continuous/differentiable and discrete/non-differentiable addressing mechanisms.
Pros:
* Extension to NTM with trainable addressing.
* Experiments with discrete addressing.
* Experiments on bAbI QA tasks.
Cons:
* Big gap to MemN2N and DMN+ in performance.
* Code not available.
* There could be more experiments on other real-world tasks.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkSmc8qll | ICLR.cc/2017/conference | 2017 | Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes | ["Caglar Gulcehre", "Sarath Chandar", "Kyunghyun Cho", "Yoshua Bengio"] | In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects for learning to read and write to a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU-controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We also provide further experimental results on sequential MNIST, associative recall and copy tasks. | ["Deep learning", "Natural language processing", "Reinforcement Learning"]
In practice, this leads to easierlearning in the memory network, which in turn resulted in it being used more in real tasks (Bordes et al.,2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale,carefully-crafted tasks such as copy and associative recall. The NTM, however is more expressive,precisely because it can store and modify the internal state of the network as it processes an episode.The original NTM supports two modes of addressing (which can be used simultaneously.) They arecontent-based and location-based addressing. We notice that the location-based strategy is based onlinear addressing. The distance between each pair of consecutive memory cells is fixed to a constant.We address this limitation, in this paper, by introducing a learnable address vector for each memorycell of the NTM with least recently used memory addressing mechanism, and we call this variant adynamic neural Turing machine (D-NTM).We evaluate the proposed D-NTM on the full set of Facebook bAbI task (Weston et al., 2015b)using either continuous , differentiable attention or discrete , non-differentiable attention (Zaremba &Sutskever, 2015) as an addressing strategy. Our experiments reveal that it is possible to use the discrete,non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and GRUcontroller outperforms the one with the continuous attention. After we published our paper on arXiv, anew extension of NTM called DNC (Graves et al., 2016) has also provided results on bAbI task as well.1Under review as a conference paper at ICLR 2017We also provide results on sequential-MNIST and algorithmic tasks proposed by (Graves et al., 2014)in order to investigate the ability of our model when dealing with long-term dependencies.Our Contributions1. We propose a generalization of Neural Turing Machine called a dynamic neural Turing machine(D-NTM) which employs a learnable and location-based addressing.2.We demonstrate the application of neural Turing machines on a more natural and less toyish task:episodic question-answering besides the toy tasks. We provide detailed analysis of our model onthis task.3.We propose to use the discrete attention mechanism and empirically show that, it can outperformthe continuous attention based addressing for episodic QA task.4. We propose a curriculum strategy for our model with the feedforward controller and discreteattention that improves our results significantly.2 D YNAMIC NEURAL TURING MACHINEThe proposed dynamic neural Turing machine (D-NTM) extends the neural Turing machine (NTM,(Graves et al., 2014)) which has a modular design. The NTM consists of two main modules, a controllerand, a memory. The controller, which is often implemented as a recurrent neural network, issues acommand to the memory so as to read, write to and erase a subset of memory cells. Although thememory was originally envisioned as an integrated module, it is not necessary, and the memory maybe an external, black box (Zaremba & Sutskever, 2015).2.1 C ONTROLLERAt each time step t, the controller (1) receives an input value xt, (2) addresses and reads the memory andcreates the content vector t, (3) erases/writes a portion of the memory, (4) updates its own hidden stateht, and (5) outputs a value yt(if needed.) 
In this paper, we use both a gated recurrent unit (GRU, (Choet al., 2014)) and a feedforward-controller to implement the controller such that for a GRU controllerht=GRU(xt;ht1;t) (1)or for a feedforward-controllerht=(xt;t): (2)2.2 M EMORYWe use a rectangular matrix M2RN(dc+da)to denoteNmemory cells. Unlike the original NTM,we partition each memory cell vector into two parts:M= [A;C]:The first part A2RNdais a learnable address matrix, and the second C2RNdca content matrix.In other words, each memory cell miis nowmi= [ai;ci]:The address part aiis considered a model parameter that is updated during training. During inference,the address part is not overwritten by the controller and remains constant. On the other hand, thecontent part ciis both read and written by the controller both during training and inference. At thebeginning of each episode, the content part of the memory is refreshed to be an all-zero matrix, C0=0.This introduction of the learnable address portion for each memory cell allows the model to learnsophisticated location-based addressing strategies. A similar addressing mechanism is also exploredin (Reed & de Freitas, 2015) in the context of learning program traces.2.3 M EMORY ADDRESSINGMemory addressing in the D-NTM is equivalent to computing an N-dimensional address vector. The D-NTM computes three such vectors for respectively reading wt2RN, erasing et2Rdcand writing ut2RN. Specifically for writing, the controller further computes a candidate memory content vector ct22Under review as a conference paper at ICLR 2017Address 1ContentAddress 2ContentAddress 3ContentAddress 4ContentAddress 5ContentAddress 6ContentAddress 7ContentControllerMemoryContentReaderWriterStoryFact t-1Fact tQuestionAnswer.........Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with therecurrent-controller. The controller receives the fact as a continuous vector encoded by a recurrentneural network, computes the read and write weights for addressing the memory. If the D-NTMautomatically detects that a query has been received, it returns an answer and terminates.Rdcbased on its current hidden state of the controller ht2Rdhand the input of the controller scaled witha scalar gate twhich is a function of the hidden state and the input of the controller as well, see Eqn 4.t=f(ht;xt); (3)ct=ReLU (Wmht+tWxxt+bm): (4)Reading With the read vector wt, the content vector read from the memory t2Rda+dcis retrievedbyt= (wt)>Mt1; (5)where wtis a row vector.Erasing and Writing Given the erase, write and candidate memory content vectors ( et,utj, andctrespectively) generated by a simple MLP conditioned on the hidden state of the controller ht, thememory matrix is updated by,Ct[j] = (1etutj)Ct1[j] +utjct: (6)where the subscript jinCt[j]denotes thej-th row of the content part Ctof the memory matrix Mt.No Operation (NOP) As found in (Joulin & Mikolov, 2015), an additional NOP action might bebeneficial for the controller notto access the memory once in a while. We model this situation bydesignating one memory cell as a NOP cell. Reading or writing from this memory cell is ignored.2.4 L EARNINGOnce the proposed D-NTM is executed, it returns the output distribution p(yjx1;:::;xT). As a result,we define a cost function as the negative log-likelihood:C() =1NNXn=1logp(ynjxn1;:::;xnT); (7)whereis a set of all the parameters. 
As the proposed D-NTM, just like the original NTM, is fullyend-to-end differentiable, we can compute the gradient of this cost function by using backpropagationand learn the parameters of the model with a gradient-based optimization algorithm, such as stochasticgradient descent, to train it end-to-end.3Under review as a conference paper at ICLR 20173 A DDRESSING MECHANISM3.1 A DDRESS VECTORSEach of the address vectors (both read and write) is computed in the same way. The way they arecomputed are very similar to the content based addressing in (Graves et al., 2014). First, the controllercomputes a key vector:kt=W>kht+bk;where Wk2RN(da+dc)andbk2Rda+dcif the read head is being computed, otherwiseWk2RNdcandbk2Rdcif the write head weights are being computed. They can be the parametersfor a specific head (either read or write.) Also, the sharpening factor t2R1is computed as:softplus (x) =log(exp(x) + 1) (8)t=softplus (u>ht+b) + 1: (9)uandbare the parameters of the sharpening t.The address vector is then computed by,zti=tSkt;mti(10)wti=exp(zti)Pjexp(ztj); (11)where the similarity function S2R0is defined asS(x;y) =xy(jjxjjjjyjj+):3.2 M ULTI -STEP ADDRESSINGAt each time-step, controller may require more than one-step for accessing to the memory. The originalNTM addresses this by implementing multiple sets of read, erase and write heads. In this paper, weexplore an option of allowing each head to operate more than once at each time step, similar to themulti-hop mechanism from the end-to-end memory network (Sukhbaatar et al., 2015).3.3 D YNAMIC LEAST RECENTLY USED ADDRESSINGWe introduce a memory addressing schema that can learn to put more emphasis on the least recentlyused (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we find it easierto learn the write operations with the use of LRU addressing.To learn a LRU based addressing, first we compute the exponentially moving averages of the logits ( zt)asvt,vt= 0:1vt1+ 0:9zt. We rescale the accumulated vtwitht, such that the controller adjuststhe influence of how much previously written memory locations should effect the attention weightsof a particular time-step. Next, we subtract vtfromztin order to reduce the weights of previouslyread or written memory locations. tis a shallow MLP with a scalar output and it is conditioned onthe hidden state of the controller. tis parametrized with the parameters uandb,t=sigmoid (u>ht+b); (12)wt=softmax (zttvt1): (13)This addressing method increases the weights of the least recently used rows of the memory. Themagnitude of the influence of the least-recently used memory locations is being learned and adjustedwitht. Our LRU addressing is dynamic due to the model’s ability to switch between pure content-basedaddressing and LRU. During the training, we do not backpropagate through vt. Due to the dynamicnature of this addressing mechanism, it can be used for both read and write operations. If needed,the model will automatically learn to disable LRU while reading from the memory.4 G ENERATING DISCRETE ADDRESS VECTORSIn this section, we describe the discrete attention based addressing strategy.4Under review as a conference paper at ICLR 2017Discrete Addressing Let us use wto denote an address vector (either read, write or erase) at timet. By definition in Eq. (10), every element in this address vector is positive and sums up to one. Inother words, we can treat this vector as the probabilities of a categorical distribution C(w)withdim(w)choices:p(j) =wj;wherewjis thej-th element of w. 
We can readily sample from this categorical distribution and forman one-hot vector ~wsuch that~wk=I(k=j);wherejC(w), andIis an indicator function.Training We use this sampling-based strategy for all the heads during training. This clearly makesthe use of backpropagation infeasible to compute the gradient, as the sampling procedure is notdifferentiable. Thus, we use REINFORCE (Williams, 1992) together with the three variance reductiontechniques–global baseline, input-dependent baseline and variance normalization– suggested in (Mnih& Gregor, 2014).Let us define R(x) = logp(yjx1;:::;xT)as a reward. We first center and re-scale the reward by~R(x) =R(x)bp2+;wherebandis running average and standard deviation of R. We can further center it for each inputxseparately, i.e.,~R(x) ~R(x)b(x);whereb(x)is computed by a baseline network which takes as input xand predicts its estimated reward.The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward~R(x)and the predicted reward b(x). We use the Huber loss, which is defined byH(x) =x2forjxj;(2jxj);otherwise,due to its robustness. As a further measure to reduce the variance, we regularize the negative entropyof all those category distributions to facilitate a better exploration during training (Xu et al., 2015).Then, the cost function for each training example is approximated asCn() =logp(yjx1:T;~w1:J;~u1:J;~e1:J)JXj=1~R(xn)(logp( ~wjjx1:T) + logp(~ujjx1:T) + logp(~ejjx1:T))HJXj=1(H(wjjx1:T) +H(ujjx1:T) +H(ejjx1:T)):whereJis the number of addressing steps, His the entropy regularization coefficient, and Hdenotesthe entropy.Inference Once training is over, we switch to a deterministic strategy. We simply choose an elementofwwith the largest value to be the index of the target memory cell, such that~wk=I(k=argmax (w)):Curriculum Learning for the Discrete Attention Training discrete attention with feed-forwardcontroller and REINFORCE is challenging. We propose to use a curriculum strategy for trainingwith the discrete attention in order to tackle this problem. For each minibatch, we sample from abinomial distribution with the probability pt,tBin(pt). The model will either use the discreteor the continuous-attention based on the t. We start the training procedure with p0= 1and duringthe trainingptis annealed to 0by settingpt=p0p1+t.We can rewrite the weights wtas in Equation 14, where it is expressed as the combination of continuousattention weights wtand discrete attention weights ~wtwithtbeing a binary variable that choosesto use one of them,wt twt+ (1t)~wt: (14)5Under review as a conference paper at ICLR 2017By using this curriculum learning strategy, at the beginning of the training, the model learns to usethe memory mainly with the continuous attention. As we anneal the pt, the model will rely more onthe discrete attention.5 R EGULARIZING DYNAMIC NEURAL TURING MACHINESWhen the controller of D-NTM is a powerful recurrent neural network, it is important to regularizetraining of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memoryand works as a simple recurrent neural network.Read-Write Consistency Regularizer One such suboptimal solution we have observed in ourpreliminary experiments with the proposed D-NTM is that the D-NTM uses the address part Aofthe memory matrix simply as an additional weight matrix, rather than as a means to accessing thecontent part C. 
5 REGULARIZING DYNAMIC NEURAL TURING MACHINES

When the controller of the D-NTM is a powerful recurrent neural network, it is important to regularize training of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memory and works as a simple recurrent neural network.

Read-Write Consistency Regularizer One such suboptimal solution we have observed in our preliminary experiments with the proposed D-NTM is that the D-NTM uses the address part A of the memory matrix simply as an additional weight matrix, rather than as a means of accessing the content part C. We found that this pathological case can be effectively avoided by encouraging the read head to point to a memory cell which has also been pointed to by the write head. This can be implemented as the following regularization term:

R_rw(w, u) = sum_{t'=1}^{T} || 1 - ((1/t') sum_{t=1}^{t'} u_t)^T w_{t'} ||_2^2   (15)

where u_t are the write weights and w_t are the read weights.

Next Input Prediction as Regularization Temporal structure is a strong signal that should be exploited by a controller based on a recurrent neural network. We exploit this structure by letting the controller predict the next input, maximizing the predictability of the next input by the controller during training. This is equivalent to minimizing the following regularizer:

R_pred(W) = -log p(f_{t+1} | f_t, w_t, u_t, M_t; W),

where f_t is the current input and f_{t+1} is the input at the next timestep. We found this regularizer to be effective in our preliminary experiments and use it for the bAbI tasks.
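Eq. (15) admits a direct NumPy transcription, assuming the full sequences of write weights u and read weights w (each of shape T x number-of-cells) have been collected over an episode:

```python
import numpy as np

def read_write_consistency(u, w):
    """R_rw(w, u) = sum_{t'} || 1 - ((1/t') sum_{t<=t'} u_t)^T w_{t'} ||^2_2."""
    T = u.shape[0]
    cost = 0.0
    for tp in range(1, T + 1):
        mean_writes = u[:tp].mean(axis=0)   # (1/t') * sum of write weights so far
        overlap = mean_writes @ w[tp - 1]   # scalar agreement between reads and past writes
        cost += (1.0 - overlap) ** 2        # penalize reads from never-written cells
    return cost

# Toy check: reading exactly where we wrote still incurs a small cost,
# because the running average smears the write weights over time.
u = np.eye(3)
print(read_write_consistency(u, u.copy()))
```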
6 RELATED WORK

A recurrent neural network (RNN), which is used as a controller in the proposed D-NTM, has an implicit memory in the form of recurring hidden states. Even with this implicit memory, a vanilla RNN is known to have difficulties storing information over long time-spans (Bengio et al., 1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter & Schmidhuber, 1997)) and gated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However, all models based solely on RNNs have been found to be limited when used to solve, e.g., algorithmic tasks and episodic question-answering.

In addition to the finite random access memory of the neural Turing machine, on which the D-NTM is based, other data structures have been proposed as external memory for neural networks. In (Sun et al., 1997; Grefenstette et al., 2015; Joulin & Mikolov, 2015), a continuous, differentiable stack was proposed. In (Zaremba et al., 2015; Zaremba & Sutskever, 2015), grid and tape storages are used. These approaches differ from the NTM in that their memory is unbounded and can grow indefinitely; on the other hand, they are often not randomly accessible.

Memory networks (Weston et al., 2015b) form another family of neural networks with external memory. In this class of neural networks, information is stored explicitly as it is (in the form of its continuous representation) in the memory, without being erased or modified during an episode. Memory networks and their variants have been applied to various tasks successfully (Sukhbaatar et al., 2015; Bordes et al., 2015; Dodge et al., 2015; Xiong et al., 2016). Miller et al. (2016) have also independently proposed the idea of having separate key and value vectors for memory networks.

Another related family of models is the attention-based neural networks. Neural networks with continuous or discrete attention over an input have shown promising results on a variety of challenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) and image caption generation (Xu et al., 2015).

The latter two, memory networks and attention-based networks, are however clearly distinguishable from the D-NTM by the fact that they do not modify the content of the memory.

7 EXPERIMENTS

We provide experimental results to demonstrate the abilities of our model, first on the Facebook bAbI task (Weston et al., 2015a), for which we give a detailed analysis, and we compare different variations of the NTM on the bAbI tasks. We have also performed experiments on sequential permuted MNIST (Le et al., 2015) and on toy tasks, comparing against other published models on these tasks with a recurrent controller. The details of our experiments are provided in the supplementary material.

7.1 EPISODIC QUESTION-ANSWERING: BABI TASKS

In this section, we evaluate the proposed D-NTM on the recently proposed episodic question-answering task called Facebook bAbI. We use the dataset with 10k training examples per sub-task provided by Facebook.[1] For each episode, the D-NTM reads a sequence of factual sentences followed by a question, all of which are given as natural language sentences. The D-NTM is expected to store and retrieve relevant information in the memory in order to answer the question based on the presented facts. Exact implementation details and hyper-parameter settings are provided in the appendix.

[1] https://research.facebook.com/researchers/1543934539189348

7.1.1 GOALS

The goal of this experiment is three-fold. First, we present for the first time the performance of a memory-based network that can both read and write dynamically on the Facebook bAbI tasks.[2] We aim to understand whether a model that has to learn to write an incoming fact to the memory, rather than storing it as it is, is able to work well; to do so, we compare both the original NTM and the proposed D-NTM against an LSTM-RNN.

[2] Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results for the bAbI tasks were already available on arXiv by that time.

Second, we investigate the effect of having to learn how to write. The fact that the NTM needs to learn to write likely has an adverse effect on overall performance when compared to, for instance, end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and the dynamic memory network (DMN+, (Xiong et al., 2016)), both of which simply store the incoming facts as they are. We quantify this effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme.

We further explore the effect of using a feedforward controller instead of the GRU controller. In addition to the explicit memory, the GRU controller can use its own internal hidden state as memory. The feedforward controller, on the other hand, must rely solely on the explicit memory, as it is the only memory available.

7.1.2 RESULTS AND ANALYSIS

In Table 1, we first observe that the NTMs are indeed capable of solving this type of episodic question-answering better than the vanilla LSTM-RNN. Although the availability of explicit memory in the NTM already suggested this result, we note that this is the first time neural Turing machines have been used on this specific task.

All the variants of the NTM with the GRU controller outperform the vanilla LSTM-RNN. However, not all of them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRU controller outperforms the original NTM with the GRU controller (NTM, CBA-only NTM vs. continuous D-NTM, discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allows the controller to access the memory slots by location in a potentially nonlinear way. We expect it to help with tasks that have non-trivial access patterns, and as anticipated, we see a large gain with the D-NTM over the original NTM in tasks such as 12 - Conjunction and 17 - Positional Reasoning.

Among the recurrent variants of the proposed D-NTM, we notice significant improvements from using discrete addressing rather than continuous addressing.
We conjecture that this is because certain types of tasks require precise/sharp retrieval of a stored fact, in which case continuous addressing is at a disadvantage compared to discrete addressing. This is evident from the observation that the D-NTM with discrete addressing significantly outperforms the one with continuous addressing in tasks such as 8 - Lists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al., 2015), where discrete addressing was found to generalize better in the task of image caption generation.

                                    1-step                                 3-steps
Task      LSTM   MemN2N  DMN+   LBA*   CBA    Soft   Discrete   LBA*   CBA    Soft   Discrete
                                NTM    NTM    D-NTM  D-NTM      NTM    NTM    D-NTM  D-NTM
1         0.00   0.00    0.00   16.30  16.88  5.41   6.66       0.00   0.00   0.00   0.00
2         81.90  0.30    0.30   57.08  55.70  58.54  56.04      61.67  59.38  46.66  62.29
3         83.10  2.10    1.10   74.16  55.00  74.58  72.08      83.54  65.21  47.08  41.45
4         0.20   0.00    0.00   0.00   0.00   0.00   0.00       0.00   0.00   0.00   0.00
5         1.20   0.80    0.50   1.46   20.41  1.66   1.04       0.83   1.46   1.25   1.45
6         51.80  0.10    0.00   23.33  21.04  40.20  44.79      48.13  54.80  20.62  11.04
7         24.90  2.00    2.40   21.67  21.67  19.16  19.58      7.92   37.70  7.29   5.62
8         34.10  0.90    0.00   25.76  21.05  12.58  18.46      25.38  8.82   11.02  0.74
9         20.20  0.30    0.00   24.79  24.17  36.66  34.37      37.80  0.00   39.37  32.50
10        30.10  0.00    0.00   41.46  33.13  52.29  50.83      56.25  23.75  20.00  20.83
11        10.30  0.10    0.00   18.96  31.88  31.45  4.16       3.96   0.28   30.62  16.87
12        23.40  0.00    0.00   25.83  30.00  7.70   6.66       28.75  23.75  5.41   4.58
13        6.10   0.00    0.00   6.67   5.63   5.62   2.29       5.83   83.13  7.91   5.00
14        81.00  0.10    0.20   58.54  59.17  60.00  63.75      61.88  57.71  58.12  60.20
15        78.70  0.00    0.00   36.46  42.30  36.87  39.27      35.62  21.88  36.04  40.26
16        51.90  51.80   45.30  71.15  71.15  49.16  51.35      46.15  50.00  46.04  45.41
17        50.10  18.60   4.20   43.75  43.75  17.91  16.04      43.75  56.25  21.25  9.16
18        6.80   5.30    2.10   3.96   47.50  3.95   3.54       47.50  47.50  6.87   1.66
19        90.30  2.30    0.00   75.89  71.51  73.74  64.63      61.56  63.65  75.88  76.66
20        2.10   0.00    0.00   1.25   0.00   2.70   3.12       0.40   0.00   3.33   0.00
Avg. Err. 36.41  4.24    2.81   31.42  33.60  29.51  27.93      32.85  32.76  24.24  21.79

Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU and feedforward controllers. FF stands for experiments conducted with the feedforward controller. Let us note that LBA* refers to the NTM that uses both LBA and CBA. In this table, we compare multi-step vs. single-step addressing, the original NTM with location-based + content-based addressing vs. content-based addressing only, and discrete vs. continuous addressing on bAbI.

In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attention performs worse than the LSTM and the D-NTM with continuous attention. However, when the proposed curriculum strategy from Sec. 4 is used, the average test error drops from 68.30 to 37.79.

We empirically found training the feedforward controller more difficult than training the recurrent controller. We train our feedforward controller based models four times longer (in terms of the number of updates) than the recurrent controller based ones in order to ensure that they have converged for most of the tasks. On the other hand, the models trained with the GRU controller overfit on the bAbI tasks very quickly.
For example, on tasks 3 and 16 the feedforward controller based model underfits (i.e., has high training loss) at the end of training, whereas with the same number of units the model with the GRU controller can overfit on those tasks after only 3,000 updates.

When our results are compared to the variants of the memory network (Weston et al., 2015b) (MemN2N and DMN+), we notice a significant performance gap. We attribute this gap to the difficulty in learning to manipulate and store a complex input.

Task      Soft D-NTM (FF)   Discrete D-NTM (FF)   Discrete D-NTM (FF, with the curriculum of Sec. 4)
1         4.38              81.67                 14.79
2         27.5              76.67                 76.67
3         71.25             79.38                 70.83
4         0.00              78.65                 44.06
5         1.67              83.13                 17.71
6         1.46              48.76                 48.13
7         6.04              54.79                 23.54
8         1.70              69.75                 35.62
9         0.63              39.17                 14.38
10        19.80             56.25                 56.25
11        0.00              78.96                 39.58
12        6.25              82.5                  32.08
13        7.5               75.0                  18.54
14        17.5              78.75                 24.79
15        0.0               71.42                 39.73
16        49.65             71.46                 71.15
17        1.25              43.75                 43.75
18        0.24              48.13                 2.92
19        39.47             71.46                 71.56
20        0.0               76.56                 9.79
Avg. Err. 12.81             68.30                 37.79

Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller.

We also provide further experiments investigating different extensions of the D-NTM in the appendix.

7.2 SEQUENTIAL pMNIST

In the sequential MNIST task, the pixels of the MNIST digits are provided to the model in scan-line order, left to right and top to bottom (Le et al., 2015); at the end of the sequence of pixels, the model predicts the label of the digit. We experiment with the D-NTM on the variation of sequential MNIST where the order of the pixels is randomly shuffled; we call this task permuted MNIST (pMNIST). An important contribution of this task to our paper, in particular, is to measure the model's ability to perform well when dealing with long-term dependencies. We report our results in Table 3[3] and observe improvements over the other models we compare against. In Table 3, "discrete addressing with MAB" refers to the D-NTM model using REINFORCE with the baseline computed from moving averages of the reward, and "discrete addressing with IB" refers to the D-NTM using REINFORCE with an input-based baseline.

[3] Let us note that the current state of the art on this task is recurrent batch normalization with LSTM (Cooijmans et al., 2016), at 95.6% accuracy. It is possible to use recurrent batch normalization in our model and potentially improve our results on this task as well.
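As a concrete illustration of the task setup, the sketch below builds a permuted-MNIST sequence: each 28x28 digit is flattened in scan-line order and a single fixed random permutation is applied to every example. The dummy image and the seed are, of course, illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
PERM = rng.permutation(28 * 28)      # one fixed permutation shared by the whole dataset

def to_pmnist_sequence(image):
    """Flatten left-to-right, top-to-bottom, then shuffle with the fixed permutation."""
    return image.reshape(-1)[PERM]   # the model reads one pixel per time-step

dummy_digit = rng.random((28, 28))
sequence = to_pmnist_sequence(dummy_digit)   # shape (784,); label predicted at the end
```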
7.3 NTM TOY TASKS

We explore the possibility of using the D-NTM to solve algorithmic tasks such as the copy and associative recall tasks. We train our model on the same sequence lengths as in (Graves et al., 2014). We report our results in Table 4 and find that the D-NTM using continuous attention can successfully learn the "Copy" and "Associative Recall" tasks.

In Table 4, we train our model on sequences of the same length as in the experiments of (Graves et al., 2014) and test the model on sequences of the maximum length seen during training. We consider a model to be successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over the sequences of maximum length seen during training. We set the threshold to 0.02 because, empirically, we observe that models with higher validation costs generalize badly over longer sequences. The "D-NTM discrete" model in this table is trained with REINFORCE using moving averages to estimate the baseline.

Model                                      Test Acc
D-NTM discrete MAB                         89.6
D-NTM discrete IB                          92.3
Soft D-NTM                                 93.4
NTM                                        90.9
I-RNN (Le et al., 2015)                    82.0
Zoneout (Krueger et al., 2016)             93.1
LSTM (Krueger et al., 2016)                89.8
Unitary-RNN (Arjovsky et al., 2015)        91.4
Recurrent Dropout (Krueger et al., 2016)   92.5

Table 3: Sequential pMNIST.

Model            Copy Task   Associative Recall
Soft D-NTM       Success     Success
D-NTM discrete   Success     Failure
NTM              Success     Success

Table 4: NTM Toy Tasks.

8 CONCLUSION AND FUTURE WORK

In this paper we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to perform highly nonlinear location-based addressing. This extension, which we call the dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and different numbers of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model has been tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs better than a vanilla LSTM-RNN. Furthermore, the experiments revealed that discrete addressing works better than continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content.

Our experiments show that NTM-based models can be weaker than other variants of memory networks which do not learn to write but have an explicit mechanism for storing incoming facts as they are. We conjecture that this is due to the difficulty of learning how to write, manipulate and delete the content of memory. Despite this difficulty, we find NTM-based approaches such as the proposed D-NTM to be a better, future-proof approach, because they can scale to a much longer horizon (where it becomes impossible to explicitly store all the experiences).

On pMNIST, we show that our model can outperform other similar approaches proposed to deal with long-term dependencies. On the copy and associative recall tasks, we show that our model can solve the algorithmic problems that NTM-type models were proposed to solve.

The success of both the learnable addressing and the discrete addressing scheme suggests two future research directions. First, we should try both of these schemes in a wider array of memory-based models, as they are not specific to neural Turing machines. Second, the proposed D-NTM needs to be evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question-answering (Antol et al., 2015) and machine translation, in order to draw a more concrete conclusion. | rJWPUiWNl | Review | 4: Ok but not good enough - rejection | This paper introduces a variant of the neural Turing machine (NTM, Graves et al. 2014) in which keys and values are stored. They try both continuous and discrete mechanisms to control the memory.
The model is quite complicated and seems to require a lot of tricks to work. Overall, it seems that more than 10 different terms appear in the cost function, and many different hacks are required to learn the model. It is hard to understand the justification for all of these tricks and sophisticated choices. There is no code available, nor any plan to release it (afaik).
The model is evaluated on a set of toy problems (the "bAbI" tasks) and achieves performance that is only slightly above that of a vanilla LSTM, but much worse than the various memory-augmented models proposed in the last few years.
In terms of writing, the description of the model is quite hard to follow, describing the different blocks, optimization tricks and regularization independently. The equations are hard to read, using non-standard notation (e.g., "softplus"), overloading notation (w_t, b, ...), or writing similar equations in different ways (for example, eqs. (8-9) compared to (10-11): why are two equations in scalar form and the others in vector form? Why is there an arrow instead of an equals sign? ...).
Overall, it is very hard to put together all the pieces of this model(s); there is no code available, and I'm afraid there are not enough details to be able to reproduce their numbers. Finally, the performance on the bAbI tasks is quite poor compared to other memory-augmented models.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hk8N3Sclg | ICLR.cc/2017/conference | 2017 | Multi-Agent Cooperation and the Emergence of (Natural) Language | ["Angeliki Lazaridou", "Alexander Peysakhovich", "Marco Baroni"] | The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the "word meanings" induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents' code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans.
| ["Natural language processing", "Reinforcement Learning", "Games"] | ABSTRACT

The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the "word meanings" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.

1 INTRODUCTION

I tried to break it to him gently [...] the only way to learn an unknown language is to interact with a native speaker [...] asking questions, holding a conversation, that sort of thing [...] If you want to learn the aliens' language, someone [...] will have to talk with an alien. Recordings alone aren't sufficient.
Ted Chiang, Story of Your Life

One of the main aims of AI is to develop agents that can cooperate with others to achieve goals (Wooldridge, 2009). Such coordination requires communication. If the coordination partners are to include humans, the most obvious channel of communication is natural language. Thus, handling natural-language-based communication is a key step toward the development of AI that can thrive in a world populated by other agents.

Given the success of deep learning models in related domains such as image captioning or machine translation (e.g., Sutskever et al., 2014; Xu et al., 2015), it would seem reasonable to cast the problem of training conversational agents as an instance of supervised learning (Vinyals & Le, 2015). However, training on "canned" conversations does not allow learners to experience the interactive aspects of communication. Supervised approaches, which focus on the structure of language, are an excellent way to learn general statistical associations between sequences of symbols. However, they do not capture the functional aspects of communication, i.e., that humans use words to coordinate with others and make things happen (Austin, 1962; Clark, 1996; Wittgenstein, 1953).

This paper introduces the first steps of a research program based on multi-agent coordination communication games. These games place agents in simple environments where they need to develop a language to coordinate and earn payoffs.
Importantly, the agents start as blank slates, but, by playing a game together, they can develop and bootstrap knowledge on top of each other, leading to the emergence of a language. (* Work done while at Facebook AI Research.)

The central problem of our program, then, is the following: How do we design environments that foster the development of a language that is portable to new situations and to new communication partners (in particular humans)?

We start from the most basic challenge of using a language in order to refer to things in the context of a two-agent game. We focus on two questions: first, whether tabula rasa agents succeed in communication; second, what features of the environment lead to the development of codes resembling human language.

We assess this latter question in two ways. First, we consider whether the agents associate general conceptual properties, such as broad object categories (as opposed to low-level visual properties), with the symbols they learn to use. Second, we examine whether the agents' "word usage" is partially interpretable by humans in an online experiment.

Other researchers have proposed communication-based environments for the development of coordination-capable AI. Work in multi-agent systems has focused on the design of pre-programmed communication systems to solve specific tasks (e.g., robot soccer, Stone & Veloso 1998). Most related to our work, Sukhbaatar et al. (2016) and Foerster et al. (2016) show that neural networks can evolve communication in the context of games without a pre-coded protocol. We pursue the same question, but further ask how we can change our environment to make the emergent language more interpretable.

Others (e.g., the SHRDLU program of Winograd 1971 or the game in Wang et al. 2016) propose building a communicating AI by putting humans in the loop from the very beginning. This approach has benefits but faces serious scalability issues, as active human intervention is required at each step. An attractive component of our game-based paradigm is that humans may be added as players, but do not need to be there all the time.

A third branch of research focuses on "Wizard-of-Oz" environments, where agents learn to play games by interacting with a complex scripted environment (Mikolov et al., 2015). This approach gives the designer tight control over the learning curriculum, but imposes a heavy engineering burden on developers. We also stress the importance of the environment (game setup), but we focus on simpler environments with multiple agents that force them to get smarter by bootstrapping on top of each other.

We leverage ideas from work in linguistics, cognitive science and game theory on the emergence of language (Wagner et al., 2003; Skyrms, 2010; Crawford & Sobel, 1982; Crawford, 1998). Our game is a variation of Lewis' signaling game (Lewis, 1969). There is a rich tradition of linguistic and cognitive studies using similar setups (e.g., Briscoe, 2002; Cangelosi & Parisi, 2002; Spike et al., 2016; Steels & Loetzsch, 2012). What distinguishes us from this literature is our aim to, eventually, develop practical AI.
This motivates our focus on more realistic input data (a large collection of noisy natural images) and on trying to align the agents' language with human intuitions.

Lewis' classic games have been studied extensively in game theory under the name of "cheap talk". These games have been used as models to study the evolution of language both theoretically and experimentally (Crawford, 1998; Blume et al., 1998; Crawford & Sobel, 1982). A major question in game theory is whether equilibrium actually occurs in a game, as convergence in learning is not guaranteed (Fudenberg & Peysakhovich, 2014; Roth & Erev, 1995), and, if an equilibrium is reached, which one it will be (since equilibria are typically not unique). This is particularly true for cheap talk games, which exhibit Nash equilibria in which precise language emerges, others where vague language emerges, and others where no language emerges at all (Crawford & Sobel, 1982). In addition, because in these games language has no ex-ante meaning and only emerges in the context of the equilibrium, some of the emergent languages may not be very natural. Our results speak both to the convergence question and to the question of what features of the game cause the appearance of different types of languages. Thus, our results are also of interest to game theorists.

An evolutionary perspective has recently been advocated as a way to mitigate the data hunger of traditional supervised approaches (Goodfellow et al., 2014; Silver et al., 2016). This research confirms that learning can be bootstrapped from competition between agents. We focus, however, on cooperation between agents as a way to foster learning while reducing the need for annotated data.

2 GENERAL FRAMEWORK

Our general framework includes K players, each parametrized by theta_k, a collection of tasks/games that the players have to perform, a communication protocol V that enables the players to communicate with each other, and payoffs assigned to the players as a deterministic function of a well-defined goal. In this paper we focus on a particular version of this: referential games. These games are structured as follows (a minimal code sketch of one round is given below).

1. There is a set of images represented by vectors {i_1, ..., i_N}; two images are drawn at random from this set, call them (i_L, i_R), and one of them is chosen to be the "target" t in {L, R}.
2. There are two players, a sender and a receiver, each seeing the images: the sender receives input theta_S(i_L, i_R, t).
3. There is a vocabulary V of size K, and the sender chooses one symbol to send to the receiver; we call this the sender's policy s(theta_S(i_L, i_R, t)) in V.
4. The receiver does not know the target, but sees the sender's symbol and tries to guess the target image. We call this the receiver's policy r(i_L, i_R, s(theta_S(i_L, i_R, t))) in {L, R}.
5. If r(i_L, i_R, s(theta_S(i_L, i_R, t))) = t, that is, if the receiver guesses the target, both players receive a payoff of 1 (win); otherwise they receive a payoff of 0 (lose).

Many extensions to the basic referential game explored here are possible. There can be more images, or a more sophisticated communication protocol (e.g., communication of a sequence of symbols or multi-step communication requiring back-and-forth interaction[1]), rotation of the sender and receiver roles, having a human occasionally playing one of the roles, etc.

[1] For example, Jorge et al. (2016) explore agents playing a "Guess Who" game to learn about the emergence of question-asking and answering in language.
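As referenced above, here is a minimal sketch of one game round. The two policy functions are placeholders for the trained sender and receiver networks described in the next section; everything else (random pairing, random target, shuffled presentation to the receiver, shared 0/1 payoff) follows steps 1-5.

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(images, sender_policy, receiver_policy):
    """One referential game round; returns the shared payoff (1 = win, 0 = lose)."""
    a, b = rng.choice(len(images), size=2, replace=False)    # draw two distinct images
    target, distractor = a, b                                # one is chosen as target
    symbol = sender_policy(images[target], images[distractor])   # target given first
    order = [target, distractor]
    rng.shuffle(order)                                       # receiver sees random order
    choice = receiver_policy(images[order[0]], images[order[1]], symbol)  # 0 or 1
    return 1.0 if order[choice] == target else 0.0
```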
3 EXPERIMENTAL SETUP

Images We use McRae et al.'s (2005) set of 463 base-level concrete concepts (e.g., cat, apple, car, ...) spanning 20 general categories (e.g., animal, fruit/vegetable, vehicle, ...). We randomly sample 100 images of each concept from ImageNet (Deng et al., 2009). To create target/distractor pairs, we randomly sample two concepts, one image for each concept, and whether the first or second image will serve as target. We apply to each image a forward pass through the pre-trained VGG ConvNet (Simonyan & Zisserman, 2014), and represent it with the activations from either the top 1000-D softmax layer (sm) or the second-to-last 4096-D fully connected layer (fc).

Agent Players Both sender and receiver are simple feed-forward networks. For the sender, we experiment with the two architectures depicted in Figure 1. Both sender architectures take as input the target (marked with a green square in Figure 1) and distractor representations, always in this order, so that they are implicitly informed of which image is the target (the receiver, instead, sees the two images in random order).

The agnostic sender is a generic neural network that maps the original image vectors onto a "game-specific" embedding space (in the sense that the embedding is learned while playing the game) followed by a sigmoid nonlinearity. Fully-connected weights are applied to the embedding concatenation to produce scores over vocabulary symbols.

The informed sender also first embeds the images into a "game-specific" space. It then applies 1-D convolutions ("filters") on the image embeddings by treating them as different channels. The informed sender uses convolutions with kernel size 2x1 applied dimension-by-dimension to the two image embeddings (in Figure 1, there are 4 such filters). This is followed by the sigmoid nonlinearity. The resulting feature maps are combined through another filter (kernel size fx1, where f is the number of filters on the image embeddings) to produce scores for the vocabulary symbols. Intuitively, the informed sender has an inductive bias towards combining the two images dimension-by-dimension, whereas the agnostic sender does not (though we note that the agnostic architecture nests the informed one).

Figure 1: Architectures of agent players (informed sender, agnostic sender, receiver).

For both senders, motivated by the discrete nature of language, we enforce a strong communication bottleneck that discretizes the communication protocol. Activations on the top (vocabulary) layer are converted to a Gibbs distribution (with temperature parameter tau), and then a single symbol s is sampled from the resulting probability distribution.

The receiver takes as input the target and distractor image vectors in random order, as well as the symbol produced by the sender (as a one-hot vector over the vocabulary). It embeds the images and the symbol into its own "game-specific" space. It then computes dot products between the symbol and image embeddings. Ideally, dot similarity should be higher for the image that is better denoted by the symbol. The two dot products are converted to a Gibbs distribution (with temperature tau) and the receiver "points" to an image by sampling from the resulting distribution.
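A minimal sketch of the two players' forward passes follows (the informed sender's convolutional combination is omitted for brevity). Random matrices stand in for learned parameters; the 50-dimensional embeddings and temperature 10 anticipate the training details below, and the agnostic sender shown here follows the embed / sigmoid / concatenate / score recipe just described.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, VOCAB, TEMP, FEAT = 50, 100, 10.0, 4096   # fc image features assumed

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(scores, temp=TEMP):
    """Convert scores to a Gibbs distribution and sample one index."""
    p = np.exp(scores / temp - np.max(scores / temp))
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Stand-in learned parameters.
W_embed = 0.01 * rng.normal(size=(DIM, FEAT))      # sender's game-specific embedding
W_vocab = 0.01 * rng.normal(size=(VOCAB, 2 * DIM)) # scores over vocabulary symbols
U_embed = 0.01 * rng.normal(size=(DIM, FEAT))      # receiver's own image embedding
E_sym = 0.01 * rng.normal(size=(VOCAB, DIM))       # receiver's symbol embeddings

def agnostic_sender(target_img, distractor_img):
    e = sigmoid(np.concatenate([W_embed @ target_img, W_embed @ distractor_img]))
    return gibbs_sample(W_vocab @ e)               # one discrete symbol from V

def receiver(img_a, img_b, symbol):
    s = E_sym[symbol]
    scores = np.array([(U_embed @ img_a) @ s, (U_embed @ img_b) @ s])
    return gibbs_sample(scores)                    # 0 points at img_a, 1 at img_b
```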
General Training Details We set the following hyperparameters without tuning: embedding dimensionality 50, number of filters applied to embeddings by the informed sender 20, temperature of the Gibbs distributions 10. We explore two vocabulary sizes: 10 and 100 symbols.

The sender and receiver parameters theta = (theta_R, theta_S) are learned while playing the game. No weights are shared, and the only supervision used is communication success, i.e., whether the receiver pointed at the right referent.

This setup is naturally modeled with Reinforcement Learning (Sutton & Barto, 1998). As outlined in Section 2, the sender follows policy s(theta_S(i_L, i_R, t)) in V and the receiver policy r(i_L, i_R, s(theta_S(i_L, i_R, t))) in {L, R}. The loss function that the two agents must minimize is -E_r~[R(r~)], where R is the reward function returning 1 iff r(i_L, i_R, s(theta_S(i_L, i_R, t))) = t. Parameters are updated through the Reinforce rule (Williams, 1992). We apply mini-batch updates, with a batch size of 32, for a total of 50k iterations (games). At test time, we compile a set of 10k games using the same method as for the training games.
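For a softmax policy, the Reinforce update has a simple closed form at the score level; the sketch below applies it to one sampled action, and the same rule applies to both the sender's symbol choice and the receiver's pointing choice. The learning rate and the bare score-level update are illustrative; in practice the error signal is backpropagated into the embedding weights.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_score_update(scores, action, reward, lr=0.01):
    """Minimize -E[R]: since d log pi(a)/dz = onehot(a) - pi for a softmax
    policy, the score-level update is  z <- z + lr * R * (onehot(a) - pi)."""
    pi = softmax(scores)
    grad_log = -pi
    grad_log[action] += 1.0
    return scores + lr * reward * grad_log
```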
We now turn to our main questions. The first is whether the agents can learn to successfully coordinate in a reasonable amount of time. The second is whether the agents' language can be thought of as "natural language", i.e., whether symbols are assigned to meanings that make intuitive sense in terms of our conceptualization of the world.

4 LEARNING TO COMMUNICATE

Our first question is whether agents converge to successful communication at all. We see that they do: agents almost perfectly coordinate in the 1k rounds following the 10k training games, for every architecture and parameter choice (Table 1).

We see, though, some differences between the sender architectures. Figure 2 (left) shows performance on a sample of the test set as a function of the first 5,000 rounds of training. The agents converge to coordination quite fast, but the informed sender reaches higher levels more quickly than the agnostic one.

Figure 2: Left: Communication success as a function of training iterations (agnostic and informed senders, 10 and 100 symbols); informed senders converge faster than agnostic ones. Right: Normalized spectrum of an example symbol usage matrix; the first few dimensions capture only partial variance, suggesting that the informed sender's use of more symbols is not just due to synonymy.

id  sender    vis rep  voc size  used symbols  comm success (%)  purity (%)  obs-chance purity (%)
1   informed  sm       100       58            100               46          27
2   informed  fc       100       38            100               41          23
3   informed  sm       10        10            100               35          18
4   informed  fc       10        10            100               32          17
5   agnostic  sm       100       2             99                21          15
6   agnostic  fc       10        2             99                21          15
7   agnostic  sm       10        2             99                20          15
8   agnostic  fc       100       2             99                19          15

Table 1: Playing the referential game: test results after 50K training games. The used symbols column reports the number of distinct vocabulary symbols that were produced at least once in the test phase. See text for the explanation of comm success and purity. All purity values are highly significant (p < 0.001) compared to simulated chance symbol assignment when matching observed symbol usage. The obs-chance purity column reports the difference between observed and expected purity under chance.

The informed sender makes use of more symbols from the available vocabulary, while the agnostic sender constantly uses a compact 2-symbol vocabulary. This suggests that the informed sender is using more varied and word-like symbols (recall that the images depict 463 distinct objects, so we would expect a natural-language-endowed sender to use a wider array of symbols to discriminate among them). However, it could also be the case that the informed sender's vocabulary simply contains higher redundancy/synonymy. To check this, we construct a (sampled) matrix where rows are game image pairs, columns are symbols, and entries represent how often that symbol is used for that pair. We then decompose the matrix through SVD. If the sender were indeed just using a strategy with few effective symbols but high synonymy, we should expect a 1- or 2-dimensional decomposition. Figure 2 (right) plots the normalized spectrum of this matrix. While there is some redundancy in the matrix (thus potentially implying there is synonymy in the usage), the language still requires multiple dimensions to summarize (cross-validated SVD suggests 50 dimensions).
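The synonymy check can be reproduced in a few lines; the Poisson counts below are a toy stand-in for the logged symbol-usage matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
usage = rng.poisson(1.0, (500, 100)).astype(float)  # rows: image pairs, cols: symbols

s = np.linalg.svd(usage, compute_uv=False)
spectrum = s / s.sum()                              # normalized spectrum, cf. Fig. 2 (right)
dims_for_90 = int(np.searchsorted(np.cumsum(spectrum), 0.9)) + 1
print(f"dimensions covering 90% of the spectrum: {dims_for_90}")
```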
We now turn to investigating the semantic properties of the emergent communication protocol. Recall that the vocabulary the agents use is arbitrary and has no initial meaning. One way to understand its emerging semantics is by looking at the relationship between symbols and the sets of images they refer to.

The objects in our images were categorized into 20 broader categories (such as weapon and mammal) by McRae et al. (2005). If the agents converged to higher-level semantic meanings for the symbols, we would expect objects belonging to the same category to activate the same symbols, e.g., that when the target images depict bayonets and guns, the sender would use the same symbol to refer to them, whereas cows and guns should not share a symbol.

To quantify this, we form clusters by grouping objects by the symbols that are most often activated when target images contain them. We then assess the quality of the resulting clusters by measuring their purity with respect to the McRae categories. Purity (Zhao & Karypis, 2003) is a standard measure of cluster "quality": the purity of a clustering solution is the proportion of category labels in the clusters that agree with the respective cluster majority category. This number reaches 100% for perfect clustering, and we always compare the observed purity to the score that would be obtained from a random permutation of symbol assignments to objects. Table 1 shows that purity, while far from perfect, is significantly above chance in all cases. We confirm, moreover, that the informed sender produces symbols that are more semantically natural than those of the agnostic one.

Still, surprisingly, purity is significantly above chance even when the agnostic sender is only using two symbols. From our qualitative evaluations, in this case the agents converge to a (noisy) characterization of objects as "living-vs-non-living" which, intriguingly, has been recognized as the most basic one in the human semantic system (Caramazza & Shelton, 1998).
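The purity score itself is straightforward to compute; a toy sketch (with made-up objects and symbols) follows.

```python
from collections import Counter, defaultdict

def purity(object_symbol, object_category):
    """Fraction of objects whose category matches their symbol-cluster's majority category."""
    clusters = defaultdict(list)
    for obj, sym in object_symbol.items():
        clusters[sym].append(object_category[obj])
    agree = sum(Counter(cats).most_common(1)[0][1] for cats in clusters.values())
    return agree / len(object_symbol)

# Toy example: symbol 3 mixes two animals with one vehicle, so purity is 3/4.
sym = {"cat": 3, "dog": 3, "bus": 3, "car": 7}
cat = {"cat": "animal", "dog": "animal", "bus": "vehicle", "car": "vehicle"}
print(purity(sym, cat))  # 0.75
```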
Rather than using hard clusters, we can also ask whether symbol usage reflects the semantics of the visual space. To do so, we construct vector representations for each object (defined by its ImageNet label) by averaging the CNN fc representations of all category images in our data-set (see Section 3 above). Note that the fc layer, being near the top of a deep CNN, is expected to capture high-level visual properties of objects (Zeiler & Fergus, 2014). Moreover, since we average across many specific images, our vectors should capture rather general, high-level properties of objects.

We map these average object vectors to 2 dimensions via t-SNE (Van der Maaten & Hinton, 2008) and color-code them by the majority symbol the sender used for images containing the corresponding object. Figure 3 (left) shows the results for the current experiment: objects that are close in CNN space (thus, presumably, visually similar) are associated with the same symbol (same color). However, there still appears to be quite a bit of variation.

Figure 3: t-SNE plots of object fc vectors color-coded by the majority symbols assigned to them by the informed sender. Object class names shown for a random subset. Left: configuration of the 4th row of Table 1. Right: 2nd row of Table 2.

4.1 OBJECT-LEVEL REFERENCE

We established that our agents can solve the coordination problem, and we have at least tentative evidence that they do so by developing symbol meanings that align with our semantic intuition. We turn now to a simple way to tweak the game setup in order to encourage the agents to further pursue high-level semantics.

id  sender    vis rep  voc size  used symbols  comm success (%)  purity (%)  obs-chance purity (%)
1   informed  fc       100       43            100               45          21
2   informed  fc       10        10            100               37          19
3   agnostic  fc       100       2             92                23          7
4   agnostic  fc       10        3             98                28          12

Table 2: Playing the referential game with image-level targets: test results after 50K training plays. Columns as in Table 1. All purity values significant at p < 0.001.

The strategy is to remove some aspects of "common knowledge" from the game. Common knowledge, in game-theoretic parlance, refers to facts that everyone knows, that everyone knows that everyone knows, and so on (Brandenburger et al., 2014). Coordination can only occur if the basis of the coordination is common knowledge (Rubinstein, 1989); therefore, if we remove some facts from common knowledge, we preclude our agents from coordinating on them. In our case, we want to remove facts pertaining to the details of the input images, thus forcing the agents to coordinate on more abstract properties. We can remove all low-level common knowledge by letting the agents play only using class-level properties of the objects. We achieve this by modifying the game to show the agents different pairs of images while maintaining the ImageNet class of both the target and the distractor (e.g., if the target is dog, the sender is shown a picture of a Chihuahua and the receiver that of a Boston Terrier).

Table 2 reports results for various configurations. We see that the agents are still able to coordinate. Moreover, we observe a small increase in symbol usage purity, as expected, since agents can now only coordinate on general properties of object classes rather than on the specific properties of each image. This effect is clearer in Figure 3 (right), where we repeat the t-SNE-based visualization of the relationship that emerges between visual embeddings and the words used to refer to them in this new experiment.

5 GROUNDING AGENTS' COMMUNICATION IN HUMAN LANGUAGE

The results in Section 4 show communication robustly arising in our game, and that we can change the environment to nudge agents to develop symbol meanings which are more closely related to the visual or class-based semantics of the images. Still, we would like agents to converge on a language fully understandable by humans, as our ultimate goal is to develop conversational machines. To do this, we will need to ground the communication.

Taking inspiration from AlphaGo (Silver et al., 2016), an AI that reached Go master level by combining interactive learning in games of self-play with passive supervised learning from a large set of human games, we combine the usual referential game, in which agents interactively develop their communication protocol, with a supervised image labeling task, where the sender must learn to assign objects their conventional names. This way, the sender will naturally be encouraged to use such names with their conventional meaning to discriminate target images when playing the game, making communication more transparent to humans.

In this experiment, the sender switches, equiprobably, between game playing and a supervised image classification task using ImageNet classes. Note that the supervised objective does not aim at improving agents' coordination performance. Instead, supervision provides them with basic grounding in natural language (in the form of image-label associations), while concurrent interactive game playing should teach them how to effectively use this grounding to communicate.

We use the informed sender, fc image representations, and a vocabulary size of 100. Supervised training is based on 100 labels that are a subset of the object names in our data-set (see Section 3 above). When predicting object names, the sender uses the usual game-embedding layer coupled with a softmax layer of dimensionality 100 corresponding to the object names. Importantly, the game-embedding layers used in object classification and the reference game are shared.
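A sketch of this mixed objective follows: each update flips a fair coin between one referential game episode (updated with Reinforce, as above) and one supervised labeling step (cross-entropy through the shared embedding into a separate label softmax). All shapes, the plain SGD step, and the stub game branch are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, FEAT, LABELS = 50, 4096, 100

W_embed = 0.01 * rng.normal(size=(DIM, FEAT))    # shared game-specific embedding
W_label = 0.01 * rng.normal(size=(LABELS, DIM))  # supervised softmax head over names

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def supervised_step(x, label, lr=0.1):
    """One cross-entropy step; gradients also reach the shared embedding W_embed."""
    global W_embed, W_label
    h = W_embed @ x
    p = softmax(W_label @ h)
    err = p.copy()
    err[label] -= 1.0                 # d(cross-entropy)/d(scores)
    grad_h = W_label.T @ err          # backprop into the shared layer (before update)
    W_label -= lr * np.outer(err, h)
    W_embed -= lr * np.outer(grad_h, x)

def mixed_training_step(game_episode, labeled_example):
    if rng.random() < 0.5:
        game_episode()                # play one game + Reinforce update (sketched earlier)
    else:
        supervised_step(*labeled_example)
```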
Conse-7Published as a conference paper at ICLR 2017dolphinfenceFigure 4: Example pairs from the ReferItGame set, with word produced by sender. Target imagesframed in green.quently, we hope that, when playing, the sender will produce symbols aligned with object namesacquired in the supervised phase.The supervised objective has no negative effect on communication success: the agents are still ableto reach full coordination after 10k training trials (corresponding to 5k trials of reference gameplaying). The sender uses many more symbols after training than in any previous experiment (88)and symbol purity dramatically increases to 70% (the obs-chance purity difference also increases to37%).Even more importantly, many symbols have now become directly interpretable, thanks to their directcorrespondence to labels. Considering the 632 image pairs where the target gold standard labelcorresponds to one of the labels that were used in the supervised phase, in 47% of these cases thesender produced exactly the symbol corresponding to the correct supervised label for the targetimage (chance: 1%).For image pairs where the target image belongs to one of the directly supervised categories, it is notsurprising that the sender adopted the “conventional” supervised label to signal the target . However,a very interesting effect of supervision is that it improves the interpretability of the code even whenagents must communicate about images that do not contain objects in the supervised category set .This emerged in a follow-up experiment in which, during training, the sender was again exposed(with equal probability) to the same supervised classification task as above, but now the agentsplayed the referential game on a different dataset of images derived from ReferItGame (Kazemzadehet al., 2014). In its general format, the ReferItGame contains annotations of bounding boxes in realimages with referring expressions produced by humans when playing the game. For our purposes,we constructed 10k pairs by randomly sampling two bounding boxes, to act as target and distractor.Again, the agents converged to perfect communication after 15k trials, and this time used all 100available symbols in some trial.We then asked whether this language was human-interpretable. For each symbol used by the trainedsender, we randomly extracted 3 image pairs in which the sender picked that symbol and the receiverpointed at the right target (for two symbols, only 2 pairs matched these criteria, leading to a set of 298image pairs). We annotated each pair with the word corresponding to the symbol in the supervisedset. Out of the 298 pairs, only 25 (8%) included one of the 100 words among the correspondingreferring expressions in ReferItGame. So, in the large majority of cases, the sender had been facedwith a pair not (saliently) containing the categories used in the supervised phase of its training, andit had to produce a word that could, at best, only indirectly refer to what is depicted in the targetimage. We then tested whether this code would be understandable by humans. In essence, it is as ifwe replaced the trained agent receiver with a human.We prepared a crowdsourced survey using the CrowdFlower platform. For each pair, human partici-pants were shown the two images and the sender-emitted word (that is, the ImageNet label associatedto the symbol produced by the sender; see examples in Figure 4). The participants were asked topick the picture that they thought was most related to the word. 
We collected 10 ratings for each pair. We found that in 68% of the cases the subjects were able to guess the right image. A logistic regression predicting subject image choice from ground-truth target images, with subjects and words as random effects, confirmed the highly significant correlation between the true and guessed images (z = 16.75, p < 0.0001). Thus, while far from perfect, we find that supervised learning on a separate data set does provide some grounding for communication with humans, which generalizes beyond the conventional word denotations learned in the supervised phase.

Looking at the results qualitatively, we found that sender-subject communication very often succeeded when the sender established a sort of "metonymic" link between the words in its possession and the contents of an image. Figure 4 shows an example where the sender produced dolphin to refer to a picture showing a stretch of sea, and fence for a patch of land. Similar semantic shifts are a core characteristic of natural language (e.g., Pustejovsky, 1995), and thus subjects were, in many cases, able to successfully play the referential game with our sender (10/10 subjects guessed the dolphin target, and 8/10 the fence). This is very encouraging: although the language developed in referential games will initially be very limited, if both agents and humans possess the sort of flexibility displayed in this last experiment, the noisy but shared common ground might suffice to establish basic communication.

6 DISCUSSION

Our results confirmed that fairly simple neural-network agents can learn to coordinate in a referential game in which they need to communicate about a large number of real pictures. They also suggest that the meanings agents come to assign to symbols in this setup capture general conceptual properties of the objects depicted in the image, rather than low-level visual properties. We also showed a path to grounding the communication in natural language by mixing the game with a supervised task.

In future work, encouraged by our preliminary experiments with object naming, we want to study how to ensure that the emergent communication stays close to human natural language. Predictive learning should be retained as an important building block of intelligent agents, focusing on teaching them structural properties of language (e.g., lexical choice, syntax or style). However, it is also important to learn the function-driven facets of language, such as how to hold a conversation, and interactive games are a potentially fruitful method to achieve this goal. | r1jfwJjEx | Interesting idea, but could have been solved using a transfer learning approach | 7: Good paper, accept | Thank you for an interesting read.
Pros
- This paper tackles a very crucial problem of understanding communication between 2 agents. As more and more applications of reinforcement learning are being explored, this approach brings us back to a basic question: is the problem-solving approach of machines similar to that of humans?
- The task is simple enough to make the post-learning analysis intuitive.
- It was interesting to see how informed agents made use of multiple symbols to transmit the message, whereas agnostic agents relied on only 2 symbols.
Cons
- The task effectively boils down to image classification if the 2 images sent are from different categories. The symbols used are effectively the image class, which the second agent learns to assign to either of the images. By all means, this approach boils down to a transfer learning problem, which could probably be trained much faster than a reinforcement learning algorithm. | 3: The reviewer is fairly confident that the evaluation is correct
Hk8N3Sclg | ICLR.cc/2017/conference | 2017 | Multi-Agent Cooperation and the Emergence of (Natural) Language | ["Angeliki Lazaridou", "Alexander Peysakhovich", "Marco Baroni"] | The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the "word meanings" induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents' code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans.
| ["Natural language processing", "Reinforcement Learning", "Games"] | ABSTRACT

The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the "word meanings" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.

1 INTRODUCTION

I tried to break it to him gently [...] the only way to learn an unknown language is to interact with a native speaker [...] asking questions, holding a conversation, that sort of thing [...] If you want to learn the aliens' language, someone [...] will have to talk with an alien. Recordings alone aren't sufficient.
Ted Chiang, Story of Your Life

One of the main aims of AI is to develop agents that can cooperate with others to achieve goals (Wooldridge, 2009). Such coordination requires communication. If the coordination partners are to include humans, the most obvious channel of communication is natural language. Thus, handling natural-language-based communication is a key step toward the development of AI that can thrive in a world populated by other agents.

Given the success of deep learning models in related domains such as image captioning or machine translation (e.g., Sutskever et al., 2014; Xu et al., 2015), it would seem reasonable to cast the problem of training conversational agents as an instance of supervised learning (Vinyals & Le, 2015). However, training on "canned" conversations does not allow learners to experience the interactive aspects of communication. Supervised approaches, which focus on the structure of language, are an excellent way to learn general statistical associations between sequences of symbols. However, they do not capture the functional aspects of communication, i.e., that humans use words to coordinate with others and make things happen (Austin, 1962; Clark, 1996; Wittgenstein, 1953).

This paper introduces the first steps of a research program based on multi-agent coordination communication games. These games place agents in simple environments where they need to develop a language to coordinate and earn payoffs.
Importantly, the agents start as blank slates, but, by playing a game together, they can develop and bootstrap knowledge on top of each other, leading to the emergence of a language.

(Work done while at Facebook AI Research.)

The central problem of our program, then, is the following: How do we design environments that foster the development of a language that is portable to new situations and to new communication partners (in particular humans)?

We start from the most basic challenge of using a language in order to refer to things in the context of a two-agent game. We focus on two questions. First, whether tabula rasa agents succeed in communication. Second, what features of the environment lead to the development of codes resembling human language.

We assess this latter question in two ways. First, we consider whether the agents associate general conceptual properties, such as broad object categories (as opposed to low-level visual properties), to the symbols they learn to use. Second, we examine whether the agents’ “word usage” is partially interpretable by humans in an online experiment.

Other researchers have proposed communication-based environments for the development of coordination-capable AI. Work in multi-agent systems has focused on the design of pre-programmed communication systems to solve specific tasks (e.g., robot soccer, Stone & Veloso 1998). Most related to our work, Sukhbaatar et al. (2016) and Foerster et al. (2016) show that neural networks can evolve communication in the context of games without a pre-coded protocol. We pursue the same question, but further ask how we can change our environment to make the emergent language more interpretable.

Others (e.g., the SHRDLU program of Winograd 1971 or the game in Wang et al. 2016) propose building a communicating AI by putting humans in the loop from the very beginning. This approach has benefits but faces serious scalability issues, as active human intervention is required at each step. An attractive component of our game-based paradigm is that humans may be added as players, but do not need to be there all the time.

A third branch of research focuses on “Wizard-of-Oz” environments, where agents learn to play games by interacting with a complex scripted environment (Mikolov et al., 2015). This approach gives the designer tight control over the learning curriculum, but imposes a heavy engineering burden on developers. We also stress the importance of the environment (game setup), but we focus on simpler environments with multiple agents that force them to get smarter by bootstrapping on top of each other.

We leverage ideas from work in linguistics, cognitive science and game theory on the emergence of language (Wagner et al., 2003; Skyrms, 2010; Crawford & Sobel, 1982; Crawford, 1998). Our game is a variation of Lewis’ signaling game (Lewis, 1969). There is a rich tradition of linguistic and cognitive studies using similar setups (e.g., Briscoe, 2002; Cangelosi & Parisi, 2002; Spike et al., 2016; Steels & Loetzsch, 2012). What distinguishes us from this literature is our aim to, eventually, develop practical AI.
This motivates our focus on more realistic input data (a large collection of noisy natural images) and on trying to align the agents’ language with human intuitions.

Lewis’ classic games have been studied extensively in game theory under the name of “cheap talk”. These games have been used as models to study the evolution of language both theoretically and experimentally (Crawford, 1998; Blume et al., 1998; Crawford & Sobel, 1982). A major question in game theory is whether equilibrium actually occurs in a game, as convergence in learning is not guaranteed (Fudenberg & Peysakhovich, 2014; Roth & Erev, 1995). And, if an equilibrium is reached, which one it will be (since they are typically not unique). This is particularly true for cheap talk games, which exhibit Nash equilibria in which precise language emerges, others where vague language emerges and others where no language emerges at all (Crawford & Sobel, 1982). In addition, because in these games language has no ex-ante meaning and only emerges in the context of the equilibrium, some of the emergent languages may not be very natural. Our results speak to both the convergence question and the question of what features of the game cause the appearance of different types of languages. Thus, our results are also of interest to game theorists.

An evolutionary perspective has recently been advocated as a way to mitigate the data hunger of traditional supervised approaches (Goodfellow et al., 2014; Silver et al., 2016). This research confirms that learning can be bootstrapped from competition between agents. We focus, however, on cooperation between agents as a way to foster learning while reducing the need for annotated data.

2 GENERAL FRAMEWORK

Our general framework includes K players, each parametrized by θk, a collection of tasks/games that the players have to perform, a communication protocol V that enables the players to communicate with each other, and payoffs assigned to the players as a deterministic function of a well-defined goal. In this paper we focus on a particular version of this: referential games. These games are structured as follows (one round of play is also sketched in code below this list).

1. There is a set of images represented by vectors {i1, ..., iN}; two images are drawn at random from this set, call them (iL, iR), and one of them is chosen to be the “target” t ∈ {L, R}.
2. There are two players, a sender and a receiver, each seeing the images; the sender receives input θS(iL, iR, t).
3. There is a vocabulary V of size K and the sender chooses one symbol to send to the receiver; we call this the sender’s policy s(θS(iL, iR, t)) ∈ V.
4. The receiver does not know the target, but sees the sender’s symbol and tries to guess the target image. We call this the receiver’s policy r(iL, iR, s(θS(iL, iR, t))) ∈ {L, R}.
5. If r(iL, iR, s(θS(iL, iR, t))) = t, that is, if the receiver guesses the target, both players receive a payoff of 1 (win); otherwise they receive a payoff of 0 (lose).

Many extensions to the basic referential game explored here are possible. There can be more images, or a more sophisticated communication protocol (e.g., communication of a sequence of symbols or multi-step communication requiring back-and-forth interaction¹), rotation of the sender and receiver roles, having a human occasionally playing one of the roles, etc.
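As a minimal sketch (not from the paper; play_round and the sender/receiver callables are hypothetical stand-ins for trained agents), one round of the referential game can be simulated like this:

import random

def play_round(images, sender, receiver):
    # Steps 1-2: draw two distinct images and pick which one is the target.
    i_left, i_right = random.sample(images, 2)
    target = random.choice([0, 1])  # 0 = left, 1 = right
    # Step 3: the sender sees both images plus the target and emits one symbol.
    symbol = sender(i_left, i_right, target)
    # Step 4: the receiver sees the images (no target) plus the symbol, and guesses.
    guess = receiver(i_left, i_right, symbol)
    # Step 5: both players are paid 1 iff the guess matches the target.
    return 1 if guess == target else 0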
3 EXPERIMENTAL SETUP

Images. We use McRae et al.’s (2005) set of 463 base-level concrete concepts (e.g., cat, apple, car...) spanning 20 general categories (e.g., animal, fruit/vegetable, vehicle...). We randomly sample 100 images of each concept from ImageNet (Deng et al., 2009). To create target/distractor pairs, we randomly sample two concepts, one image for each concept, and whether the first or second image will serve as target. We apply to each image a forward pass through the pre-trained VGG ConvNet (Simonyan & Zisserman, 2014), and represent it with the activations from either the top 1000-D softmax layer (sm) or the second-to-last 4096-D fully connected layer (fc).

Agent Players. Both sender and receiver are simple feed-forward networks. For the sender, we experiment with the two architectures depicted in Figure 1. Both sender architectures take as input the target (marked with a green square in Figure 1) and distractor representations, always in this order, so that they are implicitly informed of which image is the target (the receiver, instead, sees the two images in random order).

The agnostic sender is a generic neural network that maps the original image vectors onto a “game-specific” embedding space (in the sense that the embedding is learned while playing the game) followed by a sigmoid nonlinearity. Fully-connected weights are applied to the embedding concatenation to produce scores over vocabulary symbols.

The informed sender also first embeds the images into a “game-specific” space. It then applies 1-D convolutions (“filters”) on the image embeddings by treating them as different channels. The informed sender uses convolutions with kernel size 2x1 applied dimension-by-dimension to the two image embeddings (in Figure 1, there are 4 such filters). This is followed by the sigmoid nonlinearity. The resulting feature maps are combined through another filter (kernel size fx1, where f is the number of filters on the image embeddings), to produce scores for the vocabulary symbols. Intuitively, the informed sender has an inductive bias towards combining the two images dimension-by-dimension whereas the agnostic sender does not (though we note the agnostic architecture nests the informed one).

¹ For example, Jorge et al. (2016) explore agents playing a “Guess Who” game to learn about the emergence of question-asking and answering in language.

[Figure 1: Architectures of agent players.]

For both senders, motivated by the discrete nature of language, we enforce a strong communication bottleneck that discretizes the communication protocol. Activations on the top (vocabulary) layer are converted to a Gibbs distribution (with temperature parameter τ), and then a single symbol s is sampled from the resulting probability distribution.

The receiver takes as input the target and distractor image vectors in random order, as well as the symbol produced by the sender (as a one-hot vector over the vocabulary). It embeds the images and the symbol into its own “game-specific” space. It then computes dot products between the symbol and image embeddings. Ideally, dot similarity should be higher for the image that is better denoted by the symbol. The two dot products are converted to a Gibbs distribution (with temperature τ) and the receiver “points” to an image by sampling from the resulting distribution.
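For concreteness, here is a small sketch (not from the paper) of the tempered-softmax (“Gibbs”) sampling both agents rely on; dividing scores by the temperature is our assumption, since the paper does not spell out the convention:

import numpy as np

def gibbs_sample(scores, temperature=10.0):
    # Tempered softmax over raw scores, then sample one index.
    logits = np.asarray(scores, dtype=np.float64) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(np.random.choice(len(probs), p=probs))

# Sender side:   symbol = gibbs_sample(vocab_scores)        (scores over V symbols)
# Receiver side: choice = gibbs_sample([dot_L, dot_R])      (0 = left, 1 = right)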
General Training Details. We set the following hyperparameters without tuning: embedding dimensionality: 50, number of filters applied to embeddings by informed sender: 20, temperature of Gibbs distributions: 10. We explore two vocabulary sizes: 10 and 100 symbols.

The sender and receiver parameters θ = ⟨θR, θS⟩ are learned while playing the game. No weights are shared and the only supervision used is communication success, i.e., whether the receiver pointed at the right referent.

This setup is naturally modeled with Reinforcement Learning (Sutton & Barto, 1998). As outlined in Section 2, the sender follows policy s(θS(iL, iR, t)) ∈ V and the receiver policy r(iL, iR, s(θS(iL, iR, t))) ∈ {L, R}. The loss function that the two agents must minimize is −E[R], where R is the reward function returning 1 iff r(iL, iR, s(θS(iL, iR, t))) = t. Parameters are updated through the Reinforce rule (Williams, 1992). We apply mini-batch updates, with a batch size of 32, for a total of 50k iterations (games). At test time, we compile a set of 10k games using the same method as for the training games.
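A toy illustration (not from the paper) of a single Reinforce step for a categorical softmax policy; the learning rate is made up, and the paper's actual update details (batching, any baseline) are not reproduced here:

import numpy as np

def reinforce_step(theta, action, reward, lr=0.01):
    # theta: unnormalized scores of a categorical policy (e.g., over symbols).
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0  # gradient of log softmax at the sampled action
    # Ascend reward * grad log pi; with 0/1 rewards, failed games contribute nothing.
    return theta + lr * reward * grad_log_pi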
We now turn to our main questions. The first is whether the agents can learn to successfully coordinate in a reasonable amount of time. The second is whether the agents’ language can be thought of as “natural language”, i.e., symbols are assigned to meanings that make intuitive sense in terms of our conceptualization of the world.

4 LEARNING TO COMMUNICATE

Our first question is whether agents converge to successful communication at all. We see that they do: agents almost perfectly coordinate in the 1k rounds following the 10k training games for every architecture and parameter choice (Table 1).

We see, though, some differences between different sender architectures. Figure 2 (left) shows performance on a sample of the test set as a function of the first 5,000 rounds of training. The agents converge to coordination quite fast, but the informed sender reaches higher levels more quickly than the agnostic one.

[Figure 2: Left: Communication success as a function of training iterations; informed senders converge faster than agnostic ones. Right: Spectrum of an example symbol usage matrix: the first few dimensions capture only partial variance, suggesting that the usage of more symbols by the informed sender is not just due to synonymy.]

Table 1: Playing the referential game: test results after 50K training games. The used symbols column reports the number of distinct vocabulary symbols that were produced at least once in the test phase. See text for explanation of comm success and purity. All purity values are highly significant (p < 0.001) compared to simulated chance symbol assignment when matching observed symbol usage. The obs-chance purity column reports the difference between observed and expected purity under chance.

id | sender   | vis rep | voc size | used symbols | comm success (%) | purity (%) | obs-chance purity (%)
1  | informed | sm      | 100      | 58           | 100              | 46         | 27
2  | informed | fc      | 100      | 38           | 100              | 41         | 23
3  | informed | sm      | 10       | 10           | 100              | 35         | 18
4  | informed | fc      | 10       | 10           | 100              | 32         | 17
5  | agnostic | sm      | 100      | 2            | 99               | 21         | 15
6  | agnostic | fc      | 10       | 2            | 99               | 21         | 15
7  | agnostic | sm      | 10       | 2            | 99               | 20         | 15
8  | agnostic | fc      | 100      | 2            | 99               | 19         | 15

The informed sender makes use of more symbols from the available vocabulary, while the agnostic sender constantly uses a compact 2-symbol vocabulary. This suggests that the informed sender is using more varied and word-like symbols (recall that the images depict 463 distinct objects, so we would expect a natural-language-endowed sender to use a wider array of symbols to discriminate among them). However, it could also be the case that the informed sender vocabulary simply contains higher redundancy/synonymy. To check this, we construct a (sampled) matrix where rows are game image pairs, columns are symbols, and entries represent how often that symbol is used for that pair. We then decompose the matrix through SVD. If the sender is indeed just using a strategy with few effective symbols but high synonymy, then we should expect a 1- or 2-dimensional decomposition. Figure 2 (right) plots the normalized spectrum of this matrix. While there is some redundancy in the matrix (thus potentially implying there is synonymy in the usage), the language still requires multiple dimensions to summarize (cross-validated SVD suggests 50 dimensions).
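To make the synonymy check concrete, a minimal sketch (not from the paper) of the spectrum computation on a symbol-usage count matrix:

import numpy as np

def normalized_spectrum(usage_counts):
    # usage_counts: (n_image_pairs, n_symbols) matrix of how often each
    # symbol was produced for each pair. A spectrum dominated by 1-2
    # singular values would indicate few effective symbols plus synonymy.
    s = np.linalg.svd(np.asarray(usage_counts, dtype=float), compute_uv=False)
    return s / s.sum()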
We now turn to investigating the semantic properties of the emergent communication protocol. Recall that the vocabulary that agents use is arbitrary and has no initial meaning. One way to understand its emerging semantics is by looking at the relationship between symbols and the sets of images they refer to.

[Figure 3: t-SNE plots of object fc vectors color-coded by majority symbols assigned to them by the informed sender. Object class names shown for a random subset. Left: configuration of 4th row of Table 1. Right: 2nd row of Table 2.]

The objects in our images were categorized into 20 broader categories (such as weapon and mammal) by McRae et al. (2005). If the agents converged to higher-level semantic meanings for the symbols, we would expect that objects belonging to the same category would activate the same symbols, e.g., that, say, when the target images depict bayonets and guns, the sender would use the same symbol to refer to them, whereas cows and guns should not share a symbol.

To quantify this, we form clusters by grouping objects by the symbols that are most often activated when target images contain them. We then assess the quality of the resulting clusters by measuring their purity with respect to the McRae categories. Purity (Zhao & Karypis, 2003) is a standard measure of cluster “quality”. The purity of a clustering solution is the proportion of category labels in the clusters that agree with the respective cluster majority category. This number reaches 100% for perfect clustering, and we always compare the observed purity to the score that would be obtained from a random permutation of symbol assignments to objects (a sketch of the computation is given at the end of this section). Table 1 shows that purity, while far from perfect, is significantly above chance in all cases. We confirm moreover that the informed sender is producing symbols that are more semantically natural than those of the agnostic one.

Still, surprisingly, purity is significantly above chance even when the latter is only using two symbols. From our qualitative evaluations, in this case the agents converge to a (noisy) characterization of objects as “living-vs-non-living” which, intriguingly, has been recognized as the most basic one in the human semantic system (Caramazza & Shelton, 1998).

Rather than using hard clusters, we can also ask whether symbol usage reflects the semantics of the visual space. To do so we construct vector representations for each object (defined by its ImageNet label) by averaging the CNN fc representations of all category images in our data-set (see Section 3 above). Note that the fc layer, being near the top of a deep CNN, is expected to capture high-level visual properties of objects (Zeiler & Fergus, 2014). Moreover, since we average across many specific images, our vectors should capture rather general, high-level properties of objects.

We map these average object vectors to 2 dimensions via t-SNE mapping (Van der Maaten & Hinton, 2008) and we color-code them by the majority symbol the sender used for images containing the corresponding object. Figure 3 (left) shows the results for the current experiment. We see that objects that are close in CNN space (thus, presumably, visually similar) are associated to the same symbol (same color). However, there still appears to be quite a bit of variation.
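A minimal purity computation under the stated definition (not from the paper; variable names are ours):

from collections import Counter

def purity(cluster_ids, categories):
    # cluster_ids[i]: majority symbol for object i; categories[i]: its McRae category.
    groups = {}
    for c, g in zip(cluster_ids, categories):
        groups.setdefault(c, []).append(g)
    # Count objects whose category matches their cluster's majority category.
    agree = sum(Counter(members).most_common(1)[0][1] for members in groups.values())
    return agree / len(cluster_ids)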
4.1 OBJECT-LEVEL REFERENCE

We established that our agents can solve the coordination problem, and we have at least tentative evidence that they do so by developing symbol meanings that align with our semantic intuition. We turn now to a simple way to tweak the game setup in order to encourage the agents to further pursue high-level semantics.

The strategy is to remove some aspects of “common knowledge” from the game. Common knowledge, in game-theoretic parlance, refers to facts that everyone knows, that everyone knows that everyone knows, and so on (Brandenburger et al., 2014). Coordination can only occur if the basis of the coordination is common knowledge (Rubinstein, 1989); therefore, if we remove some facts from common knowledge, we will preclude our agents from coordinating on them. In our case, we want to remove facts pertaining to the details of the input images, thus forcing the agents to coordinate on more abstract properties. We can remove all low-level common knowledge by letting the agents play only using class-level properties of the objects. We achieve this by modifying the game to show the agents different pairs of images while maintaining the ImageNet class of both the target and distractor (e.g., if the target is dog, the sender is shown a picture of a Chihuahua and the receiver that of a Boston Terrier).

Table 2: Playing the referential game with image-level targets: test results after 50K training plays. Columns as in Table 1. All purity values significant at p < 0.001.

id | sender   | vis rep | voc size | used symbols | comm success (%) | purity (%) | obs-chance purity (%)
1  | informed | fc      | 100      | 43           | 100              | 45         | 21
2  | informed | fc      | 10       | 10           | 100              | 37         | 19
3  | agnostic | fc      | 100      | 2            | 92               | 23         | 7
4  | agnostic | fc      | 10       | 3            | 98               | 28         | 12

Table 2 reports results for various configurations. We see that the agents are still able to coordinate. Moreover, we observe a small increase in symbol usage purity, as expected since agents can now only coordinate on general properties of object classes, rather than on the specific properties of each image. This effect is clearer in Figure 3 (right), where we repeat the t-SNE based visualization of the relationship that emerges between visual embeddings and the words used to refer to them in this new experiment.

5 GROUNDING AGENTS’ COMMUNICATION IN HUMAN LANGUAGE

The results in Section 4 show communication robustly arising in our game, and that we can change the environment to nudge agents to develop symbol meanings which are more closely related to the visual or class-based semantics of the images. Still, we would like agents to converge on a language fully understandable by humans, as our ultimate goal is to develop conversational machines. To do this, we will need to ground the communication.

Taking inspiration from AlphaGo (Silver et al., 2016), an AI that reached the Go master level by combining interactive learning in games of self-play with passive supervised learning from a large set of human games, we combine the usual referential game, in which agents interactively develop their communication protocol, with a supervised image labeling task, where the sender must learn to assign objects their conventional names. This way, the sender will naturally be encouraged to use such names with their conventional meaning to discriminate target images when playing the game, making communication more transparent to humans.

In this experiment, the sender switches, equiprobably, between game playing and a supervised image classification task using ImageNet classes. Note that the supervised objective does not aim at improving agents’ coordination performance. Instead, supervision provides them with basic grounding in natural language (in the form of image-label associations), while concurrent interactive game playing should teach them how to effectively use this grounding to communicate.

We use the informed sender, fc image representations and a vocabulary size of 100. Supervised training is based on 100 labels that are a subset of the object names in our data-set (see Section 3 above). When predicting object names, the sender uses the usual game-embedding layer coupled with a softmax layer of dimensionality 100 corresponding to the object names. Importantly, the game-embedding layers used in object classification and the reference game are shared. Consequently, we hope that, when playing, the sender will produce symbols aligned with object names acquired in the supervised phase.
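A sketch (not from the paper) of the equiprobable task switching described above; game_loss and classification_loss are hypothetical stand-ins for the Reinforce game objective and the cross-entropy labeling objective:

import random

def sender_training_step(sender, game_batch, labeled_batch):
    # Flip a fair coin between interactive play and supervised labeling;
    # the shared game-embedding layer receives gradients from both tasks.
    if random.random() < 0.5:
        return sender.game_loss(game_batch)            # referential game (Reinforce)
    return sender.classification_loss(labeled_batch)   # ImageNet-label cross-entropy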
[Figure 4: Example pairs from the ReferItGame set, with word produced by sender (e.g., “dolphin”, “fence”). Target images framed in green.]

The supervised objective has no negative effect on communication success: the agents are still able to reach full coordination after 10k training trials (corresponding to 5k trials of reference game playing). The sender uses many more symbols after training than in any previous experiment (88) and symbol purity dramatically increases to 70% (the obs-chance purity difference also increases to 37%).

Even more importantly, many symbols have now become directly interpretable, thanks to their direct correspondence to labels. Considering the 632 image pairs where the target gold standard label corresponds to one of the labels that were used in the supervised phase, in 47% of these cases the sender produced exactly the symbol corresponding to the correct supervised label for the target image (chance: 1%).

For image pairs where the target image belongs to one of the directly supervised categories, it is not surprising that the sender adopted the “conventional” supervised label to signal the target. However, a very interesting effect of supervision is that it improves the interpretability of the code even when agents must communicate about images that do not contain objects in the supervised category set. This emerged in a follow-up experiment in which, during training, the sender was again exposed (with equal probability) to the same supervised classification task as above, but now the agents played the referential game on a different dataset of images derived from ReferItGame (Kazemzadeh et al., 2014). In its general format, the ReferItGame contains annotations of bounding boxes in real images with referring expressions produced by humans when playing the game. For our purposes, we constructed 10k pairs by randomly sampling two bounding boxes, to act as target and distractor.

Again, the agents converged to perfect communication after 15k trials, and this time used all 100 available symbols in some trial.

We then asked whether this language was human-interpretable. For each symbol used by the trained sender, we randomly extracted 3 image pairs in which the sender picked that symbol and the receiver pointed at the right target (for two symbols, only 2 pairs matched these criteria, leading to a set of 298 image pairs). We annotated each pair with the word corresponding to the symbol in the supervised set. Out of the 298 pairs, only 25 (8%) included one of the 100 words among the corresponding referring expressions in ReferItGame. So, in the large majority of cases, the sender had been faced with a pair not (saliently) containing the categories used in the supervised phase of its training, and it had to produce a word that could, at best, only indirectly refer to what is depicted in the target image. We then tested whether this code would be understandable by humans. In essence, it is as if we replaced the trained agent receiver with a human.

We prepared a crowdsourced survey using the CrowdFlower platform. For each pair, human participants were shown the two images and the sender-emitted word (that is, the ImageNet label associated to the symbol produced by the sender; see examples in Figure 4). The participants were asked to pick the picture that they thought was most related to the word. We collected 10 ratings for each pair.
We found that in 68% of the cases the subjects were able to guess the right image. A logistic regression predicting subject image choice from ground-truth target images, with subjects and words as random effects, confirmed the highly significant correlation between the true and guessed images (z = 16.75, p < 0.0001). Thus, while far from perfect, we find that supervised learning on a separate data set does provide some grounding for communication with humans that generalizes beyond the conventional word denotations learned in the supervised phase.

Looking at the results qualitatively, we found that very often sender-subject communication succeeded when the sender established a sort of “metonymic” link between the words in its possession and the contents of an image. Figure 4 shows an example where the sender produced dolphin to refer to a picture showing a stretch of sea, and fence for a patch of land. Similar semantic shifts are a core characteristic of natural language (e.g., Pustejovsky, 1995), and thus subjects were, in many cases, able to successfully play the referential game with our sender (10/10 subjects guessed the dolphin target, and 8/10 the fence). This is very encouraging. Although the language developed in referential games will be initially very limited, if both agents and humans possess the sort of flexibility displayed in this last experiment, the noisy but shared common ground might suffice to establish basic communication.

6 DISCUSSION

Our results confirmed that fairly simple neural-network agents can learn to coordinate in a referential game in which they need to communicate about a large number of real pictures. They also suggest that the meanings agents come to assign to symbols in this setup capture general conceptual properties of the objects depicted in the image, rather than low-level visual properties. We also showed a path to grounding the communication in natural language by mixing the game with a supervised task.

In future work, encouraged by our preliminary experiments with object naming, we want to study how to ensure that the emergent communication stays close to human natural language. Predictive learning should be retained as an important building block of intelligent agents, focusing on teaching them structural properties of language (e.g., lexical choice, syntax or style). However, it is also important to learn the function-driven facets of language, such as how to hold a conversation, and interactive games are a potentially fruitful method to achieve this goal. | rJATjv_4e | Final Review | 7: Good paper, accept | Training natural language systems by putting multiple agents within an interactive referential communication game is very nice. As the authors mention, there has been some (although seemingly not much) previous work on using multi-agent games to teach communication, and it certainly seems like a direction worth pursuing. Moreover, the approach of switching between these games and some supervised learning, as in the experiment described in Section 5 and suggested in Section 6, seems particularly fruitful.
Note: For “clarity”, I believe some of the network connections in Fig 1 have been omitted. However, given the rather highly-customized architecture and the slightly hard-to-follow description in Section 3, the shorthand diagram only adds to the confusion. The diagram probably needs to be fine-tuned, but at the very least (especially if I am misunderstanding it!), a caption must [still] be added to help the reader interpret the figure.
Overall, the framework (Section 2) is great and seems quite effective/useful in various ways, the results are reasonable, and I expect there will be some interesting future variations on this work as well.
Caveat: While I am quite confident I understood the paper (as per confidence score below), I do not feel I am sufficiently familiar with the most closely related literature to accurately assess the place of this work within that context. | 3: The reviewer is fairly confident that the evaluation is correct |
Hk8N3Sclg | ICLR.cc/2017/conference | 2017 | Multi-Agent Cooperation and the Emergence of (Natural) Language | ["Angeliki Lazaridou", "Alexander Peysakhovich", "Marco Baroni"] | The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message to the receiver, while the receiver must rely on it to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore whether the “word meanings” induced in the game reflect intuitive semantic properties of the objects depicted in the image, and we present a simple strategy for grounding the agents’ code into natural language, a necessary step in developing machines that should eventually be able to communicate with humans.
| ["Natural language processing", "Reinforcement Learning", "Games"] | ABSTRACTThe current mainstream approach to train natural language systems is to exposethem to large amounts of text. This passive learning is problematic if we are in-terested in developing interactive machines, such as conversational agents. Wepropose a framework for language learning that relies on multi-agent communi-cation. We study this learning in the context of referential games. In these games,a sender and a receiver see a pair of images. The sender is told one of them isthe target and is allowed to send a message from a fixed, arbitary vocabulary tothe receiver. The receiver must rely on this message to identify the target. Thus,the agents develop their own language interactively out of the need to communi-cate. We show that two networks with simple configurations are able to learn tocoordinate in the referential game. We further explore how to make changes to thegame environment to cause the “word meanings” induced in the game to better re-flect intuitive semantic properties of the images. In addition, we present a simplestrategy for grounding the agents’ code into natural language. Both of these arenecessary steps towards developing machines that are able to communicate withhumans productively.1 I NTRODUCTIONI tried to break it to him gently [...] the only way to learn an unknown languageis to interact with a native speaker [...] asking questions, holding a conversation,that sort of thing [...] If you want to learn the aliens’ language, someone [...] willhave to talk with an alien. Recordings alone aren’t sufficient.Ted Chiang, Story of Your LifeOne of the main aims of AI is to develop agents that can cooperate with others to achieve goals(Wooldridge, 2009). Such coordination requires communication. If the coordination partners are toinclude humans, the most obvious channel of communication is natural language. Thus, handlingnatural-language-based communication is a key step toward the development of AI that can thrivein a world populated by other agents.Given the success of deep learning models in related domains such as image captioning or machinetranslation (e.g., Sutskever et al., 2014; Xu et al., 2015), it would seem reasonable to cast the prob-lem of training conversational agents as an instance of supervised learning (Vinyals & Le, 2015).However, training on “canned” conversations does not allow learners to experience the interactiveaspects of communication. Supervised approaches, which focus on the structure of language, are anexcellent way to learn general statistical associations between sequences of symbols. However, theydo not capture the functional aspects of communication, i.e., that humans use words to coordinatewith others and make things happen (Austin, 1962; Clark, 1996; Wittgenstein, 1953).This paper introduces the first steps of a research program based on multi-agent coordination com-munication games . These games place agents in simple environments where they need to develop alanguage to coordinate and earn payoffs. 
Importantly, the agents start as blank slates, but, by play-ing a game together, they can develop and bootstrap knowledge on top of each others, leading to theemergence of a language.Work done while at Facebook AI Research.1Published as a conference paper at ICLR 2017The central problem of our program, then, is the following: How do we design environments thatfoster the development of a language that is portable to new situations and to new communicationpartners (in particular humans)?We start from the most basic challenge of using a language in order to refer to things in the contextof a two-agent game. We focus on two questions. First, whether tabula rasa agents succeed in com-munication. Second, what features of the environment lead to the development of codes resemblinghuman language.We assess this latter question in two ways. First, we consider whether the agents associate generalconceptual properties, such as broad object categories (as opposed to low-level visual properties),to the symbols they learn to use. Second, we examine whether the agents’ “word usage” is partiallyinterpretable by humans in an online experiment.Other researchers have proposed communication-based environments for the development ofcoordination-capable AI. Work in multi-agent systems has focused on the design of pre-programmedcommunication systems to solve specific tasks (e.g., robot soccer, Stone & Veloso 1998). Most re-lated to our work, Sukhbaatar et al. (2016) and Foerster et al. (2016) show that neural networks canevolve communication in the context of games without a pre-coded protocol. We pursue the samequestion, but further ask how we can change our environment to make the emergent language moreinterpretable.Others (e.g., the SHRLDU program of Winograd 1971 or the game in Wang et al. 2016) proposebuilding a communicating AI by putting humans in the loop from the very beginning. This approachhas benefits but faces serious scalability issues, as active human intervention is required at each step.An attractive component of our game-based paradigm is that humans may be added as players, butdo not need to be there all the time.A third branch of research focuses on “Wizard-of-Oz” environments, where agents learn to playgames by interacting with a complex scripted environment (Mikolov et al., 2015). This approachgives the designer tight control over the learning curriculum, but imposes a heavy engineering burdenon developers. We also stress the importance of the environment (game setup), but we focus onsimpler environments with multiple agents that force them to get smarter by bootstrapping on top ofeach other.We leverage ideas from work in linguistics, cognitive science and game theory on the emergence oflanguage (Wagner et al., 2003; Skyrms, 2010; Crawford & Sobel, 1982; Crawford, 1998). Our gameis a variation of Lewis’ signaling game (Lewis, 1969). There is a rich tradition of linguistic andcognitive studies using similar setups (e.g., Briscoe, 2002; Cangelosi & Parisi, 2002; Spike et al.,2016; Steels & Loetzsch, 2012). What distinguishes us from this literature is our aim to, eventually,develop practical AI. 
This motivates our focus on more realistic input data (a large collection ofnoisy natural images) and on trying to align the agents’ language with human intuitions.Lewis’ classic games have been studied extensively in game theory under the name of “cheap talk”.These games have been used as models to study the evolution of language both theoretically andexperimentally (Crawford, 1998; Blume et al., 1998; Crawford & Sobel, 1982). A major questionin game theory is whether equilibrium actually occurs in a game as convergence in learning isnot guaranteed (Fudenberg & Peysakhovich, 2014; Roth & Erev, 1995). And, if an equilibriumis reached, which one it will be (since they are typically not unique). This is particularly true forcheap talk games, which exhibit Nash equilibria in which precise language emerges, others wherevague language emerges and others where no language emerges at all (Crawford & Sobel, 1982). Inaddition, because in these games language has no ex-ante meaning and only emerges in the contextof the equilibrium, some of the emergent languages may not be very natural. Our results speak toboth the convergence question and the question of what features of the game cause the appearanceof different types of languages. Thus, our results are also of interest to game theorists.An evolutionary perspective has recently been advocated as a way to mitigate the data hunger oftraditional supervised approaches (Goodfellow et al., 2014; Silver et al., 2016). This research con-firms that learning can be bootstrapped from competition between agents. We focus, however, oncooperation between agents as a way to foster learning while reducing the need for annotated data.2Published as a conference paper at ICLR 20172 G ENERAL FRAMEWORKOur general framework includes K players, each parametrized by k, a collection of tasks/games thatthe players have to perform, a communication protocol Vthat enables the players to communicatewith each other, and payoffs assigned to the players as a deterministic function of a well-definedgoal. In this paper we focus on a particular version of this: referential games . These games arestructured as follows.1. There is a set of images represented by vectors fi1;:::;i Ng, two images are drawn atrandom from this set, call them (iL;iR), one of them is chosen to be the “target” t2fL;Rg2. There are two players, a sender and a receiver, each seeing the images - the sender receivesinputS(iL;iR;t)3. There is a vocabularyVof sizeKand the sender chooses one symbol to send to thereceiver, we call this the sender’s policy s(S(iL;iR;t))2V4. The receiver does not know the target, but sees the sender’s symbol and tries to guess thetarget image. We call this the receiver’s policy r(iL;iR;s(S(iL;iR;t)))2fL;Rg5. Ifr(iL;iR;s(S(iL;iR;t)) =t, that is, if the receiver guesses the target, both playersreceive a payoff of 1 (win), otherwise they receive a payoff of 0 (lose).Many extensions to the basic referential game explored here are possible. There can be more images,or a more sophisticated communication protocol (e.g., communication of a sequence of symbols ormulti-step communication requiring back-and-forth interaction1), rotation of the sender and receiverroles, having a human occasionally playing one of the roles, etc.3 E XPERIMENTAL SETUPImages We use the McRae et al.’s (2005) set of 463 base-level concrete concepts (e.g., cat, ap-ple, car . . . ) spanning across 20 general categories (e.g., animal ,fruit/vegetable ,vehicle . . . ). 
Werandomly sample 100 images of each concept from ImageNet (Deng et al., 2009). To create tar-get/distractor pairs, we randomly sample two concepts, one image for each concept and whether thefirst or second image will serve as target. We apply to each image a forward-pass through the pre-trained VGG ConvNet (Simonyan & Zisserman, 2014), and represent it with the activations fromeither the top 1000-D softmax layer ( sm) or the second-to-last 4096-D fully connected layer ( fc).Agent Players Both sender and receiver are simple feed-forward networks. For the sender, weexperiment with the two architectures depicted in Figure 1. Both sender architectures take as inputthe target (marked with a green square in Figure 1) and distractor representations, always in thisorder, so that they are implicitly informed of which image is the target (the receiver, instead, seesthe two images in random order).Theagnostic sender is a generic neural network that maps the original image vectors onto a “game-specific” embedding space (in the sense that the embedding is learned while playing the game)followed by a sigmoid nonlinearity. Fully-connected weights are applied to the embedding concate-nation to produce scores over vocabulary symbols.The informed sender also first embeds the images into a “game-specific” space. It then applies1-D convolutions (“filters”) on the image embeddings by treating them as different channels. Theinformed sender uses convolutions with kernel size 2x1 applied dimension-by-dimension to thetwo image embeddings (in Figure 1, there are 4 such filters). This is followed by the sigmoidnonlinearity. The resulting feature maps are combined through another filter (kernel size fx1, wherefis the number of filters on the image embeddings), to produce scores for the vocabulary symbols.Intuitively, the informed sender has an inductive bias towards combining the two images dimension-by-dimension whereas the agnostic sender does not (though we note the agnostic architecture neststhe informed one).1For example, Jorge et al. (2016) explore agents playing a “Guess Who” game to learn about the emergenceof question-asking and answering in language.3Published as a conference paper at ICLR 2017informed sender agnostic sender receiversymbol 1symbol 2symbol 3symbol 1symbol 2symbol 3left imageright imageinformed sender agnostic sender receiversymbol 1symbol 2symbol 3symsymsymleft imageright imageagnostic sender informed sender receiversymbol 1symbol 2symbol 3symbol 1symbol 2symbol 3left imageright imageFigure 1: Architectures of agent players.For both senders, motivated by the discrete nature of language, we enforce a strong communicationbottleneck that discretizes the communication protocol. Activations on the top (vocabulary) layerare converted to a Gibbs distribution (with temperature parameter ), and then a single symbol sissampled from the resulting probability distribution.The receiver takes as input the target and distractor image vectors in random order, as well as thesymbol produced by the sender (as a one-hot vector over the vocabulary). It embeds the images andthe symbol into its own “game-specific” space. It then computes dot products between the symboland image embeddings. Ideally, dot similarity should be higher for the image that is better denotedby the symbol. 
The two dot products are converted to a Gibbs distribution (with temperature ) andthe receiver “points” to an image by sampling from the resulting distribution.General Training Details We set the following hyperparameters without tuning: embedding di-mensionality: 50, number of filters applied to embeddings by informed sender: 20, temperature ofGibbs distributions: 10. We explore two vocabulary sizes: 10 and 100 symbols.The sender and receiver parameters =hR;Siare learned while playing the game. No weightsare shared and the only supervision used is communication success, i.e., whether the receiver pointedat the right referent.This setup is naturally modeled with Reinforcement Learning (Sutton & Barto, 1998). As out-lined in Section 2, the sender follows policy s(S(iL;iR;t))2Vand the receiver policyr(iL;iR;s(S(iL;iR;t)))2 fL;Rg. The loss function that the two agents must minimize isI E~r[R(~r)]whereRis the reward function returning 1 iff r(iL;iR;s(S(iL;iR;t)) =t. Param-eters are updated through the Reinforce rule (Williams, 1992). We apply mini-batch updates, witha batch size of 32 and for a total of 50k iterations (games). At test time, we compile a set of 10kgames using the same method as for the training games.We now turn to our main questions. The first is whether the agents can learn to successfully coordi-nate in a reasonable amount of time. The second is whether the agents’ language can be thought ofas “natural language”, i.e., symbols are assigned to meanings that make intuitive sense in terms ofour conceptualization of the world.4 L EARNING TO COMMUNICATEOur first question is whether agents converge to successful communication at all. We see that theydo: agents almost perfectly coordinate in the 1k rounds following the 10k training games for everyarchitecture and parameter choice (Table 1).We see, though, some differences between different sender architectures. Figure 2 (left) showsperformance on a sample of the test set as a function of the first 5,000 rounds of training. The agents4Published as a conference paper at ICLR 20170 1k 2k 3k 4k 5k#Games0.40.50.60.70.80.91.0 Communication successagnostic-sender (100 symbols)agnostic-sender (10 symbols)informed-sender (100 symbols)informed-sender (10 symbols)0.000.030.060.0921015202538 100Singular Value PositionNormalized SpectrumFigure 2: Left: Communication success as a function of training iterations, we see that informedsenders converge faster than agnostic ones. Right: Spectrum of an example symbol usage matrix:the first few dimensions do capture only partial variance, suggesting that the usage of more symbolsby the informed sender is not just due to synonymy.id sender vis voc used comm purity (%)obs-chancerep size symbols success ( %) purity (%)1 informed sm 100 58 100 46 272 informed fc 100 38 100 41 233 informed sm 10 10 100 35 184 informed fc 10 10 100 32 175 agnostic sm 100 2 99 21 156 agnostic fc 10 2 99 21 157 agnostic sm 10 2 99 20 158 agnostic fc 100 2 99 19 15Table 1: Playing the referential game: test results after 50K training games. Used symbols columnreports number of distinct vocabulary symbols that were produced at least once in the test phase. Seetext for explanation of comm success andpurity . All purity values are highly significant ( p<0:001)compared to simulated chance symbol assignment when matching observed symbol usage. 
The obs-chance purity column reports the difference between observed and expected purity under chance.converge to coordination quite fast, but the informed sender reaches higher levels more quickly thanthe agnostic one.The informed sender makes use of more symbols from the available vocabulary, while the agnosticsender constantly uses a compact 2-symbol vocabulary. This suggests that the informed sender isusing more varied and word-like symbols (recall that the images depict 463 distinct objects, so wewould expect a natural-language-endowed sender to use a wider array of symbols to discriminateamong them). However, it could also be the case that the informed sender vocabulary simply con-tains higher redundancy/synonymy. To check this, we construct a (sampled) matrix where rows aregame image pairs, columns are symbols, and entries represent how often that symbol is used for thatpair. We then decompose the matrix through SVD. If the sender is indeed just using a strategy withfew effective symbols but high synonymy, then we should expect a 1- or2-dimensional decomposi-tion. Figure 2 (right) plots the normalized spectrum of this matrix. While there is some redundancyin the matrix (thus potentially implying there is synonymy in the usage), the language still requiresmultiple dimensions to summarize (cross-validated SVD suggests 50 dimensions).We now turn to investigating the semantic properties of the emergent communication protocol. Re-call that the vocabulary that agents use is arbitrary and has no initial meaning. One way to understandits emerging semantics is by looking at the relationship between symbols and the sets of images theyrefer to.5Published as a conference paper at ICLR 2017accordionairplanealligatorambulanceanchorapartmentappleapronarmourashtrayasparagusavocadoaxebagbagpipeballballoonbanana banjobannerbarnbarrelbasementbasketbathtubbatonbayonetbazookabeanbearbeaverbedbedroombeehivebeetbeetlebeltbenchbikebirchbisonblackbirdblenderblouseblueberryboatboltbombbookbookcase●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●accordionairplanealligatorambulanceanchorapartmentappleapronarmourashtrayasparagusavocadoaxebagbagpipeballballoonbananabanjobannerbarnbarrelbasementbasketbathtubbatonbayonetbazookabeanbearbeaverbedbedroombeehivebeetbeetlebeltbenchbikebirchbisonblackbirdblenderblouseblueberryboatboltbombbookbookcase●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Figure 3: t-SNE plots of object fc vectors color-coded by majority symbols assigned to them byinformed sender. Object class names shown for a random subset. Left: configuration of 4th row ofTable 1. Right: 2nd row of Table 2.The objects in our images were categorized into 20 broader categories (such as weapon andmammal )by McRae et al. (2005). 
If the agents converged to higher level semantic meanings for the symbols,we would expect that objects belonging to the same category would activate the same symbols, e.g.,that, say, when the target images depict bayonets and guns, the sender would use the same symbolto refer to them, whereas cows and guns should not share a symbol.To quantify this, we form clusters by grouping objects by the symbols that are most often activatedwhen target images contain them. We then assess the quality of the resulting clusters by measuringtheir purity with respect to the McRae categories. Purity (Zhao & Karypis, 2003) is a standardmeasure of cluster “quality”. The purity of a clustering solution is the proportion of category labelsin the clusters that agree with the respective cluster majority category. This number reaches 100%for perfect clustering and we always compare the observed purity to the score that would be obtainedfrom a random permutation of symbol assignments to objects. Table 1 shows that purity, while farfrom perfect, is significantly above chance in all cases. We confirm moreover that the informedsender is producing symbols that are more semantically natural than those of the agnostic one.Still, surprisingly, purity is significantly above chance even when the latter is only using two sym-bols. From our qualitative evaluations, in this case the agents converge to a (noisy) characterizationof objects as “living-vs-non-living” which, intriguingly, has been recognized as the most basic onein the human semantic system (Caramazza & Shelton, 1998).Rather than using hard clusters, we can also ask whether symbol usage reflects the semantics of thevisual space. To do so we construct vector representations for each object (defined by its ImageNetlabel) by averaging the CNN fc representations of all category images in our data-set (see Section3 above). Note that the fc layer, being near the top of a deep CNN, is expected to capture high-level visual properties of objects (Zeiler & Fergus, 2014). Moreover, since we average across manyspecific images, our vectors should capture rather general, high-level properties of objects.We map these average object vectors to 2 dimensions via t-SNE mapping (Van der Maaten & Hinton,2008) and we color-code them by the majority symbol the sender used for images containing thecorresponding object. Figure 3 (left) shows the results for the current experiment. We see thatobjects that are close in CNN space (thus, presumably, visually similar) are associated to the samesymbol (same color). However, there still appears to be quite a bit of variation.4.1 O BJECT -LEVEL REFERENCEWe established that our agents can solve the coordination problem, and we have at least tentativeevidence that they do so by developing symbol meanings that align with our semantic intuition. We6Published as a conference paper at ICLR 2017id sender vis voc used comm purity (%)obs-chancerep size symbols success( %) purity (%)1 informed fc 100 43 100 45 212 informed fc 10 10 100 37 193 agnostic fc 100 2 92 23 74 agnostic fc 10 3 98 28 12Table 2: Playing the referential game with image-level targets: test results after 50K training plays.Columns as in Table 1. All purity values significant at p<0:001.turn now to a simple way to tweak the game setup in order to encourage the agents to further pursuehigh-level semantics.The strategy is to remove some aspects of “common knowledge” from the game. 
Common knowl-edge, in game-theoretic parlance, are facts that everyone knows, everyone knows that everyoneknows, and so on (Brandenburger et al., 2014). Coordination can only occur if the basis of thecoordination is common knowledge (Rubinstein, 1989), therefore if we remove some facts fromcommon knowledge, we will preclude our agents from coordinating on them. In our case, we wantto remove facts pertaining to the details of the input images, thus forcing the agents to coordinate onmore abstract properties. We can remove all low-level common knowledge by letting the agents playonly using class-level properties of the objects. We achieve this by modifying the game to show theagents different pairs of images but maintaining the ImageNet class of both the target and distractor(e.g., if the target is dog, the sender is shown a picture of a Chihuahua and the receiver that of aBoston Terrier).Table 2 reports results for various configurations. We see that the agents are still able to coordinate.Moreover, we observe a small increase in symbol usage purity, as expected since agents can nowonly coordinate on general properties of object classes, rather than on the specific properties of eachimage. This effect is clearer in Figure 3 (right), when we repeat t-SNE based visualization of therelationship that emerges between visual embeddings and the words used to refer to them in thisnew experiment.5 G ROUNDING AGENTS ’ COMMUNICATION IN HUMAN LANGUAGEThe results in Section 4 show communication robustly arising in our game, and that we can changethe environment to nudge agents to develop symbol meanings which are more closely related to thevisual or class-based semantics of the images. Still, we would like agents to converge on a languagefully understandable by humans, as our ultimate goal is to develop conversational machines. To dothis, we will need to ground the communication.Taking inspiration from AlphaGo (Silver et al., 2016), an AI that reached the Go master level bycombining interactive learning in games of self-play with passive supervised learning from a largeset of human games, we combine the usual referential game, in which agents interactively developtheir communication protocol, with a supervised image labeling task, where the sender must learnto assign objects their conventional names. This way, the sender will naturally be encouraged to usesuch names with their conventional meaning to discriminate target images when playing the game,making communication more transparent to humans.In this experiment, the sender switches, equiprobably, between game playing and a supervised im-age classification task using ImageNet classes. Note that the supervised objective does not aim atimproving agents’ coordination performance. Instead, supervision provides them with basic ground-ing in natural language (in the form of image-label associations), while concurrent interactive gameplaying should teach them how to effectively use this grounding to communicate.We use the informed sender, fc image representations and a vocabulary size of 100. Supervisedtraining is based on 100 labels that are a subset of the object names in our data-set (see Section 3above). When predicting object names, the sender uses the usual game-embedding layer coupledwith a softmax layer of dimensionality 100 corresponding to the object names. Importantly, thegame-embedding layers used in object classification and the reference game are shared. 
Consequently, we hope that, when playing, the sender will produce symbols aligned with object names acquired in the supervised phase.

Figure 4: Example pairs from the ReferItGame set, with the word produced by the sender (dolphin, fence). Target images framed in green.

The supervised objective has no negative effect on communication success: the agents are still able to reach full coordination after 10k training trials (corresponding to 5k trials of reference game playing). The sender uses many more symbols after training than in any previous experiment (88), and symbol purity dramatically increases to 70% (the obs-chance purity difference also increases to 37%).

Even more importantly, many symbols have now become directly interpretable, thanks to their direct correspondence to labels. Considering the 632 image pairs where the target gold-standard label corresponds to one of the labels that were used in the supervised phase, in 47% of these cases the sender produced exactly the symbol corresponding to the correct supervised label for the target image (chance: 1%).

For image pairs where the target image belongs to one of the directly supervised categories, it is not surprising that the sender adopted the "conventional" supervised label to signal the target. However, a very interesting effect of supervision is that it improves the interpretability of the code even when agents must communicate about images that do not contain objects in the supervised category set. This emerged in a follow-up experiment in which, during training, the sender was again exposed (with equal probability) to the same supervised classification task as above, but now the agents played the referential game on a different dataset of images derived from ReferItGame (Kazemzadeh et al., 2014). In its general format, the ReferItGame contains annotations of bounding boxes in real images with referring expressions produced by humans when playing the game. For our purposes, we constructed 10k pairs by randomly sampling two bounding boxes, to act as target and distractor. Again, the agents converged to perfect communication after 15k trials, and this time used all 100 available symbols in some trial.

We then asked whether this language was human-interpretable. For each symbol used by the trained sender, we randomly extracted 3 image pairs in which the sender picked that symbol and the receiver pointed at the right target (for two symbols, only 2 pairs matched these criteria, leading to a set of 298 image pairs). We annotated each pair with the word corresponding to the symbol in the supervised set. Out of the 298 pairs, only 25 (8%) included one of the 100 words among the corresponding referring expressions in ReferItGame. So, in the large majority of cases, the sender had been faced with a pair not (saliently) containing the categories used in the supervised phase of its training, and it had to produce a word that could, at best, only indirectly refer to what is depicted in the target image. We then tested whether this code would be understandable by humans. In essence, it is as if we replaced the trained receiver agent with a human.

We prepared a crowdsourced survey using the CrowdFlower platform. For each pair, human participants were shown the two images and the sender-emitted word (that is, the ImageNet label associated to the symbol produced by the sender; see examples in Figure 4). The participants were asked to pick the picture that they thought was most related to the word.
We collected 10 ratings for each pair. We found that in 68% of the cases the subjects were able to guess the right image. A logistic regression predicting subject image choice from ground-truth target images, with subjects and words as random effects, confirmed the highly significant correlation between the true and guessed images (z = 16.75, p < 0.0001). Thus, while far from perfect, we find that supervised learning on a separate data set does provide some grounding for communication with humans, that generalizes beyond the conventional word denotations learned in the supervised phase.

Looking at the results qualitatively, we found that very often sender-subject communication succeeded when the sender established a sort of "metonymic" link between the words in its possession and the contents of an image. Figure 4 shows an example where the sender produced dolphin to refer to a picture showing a stretch of sea, and fence for a patch of land. Similar semantic shifts are a core characteristic of natural language (e.g., Pustejovsky, 1995), and thus subjects were, in many cases, able to successfully play the referential game with our sender (10/10 subjects guessed the dolphin target, and 8/10 the fence). This is very encouraging. Although the language developed in referential games will be initially very limited, if both agents and humans possess the sort of flexibility displayed in this last experiment, the noisy but shared common ground might suffice to establish basic communication.

6 DISCUSSION

Our results confirmed that fairly simple neural-network agents can learn to coordinate in a referential game in which they need to communicate about a large number of real pictures. They also suggest that the meanings agents come to assign to symbols in this setup capture general conceptual properties of the objects depicted in the image, rather than low-level visual properties. We also showed a path to grounding the communication in natural language by mixing the game with a supervised task.

In future work, encouraged by our preliminary experiments with object naming, we want to study how to ensure that the emergent communication stays close to human natural language. Predictive learning should be retained as an important building block of intelligent agents, focusing on teaching them structural properties of language (e.g., lexical choice, syntax or style). However, it is also important to learn the function-driven facets of language, such as how to hold a conversation, and interactive games are a potentially fruitful method to achieve this goal. | B1rHxp-Ee | Review | 7: Good paper, accept | In this paper, a referential game is proposed between two agents. Both agents observe two images. The first agent, called the sender, receives a binary target variable (t) and must send a symbol (message) to the second agent, called the receiver, such that this agent can recover the target. Both agents get a reward if the receiver can predict the target. The paper proposes to parametrize the agents as neural networks - with pretrained representations of the images as feature vectors - and train them using REINFORCE. In this setting, it is shown that the agents converge to optimal policies and that their learned communications (e.g. the symbolic code transmitted from the sender to the receiver) carry some meaningful concepts. In addition to this, the paper presents experiments on a variant of the game grounded on different image classes.
In this setting, the agents appear to learn even more meaningful concepts. Finally, a multi-game setup is proposed, where the sender alternates between playing the game as before and performing a supervised learning task (classifying images). Not surprisingly, when anchored to the supervised learning task, the symbolic communications carry even more meaningful concepts.
Learning shared representations for communication in a multi-agent setup is an interesting research direction to explore. This is a much harder task compared to standard supervised learning or single-agent reinforcement learning tasks, which justifies starting with a relatively simple task. To the best of my knowledge, the approach of first learning communication between two agents and then grounding this communication in human language is novel. As the authors remark, this may be an alternative paradigm to standard sequence-to-sequence models, which tend to focus on statistical properties of language rather than its functional aspects. I believe the contributions of the proposed task and framework, and the analysis and visualization of what the communicated tokens represent, are a useful stepping stone for future work. For this reason, I think the paper should be accepted.
Other comments:
- How is the target (t) incorporated into the sender networks? Please clarify this.
- Table 1 and Table 2 use percentage (%) values differently. In the first, percentages seem to be written in the interval [0, 100], and in the second in the interval [0, 1]. Please correct this. Perhaps related to this, in Table 1, the column "obs-chance purity" seems to have extremely small values. I assume this was a mistake?
- "assest" -> "assess"
- "usufal" -> "usual" | 3: The reviewer is fairly confident that the evaluation is correct |
SygGlIBcel | ICLR.cc/2017/conference | 2017 | Opening the vocabulary of neural language models with character-level word representations | ["Matthieu Labeau", "Alexandre Allauzen"] | This paper introduces an architecture for an open-vocabulary neural language model. Word representations are computed on-the-fly by a convolution network followed by a pooling layer. This allows the model to consider any word, in the context or for the prediction. The training objective is derived from Noise-Contrastive Estimation to circumvent the lack of vocabulary. We test the ability of our model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experimental results show promising results, with a gain up to 0.7 BLEU point. They also emphasize the difficulty and instability when training such models with character-based representations for the predicted words. | ["Natural language processing", "Deep learning"] | ABSTRACT

This paper introduces an architecture for an open-vocabulary neural language model. Word representations are computed on-the-fly by a convolution network followed by a pooling layer. This allows the model to consider any word, in the context or for the prediction. The training objective is derived from Noise-Contrastive Estimation to circumvent the lack of vocabulary. We test the ability of our model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experimental results show promising results, with a gain up to 0.7 BLEU point. They also emphasize the difficulty and instability when training such models with character-based representations for the predicted words.

1 INTRODUCTION

Most neural language models, such as n-gram models (Bengio et al., 2003), are word-based and rely on the definition of a finite vocabulary $V$. As a consequence, a look-up table is associated to $V$, in which each word $w \in V$ is mapped to a vector of $d_E$ real-valued features stored in a matrix $L \in \mathbb{R}^{|V| \times d_E}$. While this approach has proven successful for a variety of tasks and languages, see for instance Schwenk (2007) in speech recognition and Le et al. (2012); Devlin et al. (2014); Bahdanau et al. (2014) in machine translation, it induces several limitations.

For morphologically-rich languages, like Czech or German, the lexical coverage is still an important issue, since there is a combinatorial explosion of word forms, most of which are hardly observed in training data. On the one hand, growing the look-up table is not a solution, since it would increase the number of parameters without having enough training examples for a proper estimation. On the other hand, rare words can be replaced by a special token. Nevertheless, this acts as a word class merging very different words without any distinction, and using different word classes to handle out-of-vocabulary words (Allauzen & Gauvain, 2005) does not really solve this issue, since rare words are difficult to classify.

Moreover, for most inflected or agglutinative forms, as well as for compound words, the word structure is overlooked, wasting parameters for modeling forms that could be more efficiently handled by word decomposition. While the use of subword units (Botha & Blunsom, 2014; Sennrich et al., 2016) could improve the generalization power of such models, it relies on a proper and efficient method to induce these subword units.

To overcome these issues, we propose to investigate a word-based language model with an open vocabulary.
Since most existing models and training criteria rely on the assumption of a finite vocabulary, the definition of an open-vocabulary model, along with a training criterion, constitutes a scientific challenge. Our goal is to build word representations for every word. A word representation is inferred on-the-fly from the word's character sequence, using convolution filters which implicitly capture subword patterns, as described in Section 2. The architecture is based on a neural n-gram model inspired by Bengio et al. (2003), while this idea can be extended to other kinds of models. By relaxing the normalization constraint, the objective function borrows from Noise Contrastive Estimation (Gutmann & Hyvärinen, 2012) to allow our model to consider a possibly infinite vocabulary. This paper focuses on this challenge and its related training issues. To assess the efficiency of this approach, the experimental setup described in Section 3 uses a large-scale translation task in a reranking setting. The experimental results summarized in Section 4 show promising results as well as training issues.

2 MODEL DESCRIPTION

Word embeddings are parameters, stored in a look-up matrix $L$. The embedding $e^{word}_w$ of a word $w$ is simply the column of $L$ corresponding to its index in the vocabulary:

$e^{word}_w = [L]_w$

2.1 CHARACTER-LEVEL WORD EMBEDDINGS

To infer a word embedding from its character embeddings, we use a convolution layer (Waibel et al., 1990; Collobert et al., 2011), similar to layers used in Santos & Zadrozny (2014); Kim et al. (2015). As illustrated in Figure 1, a word $w$ is a character sequence $\{c_1, \dots, c_{|w|}\}$ represented by the embeddings $\{C_{c_1}, \dots, C_{c_{|w|}}\}$, where $C_{c_i}$ denotes the vector associated to the character $c_i$. A convolution filter $W^{conv} \in \mathbb{R}^{d_e} \times \mathbb{R}^{d_c \times n_c}$ is applied over a sliding window of $n_c$ characters, producing local features:

$x_n = W^{conv} (C_{c_{n-n_c+1}} : \dots : C_{c_n})^T + b^{conv}$

where $x_n$ is a vector of size $d_e$ obtained for each position $n$ in the word. (Two padding character tokens are used to deal with border effects: the first is added at the beginning and the second at the end of the word, as many times as necessary to obtain the same number of windows as the length of the word; their embeddings are added to $C$.) The notation $(C_{c_{n-1}} : C_{c_n})$ denotes the concatenation of two embeddings. The $i$-th element of the embedding of $w$ is the mean over the $i$-th elements of the feature vectors, passed through the activation function $\phi$:

$[e^{char}]_i = \phi\left( \frac{\sum_{n=1}^{|w|-n_c+1} [x_n]_i}{|w| - n_c + 1} \right)$ (1)

Using a mean after a sliding convolution window ensures that the embedding combines local features from the whole word, and that the gradient is redistributed at scale for each character n-gram. The parameters of the layer are the matrices $C$ and $W^{conv}$ and the bias $b^{conv}$.

2.2 MODELS

Our model follows the classic n-gram feedforward architecture. The input of the network is an $N$-word context $H_i = (w_{i-1}, \dots, w_{i-N+1})$, and its output the probability $P(w|H_i)$ for each word $w \in V$. The embeddings of the words in the context are concatenated and fed into a hidden layer:

$h_{H_i} = \phi(W^{hidden} (e_{i-1} : \dots : e_{i-N+1}) + b^{hidden})$

A second hidden layer may be added. Finally, the output layer computes scores for each word:

$s_{H_i} = \exp(W^{out} h_{H_i} + b^{out})$

$W^{hidden}$, $b^{hidden}$, $W^{out}$ and $b^{out}$ are the parameters of the model. As the input look-up matrix $L$, the output weight matrix $W^{out}$ contains word embeddings, which are output representations of the words in the vocabulary:

$e^{out}_w = [W^{out}]_w$

Then, the output probabilities are expressed as:

$P(w|H_i) = \frac{\exp(e^{out}_w \cdot h_{H_i})}{\sum_{1 \le j \le |V|} \exp(e^{out}_j \cdot h_{H_i})}$
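As an illustration of Sections 2.1 and 2.2, here is a minimal numpy sketch of the character-level embedding of Eq. 1 followed by the feedforward scorer. The dimensions ($d_c = 32$, $n_c = 5$, $d_e = 128$) and the ReLU activation match the experimental settings described later, but the toy character set, the random weights, and the end-side padding of short words are illustrative assumptions.

```python
# A minimal numpy sketch of Eq. 1 and the feedforward scorer. Weights are
# random placeholders; < and > stand in for the padding character tokens.
import numpy as np

rng = np.random.default_rng(0)
d_c, n_c, d_e, d_h = 32, 5, 128, 128           # char dim, window size, word dim, hidden dim
chars = "abcdefghijklmnopqrstuvwxyz<>"          # toy character vocabulary
C = rng.normal(0, 0.1, (len(chars), d_c))       # character look-up table
W_conv = rng.normal(0, 0.1, (d_e, d_c * n_c))
b_conv = np.zeros(d_e)
relu = lambda x: np.maximum(0, x)

def char_embedding(word):
    """e^char: convolution over sliding character windows, then mean + relu (Eq. 1)."""
    padded = "<" + word + ">"
    padded += ">" * max(0, n_c - len(padded))   # grow short words to n_c symbols (assumed end-side)
    idx = [chars.index(c) for c in padded]
    windows = [np.concatenate([C[i] for i in idx[n:n + n_c]])
               for n in range(len(idx) - n_c + 1)]
    return relu(np.mean([W_conv @ x + b_conv for x in windows], axis=0))

# n-gram scorer of Section 2.2 over a 3-word context, with a toy 4-word output vocabulary.
N, V_out = 3, 4
W_hid = rng.normal(0, 0.1, (d_h, N * d_e)); b_hid = np.zeros(d_h)
W_out = rng.normal(0, 0.1, (V_out, d_h));   b_out = np.zeros(V_out)

context = np.concatenate([char_embedding(w) for w in ["ale", "na", "treba"]])
h = relu(W_hid @ context + b_hid)
s = np.exp(W_out @ h + b_out)                   # unnormalized scores s_H for each output word
```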
Later, we will use three different input layers to obtain word representations:

- a classic NLM using word-level embeddings only, that we will note WE, which uses $|V| \times d_e$ parameters;
- an NLM using embeddings constructed from character n-grams by convolution + pooling, that we will note CE, which uses $|V_c| \times d_c + d_c \times n_c \times d_e$ parameters;
- an NLM using a concatenation of these two types of embeddings as word representation, that we will note CWE.

Figure 1: CWE model architecture.

2.3 OBJECTIVE FUNCTION FOR OPEN-VOCABULARY MODELS

Usually, such a model is trained by maximizing the log-likelihood. For each word given its context, the model parameters $\theta$ are estimated in order to maximize the following function over all the n-grams observed in the training data:

$LL(\theta) = \sum_{1 \le i \le |D|} \log P(w_i | H_i)$

This objective function raises two important issues. For conventional word models, it implies a very costly summation imposed by the softmax activation of the output layer. More importantly, this objective requires the definition of a finite vocabulary, while the proposed model may use character-based word embeddings, especially at the output, making the notion of vocabulary obsolete.

Therefore, the parameter estimation relies on Noise Contrastive Estimation (NCE), introduced in Gutmann & Hyvärinen (2012); Mnih & Teh (2012). This criterion allows us to train both types of models, based on conventional word embeddings along with character-based embeddings. The NCE objective aims to discriminate between examples sampled from the real data and from a noise distribution. When presented with examples coming from a mixture of one sample from the data distribution $P_d$ and $k$ from the noise distribution $P_n$, $P^H(w \in D)$ denotes the posterior probability of a word $w$ given its context $H$ to be sampled from the training data $D$. This probability can be expressed as follows:

$P^H(w \in D) = \frac{P^H_d(w)}{P^H_d(w) + k P_n(w)}$

As suggested in Mnih & Teh (2012), $P_n$ only depends on $w$ here, since we chose the unigram distribution estimated on the training data. If

$s^H_\theta(w) = \exp(e^{out}_w \cdot h_H + b^{out})$ (2)

denotes the non-normalized score given by the model to a specific word $w$, as a function of the parameters $\theta$ and the context $H$, the final NCE objective function has the following form (Gutmann & Hyvärinen, 2012):

$J^H(\theta) = E_{P^H_d}\left[ \log \frac{s^H_\theta(w)}{s^H_\theta(w) + k P_n(w)} \right] + k\, E_{P_n}\left[ \log \frac{k P_n(w)}{s^H_\theta(w) + k P_n(w)} \right]$

where $s^H_\theta$ will tend to $P^H_d$ without the need for an explicit normalization.

2.4 CHARACTER-BASED OUTPUT WEIGHTS WITH NOISE-CONTRASTIVE ESTIMATION

The output weights $e^{out}$ representing each word in the vocabulary can also be replaced by embeddings computed by a convolution layer on character n-grams. In this case the model can efficiently represent and infer a score for any word, observed during the training process or not, while with conventional word embeddings, out-of-vocabulary words only share the same representation and distribution. Instead of using a parameter matrix $W^{out}$ to estimate the score as in Equation 2, the output representation of a word $w$, $e^{out}_w$, can be replaced by a vector $e^{char-out}_w$ estimated on the fly from its character sequence as described in Equation 1, using $|V_c| \times d_c + d_c \times n_c \times d_h$ parameters. With this extension the model does not rely on a vocabulary anymore, hence motivating our choice of NCE.
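To make the objective concrete, here is a minimal sketch of a Monte-Carlo estimate of $J^H$ for one training example: one word drawn from the data and $k$ words drawn from the unigram noise distribution. The scores and noise probabilities are placeholder values; in the model they would come from $s^H_\theta$ and the training-data unigram counts.

```python
# A minimal sketch of the NCE objective J^H above, estimated from one data
# word and k noise words. s_* are unnormalized model scores s_theta^H(w);
# pn_* are unigram noise probabilities P_n(w). Values here are placeholders.
import numpy as np

def nce_objective(s_data, pn_data, s_noise, pn_noise, k):
    pos = np.log(s_data / (s_data + k * pn_data))                  # data term
    neg = np.sum(np.log(k * pn_noise / (s_noise + k * pn_noise)))  # k noise samples
    return pos + neg                    # maximize this (minimize its negation)

k = 25  # number of noise samples per data word, as in the experiments below
value = nce_objective(s_data=3.2, pn_data=1e-4,
                      s_noise=np.full(k, 0.5), pn_noise=np.full(k, 1e-3), k=k)
```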
This unnormalized objective allows us to handle an open vocabulary, since we only need to compute $k+1$ word representations for each training example. Models that use character-based embeddings both for input and output words are denoted by CWE-CWE.

Moreover, with this extension, the representations of words sharing character n-grams are tied. This is an important property to let the model generalize to unseen words. However, it can also be an issue: the limited number of updates for output representations ($k+1$ words) has a "rich get richer" effect: the most frequent words are usually short and will get most of the updates. They may therefore "contaminate" the representations of longer words with which they share character n-grams, even if these words are not related. This issue is further addressed in Section 4.1.

3 EXPERIMENTAL SET-UP

The impact of the models described in Section 2 is evaluated within the machine translation (MT) shared task of IWSLT-2016 (http://workshop2016.iwslt.org) from English to Czech. This language pair is highly challenging since Czech is a morphologically-rich language. Neural language models are integrated in a two-step approach: the first step uses a conventional MT system to produce an n-best list (the $n$ most likely translations); in the second step, these hypotheses are re-ranked by adding the score of the neural language model. To better benefit from the open-vocabulary models introduced in Section 2.1, a more complex system is also used: first an MT system is used to translate from English to a simplified form of Czech, which is then reinflected. With this pipeline we expect n-best lists with more diversity and also words unseen during the training process. The neural language models are then used to re-rank the reinflected n-best lists.

3.1 DATA

The IWSLT16 MT task is focused on the translation of TED talks. The translation systems are trained on parallel data from the TED, QED and europarl corpora. Our neural language models are trained on the same data, but training examples are sampled from these corpora given weights that are computed to balance between in-domain parallel data (TED), out-of-domain parallel data, and additional monolingual data. Finally, we use the concatenation of TED.dev2010, TED.dev2011 and TED.tst2010 as development set, while TED.tst2012 and TED.tst2013 provide the test set.

3.2 CZECH RE-INFLECTION

In Czech, a morphologically rich language, each lemma can take a lot of possible word forms. Most of them do not appear in training data, or only with a very low frequency. For an important part of the words found in test data and unseen during training, their lemmas can however be observed, but with a different morphological derivation.

A non-observed word form cannot be generated by the translation system, and one seen too rarely will not be used in a relevant way. To circumvent this limitation, in a similar fashion to the method described in Marie et al. (2015), each noun, pronoun and adjective is replaced in the training corpora by its lemma along with some morphological features. These word forms are considered in a factored way, where some of the POS tags are discarded to reduce the vocabulary. After the translation process, a cascade of Conditional Random Fields (CRF) is used to reintroduce the discarded features, such as gender, number and case, and to generate a new word form; a minimal sketch of the simplification step is given below.
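The sketch below illustrates the factored simplification step under stated assumptions: the morphological analyses, the feature names, and the choice of which features to keep are hypothetical, since the text does not spell out the exact tag set; in the real pipeline the discarded features are later restored by the CRF cascade.

```python
# A hedged sketch of the factored representation: nouns, pronouns and
# adjectives are replaced by 'lemma|kept-features', with some features
# (e.g. gender, number, case) discarded to reduce the vocabulary. The
# analyses and feature names below are hypothetical.
KEPT = {"pos"}  # assumed subset of features kept at translation time

def simplify(token, analysis):
    if analysis["pos"] not in {"NOUN", "PRON", "ADJ"}:
        return token                    # other categories are left unchanged
    kept = "|".join(f"{k}={v}" for k, v in sorted(analysis.items()) if k in KEPT)
    return f"{analysis['lemma']}|{kept}"

# Hypothetical analysis of the Czech form "zenami" ('women', instrumental plural):
print(simplify("zenami", {"pos": "NOUN", "lemma": "zena", "gender": "F",
                          "number": "Pl", "case": "Ins"}))
# -> zena|pos=NOUN   (gender, number and case are restored later by the CRFs)
```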
Formally, the MT system translates English into a simplified version of Czech, which is then reinflected. Within this process, the MT system can produce an n-best list, which can be extended to an n*k-best list by considering, for each translation hypothesis, the k-best reinflected sentences given by the factorized CRF. Intuitively, this process can introduce word forms potentially not yet seen in training data, but based on known paradigms, which can give an advantage to language models able to build a word representation from character n-grams.

3.3 BASELINE TRANSLATION SYSTEM

Our baseline is built with a Statistical Machine Translation system based on bilingual n-grams, NCODE (http://ncode.limsi.fr), described in Crego et al. (2011). We follow the same setup as in Marie et al. (2015).

3.4 NLM TRAINING AND OPTIMIZATION

First, some comparative experiments on a smaller dataset were carried out to better understand how open-vocabulary NLMs behave and to set the hyper-parameters. When first training with stochastic gradient descent, we observed a quite unstable training process, restricting proper hyper-parameter choices. We found that especially the embedding dimensions and the activation functions used could make the NCE objective hard to optimize. This was aggravated in Czech, which we found more difficult to work with than other morphologically complex languages, like German and Russian. The use of Adagrad (Duchi et al., 2010) clearly helps to solve most of these issues, but adds considerable computation time. Following preliminary results of our work with a similar model on a different task (Labeau et al., 2015), we chose not to implement LSTMs to obtain character-level word representations: they gave similar results, at the cost of unstable training and extended computation time. We then train, using batches of 128 and various context sizes, the WE, CWE, and CWE-CWE models. The ReLU activation function is used, along with an embedding size of $d_e = 128$. When relevant, we used a character embedding size of $d_c = 32$ and a convolution on $n_c = 5$-grams of characters for all experiments (results did not differ significantly when increasing these embedding sizes, with an impact on convergence speed and computation time). Concerning the NCE training, we sampled $k = 25$ examples from the unigram distribution obtained from the training data for each example sampled from the data. The models were implemented using C++ (the implementation will be made available).

3.5 RERANKING

The re-ranking step uses additional features to find a better translation among the n-best generated by the decoder (in our case, $n = 300$): we use the score (probability) given to each sentence by our WE, CWE and CWE-CWE models as such a feature. Tuning for re-ranking was performed with KB-MIRA (Cherry & Foster, 2012), and evaluation uses the BLEU score.
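As a sketch of the re-ranking step, the snippet below rescores an n-best list by appending the NLM score to each hypothesis's decoder features and taking a weighted sum; the feature weights, which KB-MIRA would tune in the real system, and the toy scorer are placeholders.

```python
# A minimal sketch of n-best reranking with the NLM score as an additional
# feature. Weights are fixed placeholders (KB-MIRA would tune them), and
# toy_nlm stands in for a WE / CWE / CWE-CWE model score.
def rerank(nbest, nlm_score, weights):
    """nbest: list of (hypothesis, decoder_features); returns the best hypothesis."""
    def total(hyp, feats):
        feats = feats + [nlm_score(hyp)]        # append the NLM feature
        return sum(w * f for w, f in zip(weights, feats))
    return max(nbest, key=lambda pair: total(*pair))[0]

toy_nlm = lambda hyp: -0.1 * len(hyp.split())   # stand-in log-score
nbest = [("to je pes", [-4.2, -1.0]), ("to je kocka", [-4.0, -1.3])]
best = rerank(nbest, toy_nlm, weights=[1.0, 0.5, 0.8])
```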
4 EXPERIMENTAL RESULTS

The first set of experiments investigates the impact of the padding design on the character-level representation, followed by a study of the learning behavior of our proposed models and training criterion. Then, the proposed models are evaluated within the MT task. The final set of experiments analyzes the issues of the model based on character-level representations for output words, in order to propose remedies.

4.1 TIES BETWEEN CHARACTER-LEVEL REPRESENTATIONS OF OUTPUT WORDS

Preliminary results on a smaller dataset are quite poor for models using character-level representations, and far worse when they are used for the output layer. We suspect that groups of characters are updated far more often together, yielding a "contamination" of several character n-grams by very frequent short words. Indeed, our simple padding scheme, as shown in the left part of Table 1, makes words sharing first or last letter(s) systematically share at least one character n-gram: we suppose it gives the model more chances to detect similarities in word forms sharing prefixes and suffixes.

The representations of any of the character n-grams that are included in the frequent words will thus be re-used in a large part of the other words in the corpus. A huge number of word forms are affected: a little more than one third of the training data shares its first character n-gram with one of the ten most frequent words, and a little more than one quarter shares its last.

While considering varying sizes of character n-grams when building our word representation, as in Kim et al. (2015), would certainly help, it would increase our computation time. We thus choose to alleviate our padding scheme, as shown in the right part of Table 1. We add only one character token at the beginning of the word, and one at the end (for short words, we add the number of tokens necessary for the word to have at least $n_c = 5$ characters, as shown in Table 1). While it may inhibit the capacity of the model to build links between words sharing prefixes or suffixes, it improves results drastically, especially when using character-level outputs, as shown in Figure 3. This limited padding scheme is used for the following experiments.

Table 1: Padding for word decomposition into character 5-grams: one special character token indicates the beginning of the word, while another indicates the end. The left part of the table shows our original padding scheme, which makes very different words share character 5-grams, especially with short, frequent words. The right part of the table shows our alleviated padding scheme.
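The sketch below contrasts the two padding schemes of Table 1, using <s> and </s> as stand-ins for the special tokens; the exact amount of padding in the original scheme is one plausible reading of the description above, so the counts are illustrative.

```python
# A minimal sketch of the two padding schemes. Under the full scheme
# (assumed here: n-1 tokens on each side), short frequent words share their
# first/last 5-grams with many other words; under the alleviated scheme a
# single token is added on each side, plus end tokens for very short words.
def char_ngrams(symbols, n=5):
    return [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]

def full_padding(word, n=5):
    return ["<s>"] * (n - 1) + list(word) + ["</s>"] * (n - 1)

def light_padding(word, n=5):
    symbols = ["<s>"] + list(word) + ["</s>"]
    return symbols + ["</s>"] * max(0, n - len(symbols))

for w in ["a", "ale"]:                          # short, frequent Czech words
    print(w, char_ngrams(full_padding(w)))
    print(w, char_ngrams(light_padding(w)))
# Under full padding, "a" and "ale" share the first window (<s>,<s>,<s>,<s>,a);
# under light padding, they share no 5-gram.
```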
4.2 NLM TRAINING

While the perplexity of our language models is not our main focus, it is still related to the quantity that our training seeks to optimize, since the NCE gradient approaches the maximum-likelihood gradient (Mnih & Teh, 2012). Figure 2 shows the perplexity values of each model during training. These values are based on a vocabulary containing the 250K most frequent words in the training data; this is also the vocabulary used in the model when relevant. They are computed on the development set after each epoch. An epoch includes 2.5M n-grams sampled from the training data. Table 2 shows the best perplexity obtained on the development set by each model during training.

Context size (number of words) | 3 | 6
WE | 227 | 193
CWE | 207 | 185
CWE-CWE | 308 | 243

Table 2: Best perplexity reached on the development set, on a 250K output vocabulary, after 15 epochs of 2.5M n-grams.

Table 2 shows that a character-level word representation helps to decrease the perplexity, even if a larger context closes the gap.

Figure 4 (comprising Figures 2 and 3): Model perplexity measured on the development set during training. The context size is 3 words. Figure 3 shows models based on character-level word representations, with and without complete padding; these models are trained on the same data as in Figure 2 but with smaller epochs (250K n-grams).

To compute the perplexity of CWE-CWE models, we use the same vocabulary as for the other models, and use the 'unknown' tokens for word- and character-based representations. Hence, the perplexity computed is difficult to interpret. The main downside of Adagrad is that the learning rate determined by accumulating the history of past gradients is usually too aggressive and stops learning rather early. We simply reset this history every five epochs to give the model a chance to improve, which explains the flattening followed by small improvements we see for the WE and CWE models. We chose to do that reset 2 times, based on previous experiments. Despite the adaptive gradient, training of CWE-CWE models stays unstable.

4.3 RERANKING

System to be re-ranked | Reference BLEU | CWE (n=3 / n=6) | CWE-CWE (n=3 / n=6) | WE (n=3 / n=6)
En -> Cz: Baseline system | 19.6 | 20.1 / 20.3 | 19.8 / 20.0 | 20.0 / 20.2
En -> Simplified Cz: Reinflected baseline system | 19.5 | 20.0 / 20.2 | 19.6 / 20.1 | 20.1 / 20.0
En -> Simplified Cz: 3-best reinflected baseline system | - | 19.9 / 20.3 | 19.6 / 20.0 | 20.1 / 20.1
En -> Simplified Cz: 5-best reinflected baseline system | - | 19.9 / 20.3 | 19.5 / 19.9 | 20.0 / 20.1

Table 3: Best BLEU score obtained after n-best reranking of the hypotheses given by the translation and translation + k-best reinflection systems. n is the context size (in number of words).

The reranking results are shown in Table 3. The first line corresponds to experiments with a direct translation from English to Czech, where the n-best lists generated by the MT system are simply rescored by our models. The best result is given by the longest-context CWE model, which produces a +0.7 BLEU score improvement. CWE models give on average +0.1 BLEU point compared to WE models, while CWE-CWE models are 0.2 BLEU point under. Doubling the context size consistently improves results by +0.2 BLEU point.

Experimental results on reinflected Czech seem to follow a similar trend: CWE models behave a little better than WE models, while CWE-CWE models are under. While simply reranking n-best lists is not as efficient as doing it directly in Czech, reranking n*k-best lists extended by the factorized CRF gives a small improvement, reaching +0.7 BLEU point. As a general rule, small-context models seem to have difficulties with reinflected Czech. The main advantage given by the CWE model is an ability to better rerank n*k-best lists. These results suggest that, while the normalization + reinflection procedure may introduce diversity in the output to be reranked, our models are not able to draw any significant advantage from it.

4.4 ANALYSIS OF CHARACTER-LEVEL OUTPUT REPRESENTATION PERFORMANCE

Models using character-level output representations gave sub-par results on re-ranking. This is surprising, especially for re-inflected Czech: such a model is supposed to behave better on unknown words, and thus should benefit from the diversity brought by generating new words. However, as we can see in Table 4, re-inflection does not add that much diversity (about 0.1% of OOV words, and about 0.001% of words never seen by the model before).
Diversity is also inhibited by our training algorithm: while we train open-vocabulary models, the negative examples used with Noise-Contrastive Estimation come from a closed vocabulary.

System output | Full training vocabulary | 250K-word vocabulary
Reference | 0.131 % | 0.995 %
En -> Cz (300-best) | 0.566 % | 1.173 %
En -> Simplified Cz + reinflection | 0.567 % | 1.263 %
En -> Simplified Cz + 3-best reinflection | 0.567 % | 1.277 %
En -> Simplified Cz + 5-best reinflection | 0.568 % | 1.285 %

Table 4: Ratio of unknown words in system outputs, measured on the test set.

This can be related to the nature of the unigram distribution used to sample negative examples. As explained in Section 4.1, it makes frequent short words completely outweigh the others in number of updates, and we are forced to reduce the ability of the model to find common morphological attributes between words to avoid 'contamination' of character n-gram representations.

5 RELATED WORK

There are a number of different strategies to efficiently train NNLMs with large vocabularies, such as different types of hierarchical softmax (Mnih & Hinton, 2009; Le et al., 2011), importance sampling (Bengio & Sénécal, 2003), and Noise Contrastive Estimation (Gutmann & Hyvärinen, 2012; Mnih & Teh, 2012). Vaswani et al. (2013) showed the interest of training an NLM with NCE to re-rank k-best lists, while Devlin et al. (2014) use a self-normalization. Recently, a comparative study (Chen et al., 2016) has been made on how to deal with a large vocabulary. However, the purpose of this paper is to explore models with an open vocabulary rather than a large vocabulary.

There is a surge of interest in using character-level information for a wide range of NLP tasks, with improved results in POS tagging (Santos & Zadrozny, 2014), text classification (Zhang & LeCun, 2015), parsing (Ballesteros et al., 2015), and named entity recognition (Lample et al., 2016). The first applications to language modeling were strictly character-based, and performed worse than word-level models (Mikolov et al., 2012), while showing impressive results for text generation (Sutskever et al., 2011; Graves, 2013), using bi-directional LSTMs (Graves et al., 2013). Recently, Ling et al. (2015) used bi-directional LSTMs to build word representations from characters, with improvements in language modeling and POS tagging.

The recent work of Kim et al. (2015), which uses convolutional networks and pooling to construct a word representation from character n-grams, coupled with highway networks (Srivastava et al., 2015), showed on various languages that using characters improves results on the language modeling task (for a small corpus), even more so for languages with complex morphology. A similar architecture was used by Józefowicz et al. (2016) on a larger dataset, conjointly with bi-directional LSTMs, and trained with importance sampling, showing great results.

On the study of NNLMs in the context of Machine Translation, we can mention the work of Luong et al. (2015) on the effect of the number of layers on reranking n-best lists. Finally, while not directly related to our work, Luong & Manning (2016) very recently showed great improvements on a translation task by handling rare words with character-level recurrent networks, within a neural translation model.

6 CONCLUSION

In this work, we addressed the challenge of designing an open-vocabulary Neural Language Model. For that purpose, word representations are estimated on-the-fly from n-grams of characters.
Two kinds of models are introduced: first, NLMs using word- and character-level embeddings to represent the input context (CWE); then its extension to an open vocabulary even for the predicted words (CWE-CWE). These models were used to re-rank outputs of translation systems from English to Czech. We also carried out experiments on translation systems from English to a simplified Czech, which is then re-inflected into Czech before re-ranking.

We obtained a slight improvement in BLEU score using a CWE model, which, given the little variety of the words generated by translation systems, makes us suppose there is room for more. We plan to experiment with more complex translation systems, as well as with other applications, such as morphological re-inflection.

While the performance of our open-vocabulary models is to some extent disappointing, they open questions about the learned representations that we will explore. We also plan to investigate a noise distribution better fitted to NCE training of open-vocabulary models.

ACKNOWLEDGMENTS | rJ1RyB7Ng | lacks experimental evidence | 2: Strong rejection | This paper proposes a model for representing unseen words in a neural language model. The proposed model achieves poor results in LM and a slight improvement over a baseline model.
This work needs a more comprehensive analysis:
- there's no comparison with related work trying to address the same problem
- an intrinsic evaluation and investigation of why/how their work should be better are missing.
- to make a bolder claim, more investigation should be done with other morphologically rich languages. Especially for MT, in addition to going from En -> Language_X, MRL_X -> En or MRL_X -> MRL_Y should be done.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SygGlIBcel | ICLR.cc/2017/conference | 2017 | Opening the vocabulary of neural language models with character-level word representations | ["Matthieu Labeau", "Alexandre Allauzen"] | This paper introduces an architecture for an open-vocabulary neural language model. Word representations are computed on-the-fly by a convolution network followed by pooling layer. This allows the model to consider any word, in the context or for the prediction. The training objective is derived from the Noise-Contrastive Estimation to circumvent the lack of vocabulary. We test the ability of our model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experimental results show promising results, with a gain up to 0.7 BLEU point. They also emphasize the difficulty and instability when training such models with character-based representations for the predicted words. | ["Natural language processing", "Deep learning"] | ABSTRACTThis paper introduces an architecture for an open-vocabulary neural languagemodel. Word representations are computed on-the-fly by a convolution networkfollowed by pooling layer. This allows the model to consider any word, in thecontext or for the prediction. The training objective is derived from the Noise-Contrastive Estimation to circumvent the lack of vocabulary. We test the ability ofour model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experimental results showpromising results, with a gain up to 0.7 BLEU point. They also emphasize thedifficulty and instability when training such models with character-based repre-sentations for the predicted words.1 I NTRODUCTIONMost of neural language models, such as n-gram models Bengio et al. (2003) are word based andrely on the definition of a finite vocabulary V. As a consequence, a Look-up table is associated toVin which each word w2V is mapped to a vector of dEreal valued features stored in a matrixL2RjVjdE. While this approach has proven successful for a variety of tasks and languages, see forinstance Schwenk (2007) in speech recognition and Le et al. (2012); Devlin et al. (2014); Bahdanauet al. (2014) in machine translation, it induces several limitations.For morphologically-rich languages, like Czech or German, the lexical coverage is still an importantissue, since there is a combinatorial explosion of word forms, most of which are hardly observed ontraining data. On the one hand, growing the Look-up table is not a solution, since it would increasethe number of parameters without having enough training example for a proper estimation. On theother hand, rare words can be replaced by a special token. Nevertheless, this acts as a word classmerging very different words without any distinction and using different word classes to handle out-of-vocabulary words Allauzen & Gauvain (2005) does not really solve this issue, since rare wordsare difficult to classify.Moreover, for most inflected or agglutinative forms, as well as for compound words, the word struc-ture is overlooked, wasting parameters for modeling forms that could be more efficiently handledby word decomposition. While the use of subword units Botha & Blunsom (2014); Sennrich et al.(2016) could improve the generalization power of such models, it relies on a proper and efficientmethod to induce these subword units.To overcome these issues, we propose to investigate a word based language model with an openvocabulary. 
Since most of existing models and training criteria rely on the assumption of a finitevocabulary, the definition of an open vocabulary model, along with a training criterion, constitutesa scientific challenge. Our goal is to build word representations every words. Word representationsare inferred on-the-fly from its character sequence, using convolution filters which implicitly cap-ture subword patterns, as described in section 2. The architecture is based on a neural ngram modelinspired from Bengio et al. (2003), while this idea can be extended to other kind of models. Byrelaxing the normalized constraint, the objective function borrows from the noise contrastive esti-mation Gutmann & Hyv ̈arinen (2012) to allow our model to consider a possibly infinite vocabulary.This paper focusses on this challenge and its related training issues. To assess the efficiency of1Under review as a conference paper at ICLR 2017this approach, the experimental setup described in section 3 uses a large scale translation task in areranking setting. The experimental results summarized in section 4 show promising results as wellas training issues.2 M ODEL DESCRIPTIONWord embeddings are parameters, stored in a Look-up matrix L. The embedding ewordw of a wordwis simply the column of Lcorresponding to its index in the vocabulary:ewordw = [L]w2.1 C HARACTER -LEVEL WORD EMBEDDINGSTo infer a word embedding from its character embeddings, we use a convolution layer Waibel et al.(1990); Collobert et al. (2011), similar to layers used in Santos & Zadrozny (2014); Kim et al.(2015). As illustrated in figure 1, a word wis a character sequence fc1;::;cjwjgrepresented by theirembeddingsfCc1;::;Ccjwjg, where Ccidenotes the vector associated to the character ci. A convo-lution filter Wconv2RdeRdcncis applied over a sliding window of nccharacters, producinglocal features :xn=Wconv(Ccnnc+1::::Ccn)T+bconvwherexnis a vector of size deobtained for each position nin the word1. The notation ( Ccn1:Ccn)denotes the concatenation of two embeddings. The i-th element of the embedding of wis the meanover thei-th elements of the feature vectors, passed by the activation function :[echar]i=0@jwjnc+1Xn=1[xn]ijwjnc+ 11A (1)Using a mean after a sliding convolution window ensures that the embedding combines local featuresfrom the whole word, and that the gradient is redistributed at scale for each character n-gram. Theparameters of the layer are the matrices CandWconvand the bias bconv.2.2 M ODELSOur model follows the classic n-gram feedforward architecture. The input of the network is a n-words context Hi= (wi1;:::;wNi+1), and its output the probability P(wjHi)for each wordw2V. The embeddings of the word in the context are concatenated and fed into a hidden layer:hHi=(Whidden(ei1:::::eNi+1) +bhidden)A second hidden layer my be added. Finally, the output layer computes scores for each word:sHi= exp ( WouthHi+bout)Whidden,bhidden,Woutandboutare the parameters of the model. As the input Lookup-matrixL, the output weight matrix Woutcontains word embeddings, that are output representations of thewords in the vocabulary:eoutw= [Wout]wThen, the output probabilities are expressed as:P(wjHi) =expeoutwhHiP1<j<jVjexpeoutjhHiLater, we will use three different input layer to obtain word representations:1Two padding character tokens are used to deal with border effects. The first is added at the beginning andthe second at the end of the word, as many times as it is necessary to obtain the same number of windows thanthe length of the word. 
Their embeddings are added to C.2Under review as a conference paper at ICLR 2017jCjdCdEjVjLook-up Table LCharacter look-up Table Ccj2cj1cjcj+1cj+2mean (.)WconvCEWEWhiddenWoutecharwi1echarwi2echarwi3sCihiCi= (wi:wi1:wi2)ewordwi1ewordwi2ewordwi3Character-level representationWord-levelrepresentationFigure 1: CWE Model architectureA classic NLM using word-level embeddings only, that we will note WE, which usesjVjdeparameters.A NLM using embeddings constructed from character n-grams by convolution + pooling,that we will note CE, which usesjVcjdc+dcncdeparameters.A NLM using a concatenation of these two types of embeddings as word representation,that we will note CWE .2.3 O BJECTIVE FUNCTION FOR OPEN VOCABULARY MODELSUsually, such a model is trained by maximizing the log-likelihood. For a given word given itscontext, the model parameters are estimated in order to maximize the following function for allthe n-grams observed in the training data:LL() =X1<i<jDjlogP(wijHi):This objective function raises two important issues. For conventional word models, it implies a verycostly summation imposed by the softmax activation of the output layer. More importantly, thisobjective requires the definition of a finite vocabulary, while the proposed model may use character-based word embeddings, especially at the output, making the notion of vocabulary obsolete.Therefore, the parameters estimation relies on Noise Contrastive Estimation (NCE) introducedin Gutmann & Hyv ̈arinen (2012); Mnih & Teh (2012). This criterion allows us to train both typesof models based on conventional word embeddings, along with character-based embeddings. TheNCE objective function aims to discriminate between examples sampled from the real data and froma noise distribution. When presented with examples coming from a mixture of one sample from thedata distribution Pdandkfrom the noise distribution Pn,PH(w2D)denotes the posterior proba-bility of a word wgiven its context Hto be sampled from the training data D. This probability canbe expressed as follows:PH(w2D) =PHd(w)PHd(w) +kPn(w)As suggested in Mnih & Teh (2012), Pnonly depends on where, since we chose the unigramdistribution estimated on the training data. IfsH(w) = exp ( eouthH+bout) (2)denotes the non-normalized score given by the model to a specific word w, as a function of theparametersand the context H, the final NCE objective function has the following form Gutmann3Under review as a conference paper at ICLR 2017& Hyv ̈arinen (2012):JH=EsHlogsH(w)sH(w) +kPn(w)+kEPnlogkPn(w)sH(w) +kPn(w);wheresHwill tend toPHdwithout the need for an explicit normalization.2.4 C HARACTER -BASED OUTPUT WEIGHTS WITH NOISE -CONTRASTIVE ESTIMATIONThe output weights representing each word in the vocabulary eoutcan also be replaced by embed-dings computed by a convolution layer on character n-grams. In this case the model can efficientlyrepresent and infer a score to any word, observed during the training process or not, while withconventional word embeddings, out of vocabulary words only share the same representation anddistribution. Instead of using a parameter matrix Woutto estimate the score like in equation 2, theoutput representation of a word w,eoutwcan be replaced by a vector echaroutw estimated on the flybased on its character sequence as described in equation 1, using jVcjdc+dcncdhparameters.With this extension the model does not rely on a vocabulary anymore, hence motivating our choiceof the NCE. 
This unnormalized objective allows us to handle an open vocabulary, since we only needto computek+ 1word representations for each training examples. Models that use character-basedembeddings both for input and output words are denoted by CWE-CWE .Moreover, with this extension, the representations of words sharing character n-grams are tied. Thisis an important property to let the model generalize to unseen words. However, it can be also anissue: the limited number of updates for output representations ( k+ 1words) has a “rich get richer”effect: the most frequent words are usually short and will get most of the update. They may therefore”contaminate” the representation of longer words with which they share character n-grams, even ifthese words are not related. This issue is further addressed in section 4.1.3 E XPERIMENTAL SET -UPThe impact of the models described in section 2 is evaluated within the machine translation (MT)shared task of IWSLT-20162from Englih to Czech. This language pair is highly challenging sinceCzech is a morphologically-rich language. Neural language models are integrated in a two stepsapproach: the first step uses a conventional MT system to produce an n-best list (the nmost likelytranslations); in the second step, these hypothesis are re-ranked by adding the score of the neurallanguage model. To better benefit from the open vocabulary models introduced in section 2.1, a morecomplex system is also used: first an MT system is used to translate from English to a simplifiedform of Czech which is reinflected. With this pipeline we expect n-best lists with more diversity andalso words unseen during the training process. The neural language models are then used to re-rankthe reinflected n-best lists.3.1 D ATAThe IWSLT16 MT task is focused on the translation of TED talks. The translation systems aretrained on parallel data from the TED ,QED andeuroparl . Our Neural language models are trainedon the same data, but training examples are sampled from these corpora given weights that arecomputed to balance between in-domain parallel data ( TED ), out-of domain parallel data, and ad-ditional monolingual data. Finally, we use the concatenation of TED.dev2010 ,TED.dev2011 andTED.tst2010 as development set, while TED.tst2012 andTED.tst2013 provide the test set.3.2 C ZECH RE-INFLECTIONIn Czech, a morphologically rich language, each lemma can take a lot of possible word forms. Mostof them won’t appear - or with a very low frequency - in training data. For an important part of thewords found in test data and unseen during training, their lemmas however can be observed but witha different morphological derivation.2http://workshop2016.iwslt.org4Under review as a conference paper at ICLR 2017A non-observed word form can’t be generated by the translation system, and one seen too rarelywon’t be used in a relevant way. To circumvent this limitation, in a similar fashion as the methoddescribed in Marie et al. (2015), each noun, pronoun and adjective is replaced in the training corporaby its lemma along with some morphological features. These word forms are considered in factoredway, where some of the POS tags are discarded to reduce the vocabulary. 
After the translation pro-cess, a cascade of Conditional Random Fields (CRF) are used to reintroduce the discarded features,such as gender, number and case, and to generate a new word form.Formally, the MT system translates English into a simplified version of Czech, that is reinflected.Within this process, the MT system can produce a n-best list, that can be extended to a nk-best list,considering for each translation hypothesis the k-best reinflected sentences given by the factorizedCRF. Intuitively, this process can introduce word forms potentially not yet seen in training data, butbased on known paradigms, which can give an advantage to language models able to build a wordrepresentation from character n-grams.3.3 B ASELINE TRANSLATION SYSTEMOur baseline is built with a Statistical Machine Translation system based on bilingual n-grams,NCODE3, described in Crego et al. (2011). We follow the same setup as in Marie et al. (2015).3.4 NLM TRAINING AND OPTIMIZATIONFirst, some comparative experiments on a smaller dataset are carried out to better understand howopen vocabulary NLM behave and to set the hyper-parameters. First trained using stochastic gra-dient descent, we observed a quite unstable training process, restricting a proper hyper-parameterschoices. We found that especially the embedding dimensions, and the activation functions usedcould make the NCE-objective hard to optimize. This was aggravated in Czech, which we foundmore difficult to work with than other morphologically complex languages, like German and Rus-sian. The use of Adagrad Duchi et al. (2010) clearly helps to solve most of these issues, but addsconsequent computation time. Following preliminary results on our work with a similar model ona different task Labeau et al. (2015), we made the choice of not implementing LSTMs to obtaincharacter-level word representations. It gave similar results, at the cost of unstable training and ex-tended computation time. We then train using batches of 128, for various context sizes, WE,CWE ,andCWE-CWE models. The ReLu activation function is used, along with an embedding size ofde= 128 . When relevant, we used a character embedding size of dc= 32 and a convolution onnc= 5-grams of characters for all experiments4. Concerning the NCE training, we sampled k= 25examples from the unigram distribution obtained from the training data, for each example sampledfrom the data. The models were implemented using C++5.3.5 R ERANKINGThe re-ranking step uses additional features to find a better translation among the n-best generatedby the decoder (in our case, n= 300 ): we use the score (probability) of WE,CWE andCWE-CWE models given to each sentence by our models as such a feature. Tuning for re-ranking wasperformed with KB-M IRACherry & Foster (2012), and evaluation using BLEU score.4 E XPERIMENTAL RESULTSThe first set of experiments investigates the impact of the padding design on the character-levelrepresentation followed by a study of the learning behavior of our proposed models and trainingcriterion. Then, the proposed models are evaluated within the MT task. 
The final set of experimentsanalyzes the issues of the model based on character-level representation for output words, in orderto propose remedies.3http://ncode.limsi.fr4Results did not differ significantly when increasing these embedding sizes, with an impact on convergencespeed and computation time.5Implementation will be made available.5Under review as a conference paper at ICLR 20174.1 T IES BETWEEN CHARACTER -LEVEL REPRESENTATION OF OUTPUT WORDSPreliminary results on smaller dataset are quite poor for models using character-level representation,and far worse when used for the output layer. We suspect that groups of characters are updated farmore together, yielding a ”contamination” of several character n-grams by very frequent short words.Indeed, our simple padding scheme, as shown in the left part of table 1, makes words sharing firstor last letter(s) systematically share at least one character n-gram: we suppose it gives the modelsmore chance to detect similarities in word forms sharing prefixes and suffixes.The representations of any of the character n-grams that are included in the frequent words willthus be re-used in a large part of the other words in the corpus. A huge number of word forms areaffected: a little more than one third of the training data shares its first character n-gram with one ofthe ten most frequent words, and a little more than one quarter shares its last.While considering varying size of character n-grams when building our word representation, asin Kim et al. (2015), would certainly help, it would increase our computation time. We thus choose toalleviate our padding scheme, as shown on the right part of table 1. We add only one character tokenat the beginning of the word, and one at the end6. While it may inhibit the capacity of the modelto build links between words sharing prefixes or suffixes, it improves results drastically, especiallywhen using character-level outputs, as shown in figure 3. This limited padding scheme is used forthe following experiments.a aale naale naaby zaaby zaaz bylaaz bylaani dvaani dvaasi trebaasi trebaTable 1: Padding for word decomposition in character 5-grams: is a character token indicatingthe beginning of the word, while indicates the end of the word. The left part of the table showsour original padding scheme, which makes very different words share character 5-grams, especiallywith short, frequent words. The right part of the table shows our alleviated padding scheme.4.2 NLM TRAININGWhile the perplexity of our language models is not our main focus, it is still related to the quantitythat our training seeks to optimize - since the NCE gradient approaches the maximum likelihoodgradient Mnih & Teh (2012). On figure 2 are shown perplexity values of each model during training.These values are based on a vocabulary containing the 250K most frequent words on the training data- it is also the vocabulary used in the model when relevant. They are computed on the developmentset after each epoch. An epoch includes 2,5M N-grams sampled from the training data. On table 2are shown the best perplexity obtained on the development set by each model, during training.Context size (Number of words) 3 6WE 227 193CWE 207 185CWE-CWE 308 243Table 2: Best perplexity reached on the development set, on a 250K output vocabulary, after 15epochs of 2,5M n-gramsTable 2 shows that a character-level word representation helps to decrease the perplexity, even ifa larger context closes the gap. 
4.2 NLM TRAINING

While the perplexity of our language models is not our main focus, it is still related to the quantity that our training seeks to optimize, since the NCE gradient approaches the maximum-likelihood gradient (Mnih & Teh, 2012). Figure 2 shows the perplexity of each model during training. These values are based on a vocabulary containing the 250K most frequent words in the training data, which is also the vocabulary used in the model when relevant. They are computed on the development set after each epoch; an epoch includes 2.5M n-grams sampled from the training data. Table 2 shows the best perplexity obtained on the development set by each model during training.

Context size (number of words)     3      6
WE                                227    193
CWE                               207    185
CWE-CWE                           308    243

Table 2: Best perplexity reached on the development set, with a 250K output vocabulary, after 15 epochs of 2.5M n-grams.

Table 2 shows that a character-level word representation helps to decrease the perplexity, even if a larger context closes the gap. To compute the perplexity of CWE-CWE models, we use the same vocabulary as for the other models, and use the 'unknown' tokens for the word- and character-based representations; hence, the computed perplexity is difficult to interpret.

Figure 4 (comprising Figures 2 and 3): Model perplexity measured on the development set during training. The context size is 3 words. Figure 3 shows models based on character-level word representations, with and without complete padding; these models are trained on the same data as in Figure 2, but with smaller epochs (250K n-grams).

The main downside of Adagrad is that the learning rate determined by accumulating the history of past gradients is usually too aggressive, and learning stops rather early. We simply reset this history every five epochs to give the model a chance to improve, which explains the flattening followed by small improvements we see for WE and CWE models. Based on previous experiments, we chose to perform this reset two times. Despite the adaptive gradient, training of CWE-CWE models remains unstable.

4.3 RERANKING

System to be re-ranked                                 Reference   CWE           CWE-CWE       WE
                                                                   n=3    n=6    n=3    n=6    n=3    n=6
En→Cz: Baseline system                                 19.6        20.1   20.3   19.8   20.0   20.0   20.2
En→Simplified Cz: Reinflected baseline system          19.5        20.0   20.2   19.6   20.1   20.1   20.0
En→Simplified Cz: 3-best reinflected baseline system   -           19.9   20.3   19.6   20.0   20.1   20.1
En→Simplified Cz: 5-best reinflected baseline system   -           19.9   20.3   19.5   19.9   20.0   20.1

Table 3: Best BLEU score obtained after n-best reranking of the hypotheses given by the translation and translation + k-best reinflection systems; n is the context size (in number of words).

The reranking results are shown in Table 3. The first line corresponds to experiments with a direct translation from English to Czech, where the n-best lists generated by the MT system are simply rescored by our models. The best result is given by the longest-context CWE model, which produces a +0.7 BLEU improvement. CWE models give on average +0.1 BLEU point compared to WE models, while CWE-CWE models are 0.2 BLEU point below. Doubling the context size consistently improves results by +0.2 BLEU point.

Experimental results on reinflected Czech seem to follow a similar trend: CWE models behave a little better than WE models, while CWE-CWE models are below. While simply reranking n-best lists is not as efficient as doing it directly in Czech, reranking n*k-best lists extended by the factorized CRF gives a small improvement, reaching +0.7 BLEU point. As a general rule, small-context models seem to have difficulties with reinflected Czech. The main advantage of the CWE model is its ability to better rerank n*k-best lists. These results suggest that, while the normalization + reinflection procedure may introduce diversity in the output to be reranked, our models are not able to draw any significant advantage from it.
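Schematically, the rescoring step of Section 3.5 applied to these n-best lists looks as follows; this is an illustrative Python sketch with our own naming, with fixed feature weights standing in for those tuned by KB-MIRA.

def rerank(nbest, weights, nlm_score):
    """Return the best hypothesis from an n-best list (n = 300 here).

    nbest:     list of (sentence, decoder_features) pairs
    weights:   feature weights; the paper tunes them with KB-MIRA
    nlm_score: callable giving a sentence's (unnormalized) NLM log-score,
               appended to the decoder features as one extra feature
    """
    def total(sentence, features):
        feats = list(features) + [nlm_score(sentence)]
        return sum(w * f for w, f in zip(weights, feats))
    return max(nbest, key=lambda hyp: total(*hyp))[0]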
4.4 ANALYSIS OF CHARACTER-LEVEL OUTPUT REPRESENTATION PERFORMANCE

Models using character-level output representations gave sub-par results on re-ranking. This is surprising, especially for re-inflected Czech: such a model is supposed to behave better on unknown words, and should thus benefit from the diversity given by generating new words. However, as we can see in Table 4, re-inflection does not add that much diversity (about 0.1% more OOV words, and about 0.001% more words never seen by the model before). Diversity is also inhibited by our training algorithm: while we train open-vocabulary models, the negative examples used with Noise-Contrastive Estimation come from a closed vocabulary.

                                          Full training vocabulary   250K-word vocabulary
Reference                                 0.131 %                    0.995 %
En→Cz (300-best)                          0.566 %                    1.173 %
En→Simplified Cz + reinflection           0.567 %                    1.263 %
En→Simplified Cz + 3-best reinflection    0.567 %                    1.277 %
En→Simplified Cz + 5-best reinflection    0.568 %                    1.285 %

Table 4: Ratio of unknown words in system outputs, measured on the test set.

This can be related to the nature of the unigram distribution used to sample negative examples. As explained in Section 4.1, it makes frequent short words completely outweigh the others in number of updates, and we are forced to reduce the ability of the model to find common morphological attributes between words in order to avoid 'contamination' of character n-gram representations.

5 RELATED WORK

There are a number of different strategies to efficiently train NNLMs with large vocabularies, such as different types of hierarchical softmax (Mnih & Hinton, 2009; Le et al., 2011), importance sampling (Bengio & Sénécal, 2003), and Noise-Contrastive Estimation (Gutmann & Hyvärinen, 2012; Mnih & Teh, 2012). Vaswani et al. (2013) showed the interest of training an NLM with NCE to re-rank k-best lists, while Devlin et al. (2014) use self-normalization. Recently, a comparative study (Chen et al., 2016) was made on how to deal with a large vocabulary. The purpose of this paper, however, is to explore models with an open vocabulary rather than a large vocabulary.

There is a surge of interest in using character-level information for a wide range of NLP tasks, with improved results in POS tagging (Santos & Zadrozny, 2014), text classification (Zhang & LeCun, 2015), parsing (Ballesteros et al., 2015) and named entity recognition (Lample et al., 2016). In language modeling, the first applications were strictly character-based and performed worse than word-level models (Mikolov et al., 2012), while showing impressive results for text generation (Sutskever et al., 2011; Graves, 2013), using bi-directional LSTMs (Graves et al., 2013). Recently, Ling et al. (2015) used bi-directional LSTMs to build word representations from characters, with improvements in language modeling and POS tagging.

The recent work of Kim et al. (2015), which uses convolutional networks and pooling to construct a word representation from character n-grams, coupled with highway networks (Srivastava et al., 2015), showed on various languages that using characters improves results on the language modeling task (for a small corpus), even more so for languages with complex morphology. A similar architecture was used by Józefowicz et al. (2016) on a larger dataset, jointly with bi-directional LSTMs, and trained with importance sampling, showing great results.

On the study of NNLMs in the context of Machine Translation, we can mention the work of Luong et al. (2015) on the effect of the number of layers on reranking n-best lists. Finally, while not directly related to our work, Luong & Manning (2016) very recently showed great improvements on a translation task by handling rare words with character-level recurrent networks in a neural translation model.
6 CONCLUSION

In this work, we addressed the challenge of designing an open-vocabulary Neural Language Model. For that purpose, word representations are estimated on-the-fly from n-grams of characters. Two kinds of models are introduced: first, NLMs using word- and character-level embeddings to represent the input context (CWE); then its extension to an open vocabulary even for the predicted words (CWE-CWE). These models were used to re-rank the outputs of translation systems from English to Czech. We also carried out experiments on translation systems from English to a simplified Czech, which is then re-inflected into Czech before re-ranking.

We obtained a slight improvement in BLEU score using a CWE model which, given the little variety in the words generated by the translation systems, leads us to suppose there is room for more. We plan to experiment with more complex translation systems, as well as with other applications, such as morphological re-inflection.

While the performance of our open-vocabulary models is to some extent disappointing, it raises questions about the learned representations that we will explore. We also plan to investigate a better-fitted noise distribution to use with NCE when training open-vocabulary models.

ACKNOWLEDGMENTS | S1F8xwxNx | Review | 4: Ok but not good enough - rejection | In this submission, an interesting approach to character-based language modeling is pursued that retains word-level representations both in the context and, optionally, also in the output. However, the approach is not new; cf. (Kim et al., 2015), as cited in the submission, as well as (Jozefowicz et al., 2016). Both Kim and Jozefowicz already go beyond this submission by applying the approach using RNNs/LSTMs. Also, Jozefowicz et al. provide a comparative discussion of different approaches to character-level modeling, which I am missing here, at least as a discussion of this existing work. The remaining novelty of the approach would then be its application to machine translation, although it remains somewhat unclear to what extent reranking of n-best lists can handle the OOV problem; the translation-related part of the OOV problem should be elaborated here. That said, some of the claims of this submission seem somewhat exaggerated, like the statement in Sec. 2.3 ("making the notion of vocabulary obsolete"), whereas the authors e.g. express doubts concerning the interpretation of perplexity without an explicit output vocabulary. For example, modeling of especially frequent word forms can still be expected to contribute, as shown e.g. in arXiv:1609.08144.
Sec. 2.3: You claim that the objective requires a finite vocabulary. This statement is only correct if the units considered are limited to full word forms. However, using subwords or even individual characters, implicitly larger and even infinite vocabularies can be covered with the log-likelihood criterion. Even though this requires a model different from the one proposed here, the corresponding statement should be qualified in this respect.
The way character embeddings are used for the output should be clarified. The description in Sec. 2.4 is not explicit enough in my view.
Concerning the configuration of NCE, it would be desirable to get a better idea of how you arrived at your specific configuration and parameterization described in Sec. 3.4.
Sec. 4.1: you might want to mention that (Kim et al. 2015) came to similar conclusions w.r.t. the performance of using character embeddings at the output, and discuss the suggestions for possible improvements given therein.
Sec. 4.2: there are ways to calculate and interpret perplexity for unknown words, cf. (Shaik et al. IWSLT 2013).
Sec. 4.4 and Table 4: the size of the full training vocabulary should be provided here.
Minor comments:
p. 2, bottom: three different input layer -> three different input layers (plural)
Fig. 1: fonts within the figure are way too small
p. 3, first item below Fig. 1: that we will note WE -> that we will denote WE
Sec. 2.3: the parameters estimation -> the parameter estimation (or: the parameters' estimation)
p. 5, first paragraph: in factored way -> in a factored way
p. 5, second paragraph: a n-best list, a nk-best list -> an n-best list, an nk-best list
Sec. 4.2, last sentence: Despite adaptive gradient, -> verb and article missing
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SygGlIBcel | ICLR.cc/2017/conference | 2017 | Opening the vocabulary of neural language models with character-level word representations | ["Matthieu Labeau", "Alexandre Allauzen"] | This paper introduces an architecture for an open-vocabulary neural language model. Word representations are computed on-the-fly by a convolution network followed by a pooling layer. This allows the model to consider any word, in the context or for the prediction. The training objective is derived from Noise-Contrastive Estimation to circumvent the lack of a vocabulary. We test the ability of our model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experimental results are promising, with a gain of up to 0.7 BLEU points. They also emphasize the difficulty and instability of training such models with character-based representations for the predicted words. | ["Natural language processing", "Deep learning"] | ABSTRACT

This paper introduces an architecture for an open-vocabulary neural language model. Word representations are computed on-the-fly by a convolution network followed by a pooling layer. This allows the model to consider any word, in the context or for the prediction. The training objective is derived from Noise-Contrastive Estimation to circumvent the lack of a vocabulary. We test the ability of our model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experimental results are promising, with a gain of up to 0.7 BLEU points. They also emphasize the difficulty and instability of training such models with character-based representations for the predicted words.

1 INTRODUCTION

Most neural language models, such as the n-gram models of Bengio et al. (2003), are word-based and rely on the definition of a finite vocabulary V. As a consequence, a look-up table is associated to V, in which each word w \in V is mapped to a vector of d_E real-valued features stored in a matrix L \in \mathbb{R}^{|V| \times d_E}. While this approach has proven successful for a variety of tasks and languages (see for instance Schwenk (2007) in speech recognition, and Le et al. (2012), Devlin et al. (2014) and Bahdanau et al. (2014) in machine translation), it induces several limitations.

For morphologically-rich languages, like Czech or German, lexical coverage is still an important issue, since there is a combinatorial explosion of word forms, most of which are hardly observed in the training data. On the one hand, growing the look-up table is not a solution, since it would increase the number of parameters without enough training examples for a proper estimation. On the other hand, rare words can be replaced by a special token; nevertheless, this acts as a word class merging very different words without any distinction, and using different word classes to handle out-of-vocabulary words (Allauzen & Gauvain, 2005) does not really solve the issue, since rare words are difficult to classify.

Moreover, for most inflected or agglutinative forms, as well as for compound words, the word structure is overlooked, wasting parameters on modeling forms that could be more efficiently handled by word decomposition. While the use of subword units (Botha & Blunsom, 2014; Sennrich et al., 2016) could improve the generalization power of such models, it relies on a proper and efficient method to induce these subword units.

To overcome these issues, we propose to investigate a word-based language model with an open vocabulary.
Since most existing models and training criteria rely on the assumption of a finite vocabulary, the definition of an open-vocabulary model, along with a training criterion, constitutes a scientific challenge. Our goal is to build word representations for every word. A word's representation is inferred on-the-fly from its character sequence, using convolution filters which implicitly capture subword patterns, as described in Section 2. The architecture is based on a neural n-gram model inspired by Bengio et al. (2003), but the idea can be extended to other kinds of models. By relaxing the normalization constraint, the objective function borrows from Noise-Contrastive Estimation (Gutmann & Hyvärinen, 2012) to allow our model to consider a possibly infinite vocabulary. This paper focusses on this challenge and its related training issues. To assess the efficiency of this approach, the experimental setup described in Section 3 uses a large-scale translation task in a reranking setting. The experimental results summarized in Section 4 show promising results as well as training issues.

2 MODEL DESCRIPTION

Word embeddings are parameters, stored in a look-up matrix L. The embedding e^{word}_w of a word w is simply the column of L corresponding to its index in the vocabulary:

e^{word}_w = [L]_w

2.1 CHARACTER-LEVEL WORD EMBEDDINGS

To infer a word embedding from its character embeddings, we use a convolution layer (Waibel et al., 1990; Collobert et al., 2011), similar to the layers used in Santos & Zadrozny (2014) and Kim et al. (2015). As illustrated in Figure 1, a word w is a character sequence \{c_1, \ldots, c_{|w|}\} represented by the embeddings \{C_{c_1}, \ldots, C_{c_{|w|}}\}, where C_{c_i} denotes the vector associated to the character c_i. A convolution filter W^{conv} \in \mathbb{R}^{d_e \times (d_c \cdot n_c)} is applied over a sliding window of n_c characters, producing local features:

x_n = W^{conv} \, (C_{c_{n-n_c+1}} : \ldots : C_{c_n})^T + b^{conv}

where x_n is a vector of size d_e obtained for each position n in the word. (Two padding character tokens are used to deal with border effects: the first is added at the beginning and the second at the end of the word, as many times as necessary to obtain the same number of windows as the length of the word; their embeddings are added to C.) The notation (C_{c_{n-1}} : C_{c_n}) denotes the concatenation of two embeddings. The i-th element of the embedding of w is the mean over the i-th elements of the feature vectors, passed through the activation function \phi:

[e^{char}_w]_i = \phi\left( \frac{1}{|w| - n_c + 1} \sum_{n=1}^{|w| - n_c + 1} [x_n]_i \right)   (1)

Using a mean after a sliding convolution window ensures that the embedding combines local features from the whole word, and that the gradient is redistributed at scale to each character n-gram. The parameters of the layer are the matrices C and W^{conv} and the bias b^{conv}.
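Concretely, the layer of Equation (1) can be rendered as follows. This is our own NumPy sketch; all names are ours, and phi is left as a parameter (the experiments in Section 3.4 use ReLU).

import numpy as np

def char_word_embedding(char_ids, C, W_conv, b_conv, nc=5, phi=np.tanh):
    """Character-level word embedding of Eq. (1).

    char_ids: indices of the (padded) characters of the word, length >= nc
    C:        character embedding matrix, shape (|V_c|, d_c)
    W_conv:   convolution filter, shape (d_e, nc * d_c)
    b_conv:   bias, shape (d_e,)
    """
    E = C[np.asarray(char_ids)]                      # (L, d_c)
    windows = np.stack([E[n:n + nc].ravel()          # concatenated nc-gram
                        for n in range(E.shape[0] - nc + 1)])
    X = windows @ W_conv.T + b_conv                  # local features x_n
    return phi(X.mean(axis=0))                       # mean-pool, then activation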
2.2 MODELS

Our model follows the classic n-gram feedforward architecture. The input of the network is an n-word context H_i = (w_{i-1}, \ldots, w_{i-N+1}), and its output is the probability P(w|H_i) for each word w \in V. The embeddings of the words in the context are concatenated and fed into a hidden layer:

h_{H_i} = \phi\left( W^{hidden} \, (e_{w_{i-1}} : \ldots : e_{w_{i-N+1}}) + b^{hidden} \right)

A second hidden layer may be added. Finally, the output layer computes scores for each word:

s_{H_i} = \exp\left( W^{out} h_{H_i} + b^{out} \right)

W^{hidden}, b^{hidden}, W^{out} and b^{out} are the parameters of the model. Like the input look-up matrix L, the output weight matrix W^{out} contains word embeddings, which are the output representations of the words in the vocabulary:

e^{out}_w = [W^{out}]_w

The output probabilities are then expressed as:

P(w|H_i) = \frac{\exp(e^{out}_w \cdot h_{H_i})}{\sum_{1 \le j \le |V|} \exp(e^{out}_j \cdot h_{H_i})}

Figure 1: CWE model architecture.

Later, we will use three different input layers to obtain word representations:

- A classic NLM using word-level embeddings only, which we will denote WE; it uses |V| \cdot d_e parameters.
- An NLM using embeddings constructed from character n-grams by convolution + pooling, which we will denote CE; it uses |V_c| \cdot d_c + d_c \cdot n_c \cdot d_e parameters.
- An NLM using a concatenation of these two types of embeddings as its word representation, which we will denote CWE.

2.3 OBJECTIVE FUNCTION FOR OPEN-VOCABULARY MODELS

Usually, such a model is trained by maximizing the log-likelihood: for each word given its context, the model parameters \theta are estimated in order to maximize, over all the n-grams observed in the training data,

LL(\theta) = \sum_{1 \le i \le |D|} \log P_\theta(w_i | H_i).

This objective function raises two important issues. For conventional word models, it implies a very costly summation imposed by the softmax activation of the output layer. More importantly, this objective requires the definition of a finite vocabulary, while the proposed model may use character-based word embeddings, especially at the output, making the notion of vocabulary obsolete.

Therefore, the parameter estimation relies on Noise-Contrastive Estimation (NCE), introduced in Gutmann & Hyvärinen (2012) and Mnih & Teh (2012). This criterion allows us to train both types of models, based on conventional word embeddings as well as character-based embeddings. The NCE objective aims to discriminate between examples sampled from the real data and examples sampled from a noise distribution. When presented with examples coming from a mixture of one sample from the data distribution P_d and k samples from the noise distribution P_n, P^H(w \in D) denotes the posterior probability that a word w, given its context H, was sampled from the training data D. This probability can be expressed as follows:

P^H(w \in D) = \frac{P^H_d(w)}{P^H_d(w) + k P_n(w)}

As suggested in Mnih & Teh (2012), P_n only depends on w here, since we chose the unigram distribution estimated on the training data. If

s^H_\theta(w) = \exp\left( e^{out}_w \cdot h_H + b^{out} \right)   (2)

denotes the non-normalized score given by the model to a specific word w, as a function of the parameters \theta and the context H, the final NCE objective function has the following form (Gutmann & Hyvärinen, 2012):

J^H_\theta = \mathbb{E}_{P^H_d}\left[ \log \frac{s^H_\theta(w)}{s^H_\theta(w) + k P_n(w)} \right] + k \, \mathbb{E}_{P_n}\left[ \log \frac{k P_n(w)}{s^H_\theta(w) + k P_n(w)} \right],

where s^H_\theta will tend to P^H_d without the need for an explicit normalization.

2.4 CHARACTER-BASED OUTPUT WEIGHTS WITH NOISE-CONTRASTIVE ESTIMATION

The output weights representing each word in the vocabulary, e^{out}, can also be replaced by embeddings computed by a convolution layer on character n-grams. In this case the model can efficiently represent, and infer a score for, any word, whether observed during training or not, while with conventional word embeddings all out-of-vocabulary words share the same representation and distribution. Instead of using a parameter matrix W^{out} to estimate the score as in Equation (2), the output representation of a word w, e^{out}_w, can be replaced by a vector e^{char-out}_w estimated on the fly from its character sequence, as described in Equation (1), using |V_c| \cdot d_c + d_c \cdot n_c \cdot d_h parameters. With this extension the model does not rely on a vocabulary anymore, hence motivating our choice of NCE: this unnormalized objective allows us to handle an open vocabulary, since we only need to compute k + 1 word representations for each training example. Models that use character-based embeddings for both input and output words are denoted CWE-CWE.
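Under this extension, the score of Equation (2) is computed with an output embedding built on the fly from characters, so only the observed word and the k noise words need an evaluation per training example. A sketch with our own naming, reusing numpy and char_word_embedding from the sketch above:

def nce_score_char_output(h, word_char_ids, C_out, W_out_conv, b_out_conv, b_out=0.0):
    """Unnormalized score s_H(w) of Eq. (2) with a character-level output
    embedding (Section 2.4). The output convolution produces vectors of the
    hidden-layer size d_h, so that e_out and h can be multiplied directly."""
    e_out = char_word_embedding(word_char_ids, C_out, W_out_conv, b_out_conv)
    return np.exp(e_out @ h + b_out)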
Moreover, with this extension, the representations of words sharing character n-grams are tied. This is an important property for letting the model generalize to unseen words. However, it can also be an issue: the limited number of updates for output representations (k + 1 words per example) has a "rich get richer" effect: the most frequent words are usually short and will receive most of the updates. They may therefore "contaminate" the representations of longer words with which they share character n-grams, even if these words are not related. This issue is further addressed in Section 4.1.

3 EXPERIMENTAL SET-UP

The impact of the models described in Section 2 is evaluated within the machine translation (MT) shared task of IWSLT-2016 (http://workshop2016.iwslt.org), from English to Czech. This language pair is highly challenging since Czech is a morphologically-rich language. Neural language models are integrated in a two-step approach: in the first step, a conventional MT system produces an n-best list (the n most likely translations); in the second step, these hypotheses are re-ranked by adding the score of the neural language model. To better benefit from the open-vocabulary models introduced in Section 2.1, a more complex system is also used: an MT system first translates from English to a simplified form of Czech, which is then reinflected. With this pipeline we expect n-best lists with more diversity, including words unseen during the training process. The neural language models are then used to re-rank the reinflected n-best lists.

3.1 DATA

The IWSLT16 MT task is focused on the translation of TED talks. The translation systems are trained on parallel data from the TED, QED and europarl corpora. Our neural language models are trained on the same data, but training examples are sampled from these corpora with weights computed to balance between in-domain parallel data (TED), out-of-domain parallel data, and additional monolingual data. Finally, we use the concatenation of TED.dev2010, TED.dev2011 and TED.tst2010 as the development set, while TED.tst2012 and TED.tst2013 provide the test set.

3.2 CZECH RE-INFLECTION

In Czech, a morphologically rich language, each lemma can take many possible word forms. Most of them will not appear in the training data, or only with a very low frequency. For an important part of the words found in the test data and unseen during training, their lemmas can however be observed, but with a different morphological derivation.

A non-observed word form cannot be generated by the translation system, and one seen too rarely will not be used in a relevant way. To circumvent this limitation, in a similar fashion to the method described in Marie et al. (2015), each noun, pronoun and adjective is replaced in the training corpora by its lemma along with some morphological features. These word forms are considered in a factored way, where some of the POS tags are discarded to reduce the vocabulary.
rkoNCSV4e | Review | 3: Clear rejection | This paper proposes an extension of neural network language (NLM) models to better handle large vocabularies. The main idea is to obtain word embeddings by combining character-level embeddings with a convolutional network.
The authors compare word embeddings (WE), character embeddings (CE), as well as combined character and word embeddings (CWE). It's quite obvious how CE or CWE embeddings can be used at the input of an NLM, but this is more tricky at the output layer. The authors propose to use NCE to handle this problem. NCE allows speeding up training, but has no impact on inference during testing: the full softmax output layer must be calculated and normalized (which can be very costly).
It was not clear to me how the network is used during TESTING with an open vocabulary. Since the NLM is only used during reranking, the unnormalized probability of the requested word could be obtained at the output. However, when reranking n-best lists with the NLM feature, different sentences are compared, and I wonder whether this works well without proper normalization.
In addition, the authors provide perplexities in Table 2 and Figures 2 and 3. This needs normalization, but it is not clear to me how this was performed. The authors mention a 250k output vocabulary. I doubt that the softmax was calculated over 250k values. Please explain.
The model is evaluated by reranking n-best lists of an SMT system for the IWSLT 2016 EN/CZ task. In the abstract, the authors mention a gain of 0.7 BLEU. I do not agree with this claim. A vanilla word-based NLM, i.e. a well-known model, already achieves a gain of 0.6 BLEU. Therefore, the new model proposed in this paper brings only an additional improvement of 0.1 BLEU. This is not statistically significant. I conjecture that a similar variation could be obtained by just training several models with different initializations, etc.
Unfortunately, the NLM models which use a character representation at the output do not work well. There are already several works which use some form of character-level representations at the input.
Could you please discuss the computational complexity during training and inference.
Minor comments
- Figures 2 and 3 have the caption "Figure 4". This is misleading.
- the format of the citations is unusual, e.g.
"While the use of subword units Botha & Blunsom (2014)"
-> "While the use of subword units (Botha & Blunsom, 2014)" | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
HJtN5K9gx | ICLR.cc/2017/conference | 2017 | Learning Disentangled Representations in Deep Generative Models | ["N. Siddharth", "Brooks Paige", "Alban Desmaison", "Jan-Willem van de Meent", "Frank Wood", "Noah D. Goodman", "Pushmeet Kohli", "Philip H.S. Torr"] | Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled and unstructured representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of structure, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets. | ["Semi-Supervised Learning", "Deep learning", "Computer vision"] | ABSTRACT

Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of dependency structures, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets.

1 INTRODUCTION

Reasoning in complex perceptual domains such as vision often requires the ability to effectively learn flexible representations of high-dimensional data, interpret the representations in some form, and understand how the representations can be used to reconstruct the data.
The ability to learn representations is a measure of how well one can capture relevant information in the data. Being able to interpret the learned representations is a measure of extracting consistent meaning in an effort to make sense of them. Having the ability to reliably reconstruct the data, a tool for predictive synthesis, can aid in model diagnosis, enable successful transfer learning, and improve generality. Such tasks are typically best addressed by generative models, as they exhibit the flexibility required to satisfy all three facets. Discriminative models primarily attend to the first two, learning flexible representations and conforming to some interpretable space (e.g. a classification domain), but don't perform the predictive synthesis task.

Probabilistic graphical models (Koller & Friedman, 2009; Murphy, 2012) are a framework for generative modelling that enables specifying a joint probability distribution on a richly semantic representation space. As good a fit as they are for specification and representation, the learning process for both the analysis and synthesis tasks typically suffers in complex perceptual domains such as vision. This is because constructing a generative model requires explicitly specifying the conditional distribution of the observed data given latent variables of interest. In practice, designing such likelihood functions by hand is incredibly challenging, and applying generative models to vision data often requires extensive and significant feature engineering to be successful. One approach to alleviate some of this hardship involves the development of deep generative models: generative models that employ neural networks to learn, automatically from data, the unknown conditional distribution in the model. They function as flexible feature learners, where the features are encoded in the posterior distribution over the latent variables in the model. Recent work exploring the effectiveness of such models (e.g. Kingma & Welling, 2014; Kulkarni et al., 2015b; Goodfellow et al., 2014) has shown considerable promise in being able to address the fundamental issues in performing this task. These models, however, are typically unsupervised, learning representations that are not directly amenable to human interpretation. Any interpretability or disentanglement of the learned representation is observed or extracted after learning has been performed, by exploring the latent space along its non-specific axes of variation. A more recent approach by Chen et al. (2016) involves imposition of information-theoretic constraints to better separate factors of variation, but here too, any interpretability is only established post facto.

Figure 1: Variation along (top) lighting and (bottom) identity axes.

While such approaches have considerable merit, particularly when faced with the absence of any information about the data, when there are aspects of variation in the data that can be characterised effectively, using and being able to express these can often be desirable. For example, when learning representations for images of house numbers, having an explicit "digit" latent variable helps capture a meaningful axis of variation, independent of other aspects. We also often want to interpret the same data in different ways depending on context: for a given image of a person, do we care about the identity, lighting, or indeed any other facets of the scene (c.f. Figure 1)?
In these situations, not being able to enforce context is something of a handicap.

In this paper, we seek to combine the best of both worlds: providing the facility to describe the structural constraints under which we would like to interpret the data, while using neural nets to capture variation for aspects we cannot, or choose not to, explicitly model. By structural constraints, we refer to the (arbitrary) dependencies one would like to employ in the recognition model, particularly in regard to there being consistent, interpretable semantics of what the variables in the model represent. In particular, we set up our framework in the context of variational autoencoders (VAE; Kingma & Welling, 2014; Rezende et al., 2014), as a means for semi-supervised learning in deep generative models (Kingma et al., 2014). We provide an alternate formulation of the variational objective and a modified training procedure which permits us to explore a wide space of recognition networks to use as probabilistic encoders. In particular, we make no mean-field assumptions for our recognition networks, allowing arbitrary hierarchical and structured-graphical-model representations, employing both continuous and discrete latent variables that can be alternately observed or left unobserved.

2 BACKGROUND AND RELATED WORK

Variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014) simultaneously train both a probabilistic encoder and decoder for a dataset x. The central idea is that an encoding z can be considered a latent variable, which allows describing a decoder as a conditional probability density p_\theta(x|z). This is typically a distribution with parameters defined as the output of a deterministic multi-layer neural network (itself with parameters \theta) which takes z as input. Placing a weak prior over z, the corresponding probabilistic encoder can be interpreted as the posterior distribution p_\theta(z|x) \propto p_\theta(x|z) p(z). Estimating parameters in this model is challenging, as is performing the posterior inference necessary to encode data. The variational Bayes approach learns an approximate encoder q_\phi(z|x), called an "inference network" or a "recognition network", which aims to approximate the posterior distribution p_\theta(z|x). Then, rather than fitting parameters \theta by maximizing the marginal likelihood p_\theta(x), the variational approach maximizes an evidence lower bound (ELBO) \mathcal{L}(\theta, \phi; x) \le \log p_\theta(x), defined with respect to both decoder and encoder parameters:

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\left[ \log p_\theta(x, z) - \log q_\phi(z|x) \right],   (1)
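A single-sample Monte Carlo estimate of the ELBO in Equation (1) can be sketched as follows; this is our own illustrative Python, assuming distribution objects in the style of torch.distributions with rsample() and log_prob() methods.

def elbo_estimate(x, encode, log_px_given_z, log_pz):
    """One-sample estimate of E_q[log p(x, z) - log q(z | x)].

    encode(x) returns a distribution object q(z | x); log_px_given_z and
    log_pz evaluate the decoder likelihood and the prior, respectively.
    """
    q = encode(x)
    z = q.rsample()  # reparametrized sample, giving a pathwise gradient
    return log_px_given_z(x, z) + log_pz(z) - q.log_prob(z)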
One line of work embeds structure into the latent space z, such that it exhibits disentangled features, through partial supervision. This is done either in terms of labelled data (Sohn et al., 2015) or via curriculum-learning schemes (Kulkarni et al., 2015b) which explicitly disentangle different factors. Kingma et al. (2014) explore semi-supervised learning in the VAE setting by factoring the latent space to learn a joint classification model q_\phi(y|x) and recognition model q_\phi(z|x). This is done by separating the latent space into structured, interpretable components y and unstructured components z, analytically marginalising out variables where discrete. Sohn et al. (2015) perform fully-supervised learning in VAEs by transforming an unconditional objective into one where the data conditions both the (unstructured) latent and the (structured) labels. In contrast to Kingma et al. (2014), the learning objective is a lower bound on the conditional marginal likelihood p_\theta(x|y), conditioning the learned VAE on the values of the labelled data. Both of these approaches effectively require the label space y to be discrete and finite. Kulkarni et al. (2015b) attend to weakly-supervised learning with VAEs through a novel training procedure that uses data clustered into equivalence classes along different axes of variation. They then constrain different parts of the latent space to account for changes along a single axis, by training with data from a particular equivalence class. An advantage of this approach is that it does not require any explicit labels on the latent space, though it does require independence assumptions on structured components, as well as carefully curated data.

An alternative approach biases towards interpretable representations by introducing structure in the prior distribution over the latent space p(z). Johnson et al. (2016) explore the combination of graphical models and VAEs using classical conjugate exponential-family statistical models as structured priors over the latent space. They consider relaxation of conjugacy constraints in the likelihood model using neural-network approximations, with a training scheme resembling traditional mean-field coordinate-ascent algorithms. The recognition network, rather than proposing values outright, proposes parameters of a conjugate-likelihood approximation to the true non-conjugate likelihood. From a specific-instance perspective, Eslami et al. (2016) use a recurrent neural network (RNN) coupled with a spatial transformer network (STN; Jaderberg et al., 2015), inducing a particular state-space representation within the approximating distribution of a VAE to parse images into scene constituents. Kulkarni et al. (2015a) also explore a specific instance related to a 3D graphics engine, by having a programmatic description provide structure and using neural networks as surrogates for the perceptual-matching problem. Andreas et al. (2016) explore a more general formulation of structure with compositional neural-network models derived from linguistic dependency parses.

3 FRAMEWORK AND FORMULATION

Our method synthesises the semi-supervised and structured-graphical-model approaches. Like Johnson et al. (2016), we incorporate graphical-model structures; however, rather than placing them within the generative model p_\theta(z, x), we incorporate them into the encoder model q_\phi(z|x). For many perceptual problems in domains such as vision, complex dependencies arise in the posterior due to deterministic interactions during rendering. A mean-field approximation in q_\phi(z|x) is a poor fit, even in situations where all the interpretable latent variables are a priori independent. This is an important reason for our choice of where we embed structure. The use of a structured, multilevel probabilistic model to define the encoder can also be interpreted as a hierarchical variational model (Ranganath et al., 2015). Interpretability is enforced by occasionally supplying labels to latent variables expected to have an interpretable meaning in the final encoded representation.
Interpretability is enforced by occasionally supplying labels to latent vari-ables expected to have a interpretable meaning in the final encoded representation.xlnfunction labelNoise()-- create the node connecting to inputlocal x = nn.Identity()()-- connect a discrete RV to inputlocal l = pp.Discrete({torch.Tensor(1,10)})({x})-- connect a std Gaussian RV to inputlocal n = pp.Gaussian({zeros(1,2), zeros(1,2)})({pp.r(x), pp.r(x)})nngraph.annotateNodes()-- return stochastic computation graphreturn pp.gModule({x}, {l, n})endFigure 2: Example graphical model and its ex-pression in our framework. Further details inthe Appendix.Our framework provides an embedded domain-specific language (EDSL) in Torch (Collobertet al., 2011), that can be used to specify a wide va-riety of graphical models in the form of a stochas-tic computation graph (Schulman et al., 2015). Anexample is shown in Figure 2. These graphicalmodels describe the structure of latent, observable,and partially observable random variables whichexist in an idealized representation space. Specif-ically, we assume a model structure of the formp(x;z;y) =p(xjz;y)p(z;y)where the like-lihoodp(xjz;y)of the data xis conditioned ona set of structured variables yandunstructuredvariables z, for which we define some appropri-3Under review as a conference paper at ICLR 2017ately structured prior p(z;y). The likelihood itself is typically unstructured (e.g. a multivariatenormal distribution). This model structure allows us to optimize the parameters learning a likeli-hood function constrained by the structured latents, but crucially does not require that these latentscompletely explain the data. The approximation to the true posterior is nominally taken to be of theform of the prior distribution q(z;yjx), with parameters but can often include additional struc-ture and alternate factorisations as appropriate. Models with such factoring are useful for situationswhere interpretability is required, or informative, for some axes of variation in the data. It is alsouseful when we wish to interpret the same data from different contexts and when we cannot con-ceivable capture all the variation in the data due to its complexity, settling for particular restrictions,as is often the case with real world data.A particular challenge here lies in choosing a manner for incorporating labelled data for some oftheyinto a training scheme. For example, choosing q(z;yjx) =qz(zjy;x)qy(yjx), de-composes the problem into simultaneously learning a classifier qy(yjx)alongside the generativemodel parameters and encoder qz(zjx;y). In the fully unsupervised setting, the contribution ofa particular data point xito the ELBO can be expressed, with minor adjustments of Equation (1), asL;;xi=Eq(z;yjxi)"logpxijz;yp(z;y)qz(z;yjxi)#: (2)a Monte Carlo approximation of which samples ysqy(yjx)andzsqz(zjy;x).By contrast, in the fully supervised setting the values yare treated as observed and become fixedinputs into the computation graph, instead of being sampled from q. When the label yis ob-served along with the data, for fixed (xi;yi)pairs, the lower bound on the conditional log-marginallikelihood logp(xjy)isLxjy;z;xi;yi=Eqz(zjxi;yi)"logpxijz;yipzjyiqz(zjxi;yi)#: (3)This quantity can be optimized directly to learn model parameters andzsimultaneously viaSGD. However, it does not contain the encoder parameters y. This difficulty was also encounteredin a related context by Kingma et al. (2014). 
This difficulty was also encountered in a related context by Kingma et al. (2014). Their solution was to augment the loss function by including an explicit additional term for learning a classifier directly on the supervised points.

An alternative approach involves extending the model using an auxiliary variable ỹ. Defining p_θ(ỹ, y, z, x) = p(ỹ | y) p_θ(x, y, z) and q_φ(ỹ, y, z | x) = p(ỹ | y) q_φ(y, z | x), with likelihood p(ỹ | y) = δ_ỹ(y), we obtain a model for which marginalisation over ỹ reproduces the ELBO in Equation (2), and treating ỹ as observed gives the supervised objective

\mathcal{L}(\theta, \phi; x_i, \tilde{y} = y_i)
  = \mathbb{E}_{q_{\phi_y}}\left[ \delta_{y_i}(y)\, \mathbb{E}_{q_{\phi_z}}\left[ \log \frac{p_\theta(x_i \mid z, y)\, p(z, y)}{q_{\phi_y}(y \mid x_i)\, q_{\phi_z}(z \mid y, x_i)} \right] \right]
  = q_{\phi_y}(y_i \mid x_i)\, \mathbb{E}_{q_{\phi_z}}\left[ \log \frac{p_\theta(x_i \mid z, y_i)\, p(z, y_i)}{q_{\phi_y}(y_i \mid x_i)\, q_{\phi_z}(z \mid y_i, x_i)} \right]
  = q_{\phi_y}(y_i \mid x_i) \left[ \mathcal{L}_{x \mid y}(\theta, \phi_z; x_i, y_i) + \log p(y_i) - \log q_{\phi_y}(y_i \mid x_i) \right]. \qquad (4)

This formulation enables a range of capabilities for semi-supervised learning in deep generative models. To begin with, it extends the ability to partially supervise latent variables to those that have continuous support, effectively learning a regressor instead of a classifier within the same formulation. Next, it automatically balances the trade-off between learning a classifier/regressor and learning the parameters of the generative model and the remainder of the recognition network. This is because the classifier q_{φy}(y | x) is always present and learned, in contrast to the hyperparameter-driven approach in Kingma et al. (2014). Finally, it allows for easy automatic implementation of a wide variety of models, separating out the labelled and unlabelled variables, to derive a unified objective over both the supervised and unsupervised cases. When unsupervised, the value of the label y_i is sampled from q_{φy}(y | x) and scored in that distribution; when supervised, it is set to the given value and scored in the same distribution. This is in the same spirit as a number of approaches, such as automatic differentiation (AD) and probabilistic-program inference, where the choice of representation enables ease of automation for a great variety of different cases.

Supervision rate. While learning with this objective, we observe data in batches that are either wholly supervised or wholly unsupervised. This typically obviates the need to construct complicated estimators for the partially observed cases, while also helping reduce variance in general over the learning and gradient computation (details of which are provided in the Appendix). Doing so also presents a choice relating to how often we observe labelled data in a complete sweep through the dataset, referred to as the supervision rate r. Practically, the rate represents a clear trade-off in learning the generative and recognition-network parameters under interpretability constraints. If the rate is too low, the supervision can be insufficient to help with disentangling representations in the recognition network; if it is too high, the generative model can overfit to just the (few) supervised data points. The rate also has a natural relation to the variance of the objective function and its gradients. As can be seen from Equation (4), an evaluation of the objective for a given y_i involves the unsupervised estimation of the conditional ELBO L_{x|y}. The rate implicitly affects the number of such estimations for any given y_i, and thus the variance of the objective with respect to that label y_i. The same argument applies for the gradients of the objective.
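Continuing the toy sketch above, the unified objective of Equation (4) and the batch-level supervision rate can be expressed as follows. The coin-flip scheduling of supervised batches and the dataset containers are illustrative assumptions, not the paper's actual training procedure (which is described in its Appendix); the elbo_* and q_y_probs functions are those of the previous sketch.

def objective(x, y_label=None):
    # Eq. (4) when a label is observed; Eq. (2) otherwise. In both cases the
    # label is scored under the same distribution q(y | x).
    if y_label is None:
        return elbo_unsupervised(x)
    q_yi = q_y_probs(x)[y_label]
    return q_yi * (elbo_supervised(x, y_label)
                   + np.log(1.0 / Ky)    # log p(y_i), assumed uniform prior
                   - np.log(q_yi))       # - log q(y_i | x)

def sweep(unlabelled, labelled, rate):
    # One pass through the data with wholly supervised or wholly unsupervised
    # "batches" (single points here); `rate` controls how often labelled data
    # is revisited during the sweep.
    total = 0.0
    for x in unlabelled:
        total += objective(x)
        if rng.random() < rate:          # occasionally look at labelled data
            xi, yi = labelled[rng.integers(len(labelled))]
            total += objective(xi, yi)
    return total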
Plug-in estimation for discrete variables. In targeting a general class of models, another particular difficulty is the ubiquity of discrete latent variables. To obtain a differentiable objective, one can either marginalise over discrete variables directly (as done by Kingma et al. (2014) and in the Stan probabilistic programming system (Stan Development Team, 2013)), which does not scale with the number of variables, or use a REINFORCE-style estimator (Williams, 1992; Mnih & Gregor, 2014), which tends to have high variance. A third approach, related to Bengio et al. (2013), is to represent discrete latent variables defined on a finite domain using a one-hot encoding, and then relax them to a continuous probability simplex when used as an input to a recognition network. For example, when y is a one-hot encoding of a discrete value used in a recognition network which factors as q(y | x) q(z | y, x), then q(y | x) is itself a discrete distribution with a probability vector ρ = g(x) for some deterministic function g. The value y is itself an input to a second function h(x, y) producing the parameters for q(z | y, x). Instead of evaluating h(x, y) at a sampled value y (or enumerating over the entire domain), we simply evaluate it at the single point ρ, noting that ρ = E_{q(y|x)}[y]. This may seem a crude approximation, replacing integration with a single evaluation and claiming E_{q(y|x)}[h(x, y)] ≈ h(x, E_{q(y|x)}[y]), which is not true in general for h(·). However, when ρ is actually a one-hot encoding, i.e. when E_{q(y|x)}[y] has a single non-zero value, the two are in fact equal. For our experiments we employ this plug-in estimator where applicable, although our framework can express any of the above methods.
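The plug-in estimator therefore amounts to feeding the probability vector ρ = E_{q(y|x)}[y] into the downstream network h in place of a sampled one-hot y. A small self-contained sketch follows, again with hypothetical linear maps in place of neural networks; note that a linear h incidentally makes the plug-in estimate exact here, as the final comment explains.

import numpy as np

rng = np.random.default_rng(1)
Dx, Ky, Dz = 4, 3, 2
Wg = rng.normal(size=(Dx, Ky))           # g(x): logits defining q(y | x)
Wh = rng.normal(size=(Dx + Ky, 2 * Dz))  # h(x, y): parameters of q(z | y, x)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def h(x, y_vec):
    out = np.concatenate([x, y_vec]) @ Wh
    return out[:Dz]                      # just the mean of q(z | y, x)

x = rng.normal(size=Dx)
rho = softmax(x @ Wg)                    # rho = E_{q(y|x)}[y]

# Plug-in estimate: a single, differentiable evaluation of h at rho ...
mean_plugin = h(x, rho)

# ... versus exact enumeration of E_{q(y|x)}[h(x, y)] over one-hot values of y.
mean_exact = sum(rho[k] * h(x, np.eye(Ky)[k]) for k in range(Ky))

# Equal here only because this h is linear in y; for a nonlinear h the two
# differ unless rho is itself one-hot, as noted in the text.
print(np.allclose(mean_plugin, mean_exact))  # True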
4 EXPERIMENTS

We evaluate our framework along a number of different axes, pertaining to its ability to (i) learn disentangled representations from a little supervision, (ii) demonstrate capability at a relevant classification/regression task, (iii) successfully also learn the generative model, and (iv) admit the use of latent spaces of varying dimensionality. Note that we do not set out to build the best possible classifier in these tasks. Instead, the classification task is a means to the end of demonstrating that the learned representation is indeed disentangled, often with minimal supervision. Also, details of neural-network architectures, graphical models for the recognition networks, dataset characteristics, and hyper-parameter settings are provided in the Appendix.

4.1 MNIST AND SVHN

To begin with, we explore the facets of our model on the standard MNIST and Google Street-View House Numbers (SVHN) datasets. We use this example to highlight how the provision of even the slightest structure, coupled with minimal supervision, is often sufficient to induce the emergence of disentangled representations in the recognition network. Figure 3 shows the structure of the generative and recognition models for this experiment.

Figure 3: (left) Generative and (right) recognition model with digit d and style n.

Figure 4: (a) Visual analogies for the MNIST data, with the inferred style latent variable fixed and the label varied. (b) Exploration in "style" space for a 2D latent Gaussian random variable. Visual analogies for the SVHN data when (c) fully supervised, and (d) supervised with just 100 labels/digit.

Figure 5: (Top) Classification error graphs over different labelled-set (per class) sizes and supervision rates for MNIST (left) and SVHN (right). Note the steep drop in error rate with just a handful of labels per class (l), seen just a few times (r). (Bottom) Classification error rates for different (per-class) labelled-set sizes l over different runs.

             MNIST                                 SVHN
  l     Ours            Kingma et al. (2014)   Ours            Kingma et al. (2014)
  10    12.2 (±1.38)    11.97 (±1.71)          -               -
  60    5.28 (±0.76)    4.94 (±0.13)           -               -
  100   4.23 (±0.68)    3.60 (±0.56)           30.32 (±2.74)   36.02 (±0.10)
  300   3.94 (±0.77)    3.92 (±0.63)           23.98 (±1.83)   -

Figure 4(a) and (c) show the effect of first transforming a given input (leftmost column) into the disentangled latent space and, with the style latent variable fixed, manipulating the digit through the generative model to produce appropriately modified reconstructions. These were derived with full supervision over a 50- and 100-dimensional Gaussian latent space for the styles, respectively. Figure 4(b) shows the transformation for a fixed digit when the style latent is varied. This was derived with a simple 2D Gaussian latent space for the style. The last part, Figure 4(d), shows the ability of the network to begin disentangling the latent space with just 100 labelled samples per digit (the training dataset size is 73000 points). Separation between style and class is clearly evident even with such little supervision.

We compute the classification accuracy of the label-prediction task with this model for both datasets, and the results are reported in the bottom of Figure 5. The results are compared to those reported in Kingma et al. (2014). For the MNIST dataset, we compare against model M2, as we run directly on the data without performing a preliminary feature-extraction step. For the SVHN dataset, we compare against model M1+M2 even though we run directly on the data, using a CNN to simultaneously learn to extract features. Confidence estimates for both were computed over 10 runs. We note that we fare comparably with these models and, in particular, when employing a CNN for feature extraction for the SVHN dataset, comfortably exceed them.

Figure 7: (Top) Exploring the generative capacity of the model. Column 1: input image. Column 2: reconstruction. Columns 3-7: reconstructions with fixed (inferred) lighting and varying identities. (Bottom) Classification and regression error rates for the identity and lighting latent variables, fully supervised, and semi-supervised with 20 distinct labelled examples per variation axis (60 total). Classification is a direct 1-out-of-38 choice, whereas for the comparison, error is a nearest-neighbour loss based on the inferred reflectance. Regression loss for lighting is measured as cosine angle distance. Results for Jampani et al. (2015) are estimated from plot asymptotes.

              Ours (Full Supervision)   Ours (Semi-Supervised)   Jampani et al. (2015)
  Identity    4.2 (±0.84)               10.3 (±2.36)             30
  Lighting    14.2 (±1.12)              28.4 (±4.12)             10

Figure 5 shows the effect of the supervision rate r on the error rate. As evident from the graph, the rate has a strong effect on how quickly one learns an effective classifier. This indicates that when labels are sparse or hard to come by, a training regime that runs largely unsupervised, even only occasionally looking at the supervised data, still learns to disentangle the latent-space representations.

4.2 INTRINSIC FACES

We next move to a harder problem involving a generative model of faces, attempting to highlight how the introduction of stronger dependency structures in the recognition model helps disentangle latents, particularly when the generative model assumes conditional independence between the latents. Here, we use the "Yale B" dataset as processed by Jampani et al. (2015) to train the models shown in Figure 6.
The primary tasks we are interested in here are (i) the ability to manipulate the inferred latents to evaluate whether they qualitatively achieve semantically meaningful disentangled representations, (ii) the classification of person identity, and (iii) the regression for lighting direction.

Figure 6: (Top) Generative and (Bottom) recognition model with identity i, lighting l, reflectance r, and shading s.

Figure 7 presents both qualitative and quantitative evaluation of the framework's ability to jointly learn the structured recognition model and the generative-model parameters. A particular point of note is that we explicitly encode "identity" as a categorical random variable, since we have knowledge about the domain and the relevant axis to explore. Since we also learn the generative model, which in the domain of the actual dataset is simply the expression (n · l) r + ε, we can afford to weakly specify the structure, allowing some neural-network component to take up the requisite slack in order to reconstruct the input. This allows us to directly address the task of predicting identity, instead of approaching it through surrogate evaluation methods (e.g. nearest-neighbour classification based on inferred reflectance).

While this formulation allows us to perform the identity-classification task, the fact that our recognition model never supervises the reflectance means that this variable can typically absorb some of the representational power of the other, semi-supervised nodes. This is particularly the case when dealing with high-dimensional latent spaces, as for reflectance and shading.

Figure 8: Generative (left) and recognition (middle) model with digit d, style n, canvas c, and count K.

  size    rate (%)   error rate (%)
  Unsup   0          32.25 (±12.97)
  500     1          6.42 (±2.15)
  500     10         4.21 (±1.29)
  1000    1          4.72 (±1.60)
  1000    10         2.98 (±0.93)

4.3 MULTI-MNIST

Finally, we run an experiment to test the ability of our framework to handle models that induce latent representations of variable dimension. We extend the simple model from the MNIST experiment by composing it with a stochastic sequence generator, to test its ability to count the number of digits in a given input image, given its ability to encode and reconstruct the digits in isolation. The graphical models employed are depicted in Figure 8.

We observe that we are indeed able to reliably learn to count, at least within the limit of up to 3 digits in the Multi-MNIST dataset. The dataset was generated directly from the MNIST dataset by manipulating the scale and positioning of the standard digits onto a combined canvas, evenly balanced across the counts and digits. The results across different supervised-set sizes and supervision rates are shown in the table in Figure 8.

5 DISCUSSION AND CONCLUSION

In this paper, we introduce a general framework for semi-supervised learning in the VAE setting that allows the incorporation of graphical models to specify a wide variety of structural constraints on the recognition network. We demonstrate its flexibility by applying it to a variety of different tasks in the visual domain, and evaluate its efficacy at learning disentangled representations in a semi-supervised manner, showing strong performance.

This framework ensures that the recognition network learns to make predictions in an interpretable and disentangled space, constrained by the structure provided by the graphical model.
The structured form of the recognition network is also typically a better fit for vision models, as it helps better capture complexities in the likelihood (usually the renderer). Given that we encode graphical models in the recognition network, while Johnson et al. (2016) encode them in the generative model in concert with VAEs, a natural extension would be to explore the ability to learn effectively when specifying structure in both by means of graphical models. This is a direction of future work we are interested in, particularly in the context of semi-supervised learning.

The framework is implemented as a Torch library (Collobert et al., 2011), enabling the construction of stochastic computation graphs which encode the requisite structure and computation. This provides another direction to explore in the future: the extension of the stochastic-computation-graph framework to probabilistic programming (Goodman et al., 2008; Wingate et al., 2011; Wood et al., 2014). Probabilistic programs go beyond the presented framework to include stochastic inference and the ability to specify arbitrary models of computation. The combination of such frameworks with neural networks has recently been studied in Ritchie et al. (2016) and Le et al. (2016), and indicates a promising avenue for further exploration. | rJUw6gmNx | Review | 6: Marginally above acceptance threshold | This paper investigates deep generative models with multiple stochastic nodes and gives them meaning by semi-supervision. From a methodological point of view, there is nothing fundamentally novel (it is very similar to the semi-supervised work of Kingma et al.; although this work sometimes has more than two latent nodes, it is not a complex extension). There is a fairly classical auxiliary-variable trick used to make sure the inference network for y is trained over all data points (by supposing y is in fact a latent variable with an observation \tilde y; the observation is y if y is observed, or uninformative for unobserved y). Alternatively, one can separate the inference used to learn the generative model (which throws out inference over y if it is observed) from an inference used to 'exercise' the model (approximate the complex p(y|x) in the model by a simpler q(y|x), effectively inferring the target p(y|x) for the data where only x is collected). Results are strong, although on simple datasets. Overall this is a well-written, interesting paper, but lacking in terms of methodological advances.
Minor:
- I feel the title is a bit too general for the content of the paper. I personally don't agree with the strong contrast made between deep generative models and graphical models (deep generative models are graphical models, but they are more typically learned and un-interpretable than classical graphical models; and having multiple stochastic variables is not exclusive to graphical models, see DRAW, Deep Kalman Filter, Recurrent VAE, etc.). The word 'structure' is a bit problematic; here, the paper seems more concerned with disentangling and semanticizing the latent representation of a generative model by supervision. It is debatable whether the models themselves have structure. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
HJtN5K9gx | ICLR.cc/2017/conference | 2017 | Learning Disentangled Representations in Deep Generative Models | ["N. Siddharth", "Brooks Paige", "Alban Desmaison", "Jan-Willem van de Meent", "Frank Wood", "Noah D. Goodman", "Pushmeet Kohli", "Philip H.S. Torr"]
| B1FyYJmVe | A variant of the semi-supervised VAE. | 6: Marginally above acceptance threshold | This paper proposed a variant of the semi-supervised VAE model that leads to a unified objective for supervised and unsupervised VAEs. This variant gives software implementations of these VAE models more flexibility in specifying which variables are supervised and which are not.
This development introduces a few extra terms compared to the original semi-supervised VAE formulation proposed by Kingma et al. (2014). From the experimental results, it seems that these terms do not do much, as the performance difference between the new formulation and Kingma et al. (2014) is not very significant (Figure 5). Therefore the benefit of the new formulation is likely to be just software-engineering flexibility and convenience.
This flexibility and convenience is nice to have, but it would be better to demonstrate a few situations where the proposed method can be applied while, for previous methods, doing so is non-trivial.
The paper's title and the way it is written made me expect a lot more than what is currently in the paper. I was expecting to see, for example, a structured hidden-variable model for the posterior (page 4, top), or a genuinely "structured interpretation" of the generative model (title), but I didn't see either of these. The main contribution of this paper (a variant of the semi-supervised VAE model) is quite far from these.
Aside from these, the plug-in estimation for discrete variables only works when the function h(x, y) is a continuous function of y. If, however, h(x, y) is not continuous in y (for example, if h takes one form when y = 1 and another form when y = 2), then the approach of using E[y] to replace y will not work. Therefore the "plug-in" estimation has its limitations.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
HJtN5K9gx | ICLR.cc/2017/conference | 2017 | Learning Disentangled Representations in Deep Generative Models | ["N. Siddharth", "Brooks Paige", "Alban Desmaison", "Jan-Willem van de Meent", "Frank Wood", "Noah D. Goodman", "Pushmeet Kohli", "Philip H.S. Torr"] | Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled and unstructured representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of structure, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets. | ["Semi-Supervised Learning", "Deep learning", "Computer vision"] | ABSTRACTDeep generative models provide a powerful and flexible means to learn com-plex distributions over data by incorporating neural networks into latent-variablemodels. Variational approaches to training such models introduce a probabilisticencoder that casts data, typically unsupervised, into an entangled representationspace. While unsupervised learning is often desirable, sometimes even necessary,when we lack prior knowledge about what to represent, being able to incorporatedomain knowledge in characterising certain aspects of variation in the data canoften help learn better disentangled representations. Here, we introduce a newformulation of semi-supervised learning in variational autoencoders that allowsprecisely this. It permits flexible specification of probabilistic encoders as directedgraphical models via a stochastic computation graph, containing both continuousand discrete latent variables, with conditional distributions parametrised by neuralnetworks. We demonstrate how the provision of dependency structures, along witha few labelled examples indicating plausible values for some components of thelatent space, can help quickly learn disentangled representations. We then evalu-ate its ability to do so, both qualitatively by exploring its generative capacity, andquantitatively by using the disentangled representation to perform classification,on a variety of models and datasets.1 I NTRODUCTIONReasoning in complex perceptual domains such as vision often requires the ability to effectivelylearn flexible representations of high-dimensional data, interpret the representations in some form,and understand how the representations can be used to reconstruct the data. 
The ability to learnrepresentations is a measure of how well one can capture relevant information in the data. Beingable to interpret the learned representations is a measure of extracting consistent meaning in aneffort to make sense of them. Having the ability to reliably reconstruct the data, a tool for predictivesynthesis, can aid in model diagnosis, enable successful transfer learning, and improve generality.Such tasks are typically best addressed by generative models, as they exhibit the flexibility requiredto satisfy all three facets. Discriminative models primarily attend to the first two, learning flexiblerepresentations and conforming to some interpretable space (e.g. classification domain) but don’tperform the predictive synthesis task.Probabilistic graphical models (Koller & Friedman, 2009; Murphy, 2012) are a framework for gen-erative modelling that enables specifying a joint probability distribution on a richly semantic repre-sentation space. As good a fit as they are for specification and representation, the learning processfor both the analysis and synthesis tasks typically suffers in complex perceptual domains such asvision. This is because constructing a generative model requires explicitly specifying the condi-tional distribution of the observed data given latent variables of interest. In practice, designing such1Under review as a conference paper at ICLR 2017likelihood functions by hand is incredibly challenging, and applying generative models to visiondata often requires extensive and significant feature engineering to be successful. One approachto alleviate some of this hardship involves the development of deep generative models: generativemodels that employ neural networks to learn, automatically from data, the unknown conditional dis-tribution in the model. They function as flexible feature learners, where the features are encoded inthe posterior distribution over the latent variables in the model. Recent work exploring the effec-tiveness of such models (e.g. Kingma & Welling (2014); Kulkarni et al. (2015b); Goodfellow et al.(2014)) has shown considerable promise in being able to address the fundamental issues in per-forming this task. These models however are typically unsupervised, learning representations thatare not directly amenable to human interpretation. Any interpretability or disentanglement of thelearned representation is observed or extracted after learning has been performed, by exploring thelatent space along its non-specific axes of variation. A more recent approach by Chen et al. (2016)involves imposition of information-theoretic constraints to better separate factors of variation, buthere too, any interpretability is only established post facto.Figure 1: Variation along (top) light-ing and (bottom) identity axes.While such approaches have considerable merit, particu-larly when faced with the absence of any information aboutthe data, when there are aspects of variation in the data thatcanbe characterised effectively, using and being able toexpress these can often be desirable. For example, whenlearning representations for images of house numbers, hav-ing an explicit “digit” latent variable helps capture a mean-ingful axis of variation, independent of other aspects. Wealso often want to interpret the same data in different waysdepending on context: for a given image of a person, do wecare about the identity, lighting, or indeed any other facetsof the scene (c.f. Figure 1). 
In these situations, not being able to enforce context is something of a handicap.

In this paper, we seek to combine the best of both worlds: providing the facility to describe the structural constraints under which we would like to interpret the data, while using neural nets to capture variation for aspects we cannot, or choose not to, explicitly model. By structural constraints, we refer to the (arbitrary) dependencies one would like to employ in the recognition model, particularly in regard to there being consistent interpretable semantics of what the variables in the model represent. In particular, we set up our framework in the context of variational autoencoders (VAE; Kingma & Welling (2014); Rezende et al. (2014)), as a means for semi-supervised learning in deep generative models (Kingma et al., 2014). We provide an alternate formulation of the variational objective and a modified training procedure which permits us to explore a wide space of recognition networks to use as probabilistic encoders. In particular, we make no mean-field assumptions for our recognition networks, allowing arbitrary hierarchical and structured-graphical-model representations, employing both continuous and discrete latent variables that can be alternately observed, or left unobserved.

2 BACKGROUND AND RELATED WORK

Variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014) simultaneously train both a probabilistic encoder and decoder for a dataset x. The central idea is that an encoding z can be considered a latent variable, which allows describing a decoder as a conditional probability density $p_\theta(x \mid z)$. This is typically a distribution with parameters defined as the output of a deterministic multi-layer neural network (itself with parameters $\theta$) which takes z as input. Placing a weak prior over z, the corresponding probabilistic encoder can be interpreted as the posterior distribution $p_\theta(z \mid x) \propto p_\theta(x \mid z)\,p(z)$. Estimating parameters $\theta$ in this model is challenging, as is performing the posterior inference necessary to encode data. The variational Bayes approach learns an approximate encoder $q_\phi(z \mid x)$, called an "inference network" or a "recognition network", which aims to approximate the posterior distribution $p_\theta(z \mid x)$. Then, rather than fitting parameters by maximizing the marginal likelihood $p_\theta(x)$, the variational approach maximizes an evidence lower bound (ELBO) $\mathcal{L}(\theta, \phi; x) \le \log p_\theta(x)$, defined with respect to both decoder and encoder parameters:

$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x, z) - \log q_\phi(z \mid x)\right].$  (1)

One line of work to embed structure into the latent space z, such that it exhibits disentangled features, is through partial supervision. This is either in terms of labelled data (Sohn et al., 2015) or curriculum-learning schemes (Kulkarni et al., 2015b) which explicitly disentangle different factors. Kingma et al. (2014) explore semi-supervised learning in the VAE setting by factoring the latent space to learn a joint classification model $q_\phi(y \mid x)$ and recognition model $q_\phi(z \mid x)$. This is done by separating the latent space into structured, interpretable components y and unstructured components z, analytically marginalising variables out where discrete. Sohn et al. (2015) perform fully-supervised learning in VAEs by transforming an unconditional objective into one where the data conditions both the (unstructured) latent and the (structured) labels.
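As an illustrative aside on Eq. (1): a minimal single-sample Monte Carlo estimate of the ELBO can be sketched in plain Python/NumPy. This is our own sketch, not code from the paper; it assumes a diagonal-Gaussian encoder, a Bernoulli decoder, a standard-normal prior, and the reparameterisation trick, and the toy weights and names are ours.

import numpy as np

rng = np.random.default_rng(0)
D_x, D_z = 6, 2
W_mu, W_lv, W_dec = (rng.normal(scale=0.1, size=s)
                     for s in [(D_z, D_x), (D_z, D_x), (D_x, D_z)])

def elbo_one_sample(x):
    # encoder q_phi(z|x): diagonal Gaussian with parameters computed from x
    mu, log_var = W_mu @ x, W_lv @ x
    eps = rng.standard_normal(D_z)
    z = mu + np.exp(0.5 * log_var) * eps            # reparameterised sample
    log_q = -0.5 * np.sum(np.log(2 * np.pi) + log_var
                          + (z - mu) ** 2 / np.exp(log_var))
    # prior p(z): standard normal
    log_p_z = -0.5 * np.sum(np.log(2 * np.pi) + z ** 2)
    # decoder p_theta(x|z): Bernoulli with logits computed from z
    p = 1.0 / (1.0 + np.exp(-(W_dec @ z)))
    log_p_x = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return log_p_x + log_p_z - log_q                # one sample of Eq. (1)

print(elbo_one_sample(rng.integers(0, 2, D_x).astype(float)))

Averaging this estimate over samples and data points gives the quantity maximized with respect to both $\theta$ and $\phi$ by stochastic gradient ascent; with that in mind, we return to the semi-supervised variants discussed above.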
In contrast to Kingma et al.(2014), the learning objective is a lower bound on the conditional marginal likelihood p(xjy),conditioning the learned V AE on the values of the labelled data. Both of these approaches effec-tively require the label space yto be discrete and finite. Kulkarni et al. (2015b) attend to weakly-supervised learning with V AEs through a novel training procedure that uses data clustered intoequivalence classes along different axes of variation. They then constrain different parts of the latentspace to account for changes along a single axis, by training with data from a particular equivalenceclass. An advantage of this approach is not requiring any explicit labels on the latent space, though itdoes require independence assumptions on structured components, as well as carefully curated data.An alternative approach biases towards interpretable representations by introducing structure in theprior distribution over the latent space p(z). Johnson et al. (2016) explore the combination of graph-ical models and V AEs using classical conjugate exponential family statistical models as structuredpriors over the latent space. They consider relaxation of conjugacy constraints in the likelihoodmodel using neural network approximations, with a training scheme resembling traditional mean-field coordinate ascent algorithms. The recognition network, rather than proposing values outright,proposes parameters of a conjugate-likelihood approximation to the true non-conjugate likelihood.From a specific-instance perspective, Eslami et al. (2016) use a recurrent neural network (RNN)coupled with a spatial transformer network (STN, Jaderberg et al. (2015)) inducing a particularstate-space representation with the approximation distribution of a V AE to parse images into sceneconstituents. Kulkarni et al. (2015a) also explore a specific instance related to a 3D graphics engineby having a programmatic description provide structure using neural networks as surrogates for theperceptual-matching problem. Andreas et al. (2016) explore a more general formulation of structurewith compositional neural network models derived from linguistic dependency parses.3 F RAMEWORK AND FORMULATIONOur method synthesises the semi-supervised and structured-graphical-model approaches. Like John-son et al. (2016), we incorporate graphical model structures, however rather than placing them withinthe generative model p(z;x), we incorporate them into the encoder model q(zjx). For manyperceptual problems in domains such as vision, complex dependencies arise in the posterior due todeterministic interactions during rendering. A mean-field approximation in q(zjx)is a poor fit,even in situations where all the interpretable latent variables are a priori independent. This is animportant reason for our choice of where we embed structure. The use of a structured, multilevelprobabilistic model to define the encoder can also be interpreted as a hierarchical variational model(Ranganath et al., 2015). 
Interpretability is enforced by occasionally supplying labels to latent vari-ables expected to have a interpretable meaning in the final encoded representation.xlnfunction labelNoise()-- create the node connecting to inputlocal x = nn.Identity()()-- connect a discrete RV to inputlocal l = pp.Discrete({torch.Tensor(1,10)})({x})-- connect a std Gaussian RV to inputlocal n = pp.Gaussian({zeros(1,2), zeros(1,2)})({pp.r(x), pp.r(x)})nngraph.annotateNodes()-- return stochastic computation graphreturn pp.gModule({x}, {l, n})endFigure 2: Example graphical model and its ex-pression in our framework. Further details inthe Appendix.Our framework provides an embedded domain-specific language (EDSL) in Torch (Collobertet al., 2011), that can be used to specify a wide va-riety of graphical models in the form of a stochas-tic computation graph (Schulman et al., 2015). Anexample is shown in Figure 2. These graphicalmodels describe the structure of latent, observable,and partially observable random variables whichexist in an idealized representation space. Specif-ically, we assume a model structure of the formp(x;z;y) =p(xjz;y)p(z;y)where the like-lihoodp(xjz;y)of the data xis conditioned ona set of structured variables yandunstructuredvariables z, for which we define some appropri-3Under review as a conference paper at ICLR 2017ately structured prior p(z;y). The likelihood itself is typically unstructured (e.g. a multivariatenormal distribution). This model structure allows us to optimize the parameters learning a likeli-hood function constrained by the structured latents, but crucially does not require that these latentscompletely explain the data. The approximation to the true posterior is nominally taken to be of theform of the prior distribution q(z;yjx), with parameters but can often include additional struc-ture and alternate factorisations as appropriate. Models with such factoring are useful for situationswhere interpretability is required, or informative, for some axes of variation in the data. It is alsouseful when we wish to interpret the same data from different contexts and when we cannot con-ceivable capture all the variation in the data due to its complexity, settling for particular restrictions,as is often the case with real world data.A particular challenge here lies in choosing a manner for incorporating labelled data for some oftheyinto a training scheme. For example, choosing q(z;yjx) =qz(zjy;x)qy(yjx), de-composes the problem into simultaneously learning a classifier qy(yjx)alongside the generativemodel parameters and encoder qz(zjx;y). In the fully unsupervised setting, the contribution ofa particular data point xito the ELBO can be expressed, with minor adjustments of Equation (1), asL;;xi=Eq(z;yjxi)"logpxijz;yp(z;y)qz(z;yjxi)#: (2)a Monte Carlo approximation of which samples ysqy(yjx)andzsqz(zjy;x).By contrast, in the fully supervised setting the values yare treated as observed and become fixedinputs into the computation graph, instead of being sampled from q. When the label yis ob-served along with the data, for fixed (xi;yi)pairs, the lower bound on the conditional log-marginallikelihood logp(xjy)isLxjy;z;xi;yi=Eqz(zjxi;yi)"logpxijz;yipzjyiqz(zjxi;yi)#: (3)This quantity can be optimized directly to learn model parameters andzsimultaneously viaSGD. However, it does not contain the encoder parameters y. This difficulty was also encounteredin a related context by Kingma et al. (2014). 
Their solution was to augment the loss function byincluding an explicit additional term for learning a classifier directly on the supervised points.An alternative approach involves extending the model using an auxiliary variable ~y. Definingp(~y;y;zjx) =p(~yjy)p(x;y;z)andq(~y;y;zjx) =p(~yjy)q(y;zjx), with likelihoodp(~yjy) =~y(y), we obtain a model for which marginalization over ~yreproduces the ELBOin Equation (2), and treating ~yas observed gives the supervised objectiveL;;xi~y=yi=Eqy"yi(y)Eqz"logpxijz;yp(z;y)qy(yjxi)qz(zjy;xi)##=qyyijxiEqz"logpxijz;yipz;yiqy(yijxi)qz(zjyi;xi)#=qyyijxiLxjy;z;xi;yi+ logpyilogqyyijxi:(4)This formulation enables a range of capabilities for semi-supervised learning in deep generativemodels. To begin with, it extends the ability to partially-supervise latent variables to those thathave continuous support. This effectively learns a regressor instead of a classifier in the same for-mulation. Next, it automatically balances the trade-off between learning a classifier/regressor andlearning the parameters of the generative model and the remainder of the recognition network. Thisis due to the fact that the classifier qy(yjx)is always present and learned, and is contrast to thehyperparameter-driven approach in Kingma et al. (2014). Finally, it allows for easy automatic im-plementation of a wide variety of models, separating out the labelled and unlabelled variables, toderive a unified objective over both the supervised and unsupervised cases. When unsupervised, thevalue of the label yiis sampled from qy(yjx)and scored in that distribution, and when super-vised, it is set to the given value, and scored in the same distribution. This is in the same spirit as a4Under review as a conference paper at ICLR 2017number of approaches such as Automatic Differentiation (AD) and Probabilistic Program inference,where the choice of representation enables ease of automation for a great variety of different cases.Supervision rate. While learning with this objective, we observe data in batches that are eitherwholly supervised, or wholly unsupervised. This typically obviates the need to construct compli-cated estimators for the partially observed cases, while also helping reduce variance in general overthe learning and gradient computation (details of which are provided in the Appendix). Doing soalso presents a choice relating to how often we observe labelled data in a complete sweep throughthe dataset, referred to as the supervision rate r. Practically, the rate represents a clear trade-off inlearning the generative and recognition-network parameters under interpretability constraints. If therate is too low, the supervision can be insufficient to help with disentangling representation in therecognition network, and if too high, the generative model can overfit to just the (few) superviseddata points. The rate also has a natural relation to the variance of the objective function and its gra-dients. As can be seen from Equation (4), an evaluation of the objective for a given yiinvolves theunsupervised estimation of the conditional ELBO Lxjy. The rate implicitly affects the number ofsuch estimations for any given yiand thus the variance of the objective with respect to that label yi.The same argument applies for the gradients of the objective.Plug-in estimation for discrete variables. In targeting a general class of models, another par-ticular difficulty is the ubiquity of discrete latent variables. 
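Before turning to that difficulty, the alternating training regime implied by Eq. (4) and the supervision rate r can be sketched as follows. This is our schematic reading, not the authors' code: cond_elbo, log_q_y, log_p_y, and sample_q_y are hypothetical stand-ins for single-sample estimates of the corresponding model terms in Eqs. (2)-(4).

import math, random

random.seed(0)

def batch_objective(batch, supervised, m):
    # m bundles hypothetical model callables: cond_elbo(x, y) estimates the
    # conditional ELBO of Eq. (3); log_q_y / log_p_y score the label;
    # sample_q_y(x) draws a label from the recognition network.
    total = 0.0
    for x, y in batch:
        if supervised:
            # Eq. (4): weight the conditional ELBO by q_y(y_i | x_i)
            log_w = m["log_q_y"](x, y)
            total += math.exp(log_w) * (m["cond_elbo"](x, y)
                                        + m["log_p_y"](y) - log_w)
        else:
            y = m["sample_q_y"](x)               # label treated as latent
            total += (m["cond_elbo"](x, y) + m["log_p_y"](y)
                      - m["log_q_y"](x, y))
    return total / len(batch)

def train_epoch(m, step, sup_batches, unsup_batches, rate):
    # visit a wholly supervised batch with probability `rate` after each
    # wholly unsupervised one, as described above
    for ub in unsup_batches:
        step(batch_objective(ub, False, m))
        if sup_batches and random.random() < rate:
            step(batch_objective(random.choice(sup_batches), True, m))

m = {"cond_elbo": lambda x, y: -1.0, "log_q_y": lambda x, y: -2.3,
     "log_p_y": lambda y: -2.3, "sample_q_y": lambda x: 0}
train_epoch(m, step=lambda obj: None, sup_batches=[[(0.0, 1)]],
            unsup_batches=[[(0.0, None)]], rate=0.1)

Note how the $q_{\phi_y}(y_i \mid x_i)$ factor in the supervised branch is what automatically balances classifier learning against the generative terms, as discussed above.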
To obtain a differentiable objective,one can either marginalize over discrete variables directly (as done by Kingma et al. (2014) andin the STAN probabilistic programming system (Stan Development Team, 2013)), which doesn’tscale over numbers of variables, or use a REINFORCE-style estimator (Williams, 1992; Mnih &Gregor, 2014), which tends to have high variance. A third approach, related to Bengio et al. (2013),is to represent discrete latent variables defined on a finite domain using a one-hot encoding, thenrelaxing them to a continuous probability simplex when used as an input to a recognition network.For example, when yis a one-hot encoding of a discrete value used in a recognition network whichfactors asq(yjx)q(zjy;x), thenq(yjx)is itself a discrete distribution with a probabilityvector=g(x)for some deterministic function g. The value yis itself an input to a secondfunctionh(x;y)producing the parameters for q(zjy;x). Instead of evaluating h(x;y)at asampled value y(or enumerating over the entire domain), we simply evaluate it at the single point ,noting that=Eq(yjx)[y]. This may seem a crude approximation, replacing integration with asingle evaluation, claiming Eq(yjx)[h(x;y)]h(x;Eq(yjx)[y]);which is not true in generalforh(). However, if is actually a one-hot encoding, i.e., when Eq(yjx)[y]has a single non-zerovalue, they are in fact equal. For our experiments we employ this plug-in estimator where applicable,although our framwork can express any of the above methods.4 E XPERIMENTSWe evaluate our framework on along a number of different axes, pertaining to its ability to (i) learndisentangled representation from a little supervision, (ii) demonstrate capability at a relevant clas-sification/regression task, (iii) successfully also learn the generative model, and (iv) admit the useof latent spaces of varying dimensionality Note that we do not set out to build the best possibleclassifier in these tasks. Instead, the classification task is a means to the end of demonstrating thatthe learned representation is indeed disentangled, often with minimal supervision. Also, details ofneural network architectures, graphical models for the recognition networks, dataset characteristics,and hyper-parameter settings are provided in the Appendix.4.1 MNIST AND SVHNxndxndFigure 3: (left) Generative and(right) recognition model withdigitdand stylen.To begin with, we explore the facets of our model in thestandard MNIST and Google Street-View House Numbers(SVHN) datasets. We use this example to highlight how theprovision of even the slightest structure, coupled with minimalsupervision, in often sufficient to induce the emergence of dis-entangled representations in the recognition network. Figure 3shows the structure of the generative and recognition modelsfor this experiment.5Under review as a conference paper at ICLR 2017(a) (b) (c) (d)Figure 4: (a) Visual analogies for the MNIST data, with inferred style latent variable fixed andthe label varied. (b) Exploration in “style” space for a 2D latent gaussian random variable. Visualanalogies for the SVHN data when (c) fully supervised, and (d) supervised with just 100 labels/digit.MNIST SVHNl Ours Kingma et al. (2014) Ours Kingma et al. (2014)10 12.2 (1.38) 11.97 (1.71) - -60 5.28 (0.76) 4.94 (0.13) - -100 4.23 (0.68) 3.60 (0.56) 30.32 ( 2.74) 36.02 (0.10)300 3.94 (0.77) 3.92 (0.63) 23.98 ( 1.83) -Figure 5: (Top) Classification error graphs over different labelled set (per class) sizes and supervisionrates for MNIST (left) and SVHN (right). 
Note the steep drop in error rate with just a handful oflabels per class ( l), seen just a few times ( r). (Bottom) Classification error rates for different (per-class) labelled-set sizes lover different runs.Figure 4(a) and (c) show the effect of first transforming a given input (leftmost column) into thedisentangled latent space, and with the style latent variable fixed, manipulating the digit through thegenerative model to produce appropriately modified reconstructions. These were both derived withfull supervision over a 50 and 100 dimensional Gaussian latent space for the styles, respectively.Figure 4(b) shows the transformation for a fixed digit, when the style latent is varied. This wasderived with a simple 2D Gaussian latent space for the style. The last part, Figure 4(d) shows theability of the network to begin disentangling the latent space with just 100 labelled samples per digit(training dataset size is 73000 points). Separation between style and class is clearly evident evenwith such little supervision.We compute the classification accuracy of the label-prediction task with this model for both datasets,and the results are reported in the bottom of Figure 5. The results are compared to those reportedin Kingma et al. (2014). For the MNIST dataset, we compare against model M2 as we run directlyon the data, without performing a preliminary feature-extraction step. For the SVHN dataset, wecompare against model M1+M2 even though we run directly on the data, using a CNN to simultane-ously learn to extract features. Confidence estimates for both were computed off of 10 runs. We notethat we fare comparably with these models, and in particular, when employing a CNN for featureextraction for the SVHN dataset, comfortably exceed them.6Under review as a conference paper at ICLR 2017Ours (Full Supervision) Ours (Semi-Supervised) Jampani et al. (2015)Identity 4.2 ( 0.84) 10.3 ( 2.36) 30Lighting 14.2 ( 1.12) 28.4 ( 4.12) 10Figure 7: (Top) Exploring the generative capacity of the model. Column 1: input image. Col-umn 2: reconstruction. Columns 3-7: reconstructions with fixed (inferred) lighting and varyingidentities. (Bottom) Classification and regression error rates for the identity and lighting latent vari-ables, fully-supervised, and semi-supervised with 20 distinct labelled example per variation axis (60total). Classification is a direct 1-out-of-38 choice, whereas for the comparison, error is a nearest-neighbour loss based on the inferred reflectance. Regression loss for lighting is measured as cosineangle distance. Results for Jampani et al. (2015) are estimated from plot asymptotes.Figure 5 shows the effect of the supervision rate ron the error rate. As evident from the graph, therate has a strong affect on how quickly one learns an effective classifier. This indicates that whenlabels are sparse or hard to come by, a training regime that runs largely unsupervised, even only oc-casionally looking at the supervised data, still learns to disentangle the latent-space representations.4.2 I NTRINSIC FACESWe next move to a harder problem involving a generative model of faces, attempting to highlighthow the introduction of stronger dependency structures in the recognition model helps disentanglelatents, particularly when the generative model assumes conditional independence between the la-tents. Here, we use the “Yale B” dataset as processed by Jampani et al. (2015) to train the modelsshown in Figure 6. 
The primary tasks we are interested in here are (i) the ability to manipulate theinferred latents to evaluate if they qualitatively achieve semantically meaningful disentangled repre-sentations, (ii) the classification of person identity, and (iii) the regression for lighting direction.xi` s rxi`rsFigure 6: (Top) Generative and (Bottom)recognition model with identity i, light-ingl, reflectance r, and shading s.Figure 7 presents both qualitative and quantitative eval-uation of the framework to jointly learn both the struc-tured recognition model, and the generative model pa-rameters. A particular point of note is that we explic-itly encode “identity” as a categorical random variablesince we have knowledge about the domain and the rel-evant axis to explore. Since we also learn the generativemodel, which in the domain of the actual dataset is sim-ply the expression (n:l)r+, we can afford to weaklyspecify the structure allowing for some neural-networkcomponent to take up the requisite slack in order to re-construct the input. This allows us to directly addressthe task of predicting identity, instead of approachingit through surrogate evaluation methods (e.g. nearest-neighbour classification based on inferred reflectance).While this formulation allows us to to perform the identity classification task, the fact that ourrecognition model never supervises the reflectance means that the variable can typically absorbsome of the representational power of other, semi-supervised nodes. This is particularly the casewhen dealing with high-dimensional latent spaces as for reflectance and shading.7Under review as a conference paper at ICLR 2017xcknkdkKKmaxxKdknkckKsize rate (%) error rate (%)Unsup 0 32.25 ( 12.97)500 1 6.42 ( 2.15)500 10 4.21 ( 1.29)1000 1 4.72 ( 1.60)1000 10 2.98 ( 0.93)Figure 8: Generative (l) and recognition (m) model with digit d, stylen, canvasc, and countK.4.3 M ULTI -MNISTFinally, we run an experiment to test the ability of our framework to handle models that induce latentrepresentations of variable dimension. We extend the simple model from the MNIST experiment bycomposing it with a stochastic sequence generator, to test its ability to count the number of digits ina given input image, given its ability to encode and reconstruct the digits in isolation. The graphicalmodels employed are depicted in Figure 8.We observe that we are indeed able to reliable learn to count, at least within the limits of upto 3digits in the multi-mnist dataset. The dataset was generated directly from the MNIST dataset by ma-nipulating the scale and positioning of the standard digits into a combined canvas, evenly balancedacross the counts and digits. The results across different supervised set sizes and supervision ratesare shown in the table in Figure 8.5 D ISCUSSION AND CONCLUSIONIn this paper, we introduce a general framework for semi-supervised learning in the V AE setting thatallows incorporation of graphical models to specify a wide variety of structural constraints on therecognition network. We demonstrate its flexibility by applying it to a variety of different tasks in thevisual domain, and evaluate its efficacy at learning disentangled representations in a semi-supervisedmanner, showing strong performance.This framework ensures that the recognition network learns to make predictions in an interpretableand disentangled space, constrained by the structure provided by the graphical model. 
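Returning briefly to the multi-MNIST data above: a hypothetical re-creation of the generator (scaled digits placed at random positions on a larger canvas, labelled with their count) could look like the following. The canvas size, scale range, and stand-in digit arrays are our assumptions, not details from the paper.

import numpy as np

rng = np.random.default_rng(0)

def make_multi_mnist(digits, canvas_hw=(64, 64), k_max=3):
    # place K in {1..k_max} scaled digits on a blank canvas; return the
    # canvas together with the count label K
    H, W = canvas_hw
    canvas = np.zeros((H, W))
    k = int(rng.integers(1, k_max + 1))
    for _ in range(k):
        d = digits[rng.integers(len(digits))]
        s = int(rng.integers(14, 29))                 # random scale
        idx = np.arange(s) * d.shape[0] // s          # nearest-neighbour resize
        small = d[idx][:, idx]
        r, c = rng.integers(0, H - s), rng.integers(0, W - s)
        canvas[r:r+s, c:c+s] = np.maximum(canvas[r:r+s, c:c+s], small)
    return canvas, k

fake_digits = rng.random((10, 28, 28))                # stand-in for MNIST digits
img, count = make_multi_mnist(fake_digits)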
The structured form of the recognition network is also typically a better fit for vision models, as it helps better capture complexities in the likelihood (usually the renderer). Given that we encode graphical models in the recognition network, and Johnson et al. (2016) encode them in the generative model in concert with VAEs, a natural extension would be to explore how effectively one can learn when structure is specified in both by means of graphical models. This is a direction of future work we are interested in, particularly in the context of semi-supervised learning.

The framework is implemented as a Torch library (Collobert et al., 2011), enabling the construction of stochastic computation graphs which encode the requisite structure and computation. This provides another direction to explore in the future: the extension of the stochastic computation graph framework to probabilistic programming (Goodman et al., 2008; Wingate et al., 2011; Wood et al., 2014). Probabilistic programs go beyond the presented framework to include stochastic inference and the ability to specify arbitrary models of computation. The combination of such frameworks with neural networks has recently been studied in Ritchie et al. (2016) and Le et al. (2016), and indicates a promising avenue for further exploration. | HyW_r-fEl | 5: Marginally below acceptance threshold | This paper introduces a variant of the semi-supervised variational auto-encoder (VAE) framework. The authors present a way of introducing structure (observed variables) inside the recognition network.
I find that the presentation of the inference with auxiliary variables could be avoided, as it makes the exposition unnecessarily complicated. Specifically, the expressions with auxiliary variables are helpful for devising a unified implementation, but modeling-wise one can get the same model without these auxiliary variables and recover a minimal extension of the VAE where part of the generating space is actually observed. The observed variables mean that the posterior needs to also condition on them, so as to incorporate the information they convey. The way this is done in this paper is actually not very different from Kingma et al. 2014, and I am surprised that the experiments show a large deviation between these two methods' results. Given the similarity of the models, it'd be useful if the authors could give a possible explanation for the superiority of their method compared to Kingma et al. 2014. By the way, I was wondering if the experimental setup is the same as in Kingma et al. 2014 for the results of Fig. 5 (bottom): the authors mention that they use CNNs for feature extraction, but from the paper it's not clear if Kingma et al. do the same.
On a related note, I was wondering the same for the comparison with Jampani et al. 2015. In particular, is that model also using the same rate of supervision for a fair comparison?
The experiment in section 4.3 is interesting and demonstrates a useful property of the approach.
The discussion of the supervision rate (and the pre-review answer) is helpful in giving some insight into what constitutes a successful training protocol for semi-supervised learning.
Overall, the paper is interesting, but the title and introduction made me expect something more from it. From the title I expected a method for interpreting general deep generative models; instead, the described approach is a semi-supervised variant of the VAE. Naturally, including labelled examples disentangles the latent space, but this is a general property of any semi-supervised probabilistic model and not unique to the approach described here. Moreover, from the intro I expected to see a more general approximation scheme for the variational posterior (similar to Ranganath et al. 2015, which truly allows very flexible distributions); however, this is not the case here.
Given the above, the contributions of this paper are in defining a slight variant of the semi-supervised VAE, and (perhaps more importantly) formulating it in a way that is amenable to easier automation in terms of software. But methodologically there is not much contribution to the current literature. The authors mention that they plan to extend the framework in the probabilistic programming setting. It seems indeed that this would be a very promising and useful extension.
Minor note: three of Kingma's papers are all cited in the main text as Kingma et al. 2014, causing confusion. I suggest using Kingma et al. 2014a etc.
| 3: The reviewer is fairly confident that the evaluation is correct |
|
BkSqjHqxg | ICLR.cc/2017/conference | 2017 | Skip-graph: Learning graph embeddings with an encoder-decoder model | ["John Boaz Lee", "Xiangnan Kong"] | In this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of- the-art techniques. | ["Unsupervised Learning", "Deep learning"] | ABSTRACTIn this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and basedon supervised techniques. We study a method for obtaining a generic featurerepresentation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processingdomain to learn feature representations of sentences. In our proposed approach,we train the encoder-decoder model to predict the random walk sequence of neigh-boring regions in a graph given a random walk along a particular region. The goalis to map subgraphs — as represented by their random walks — that are struc-turally and functionally similar to nearby locations in feature space. We evaluatethe learned graph vectors using several real-world datasets on the graph classifi-cation task. The proposed model is able to achieve good results against state-of-the-art techniques.1 I NTRODUCTIONThe skip-gram model (Mikolov et al., 2013) was originally introduced in the natural language pro-cessing (NLP) domain as a model for learning vector representations of words. Recently, it hasbeen adapted successfully to solve the problem of learning node representations for graph-structureddata (Grover & Leskovec, 2016; Perozzi et al., 2014). The learned vectors can then be used directlyin problems such as link prediction (Miller et al., 2009), or clustering of nodes on a graph (Vinayaket al., 2014). However, in many real-world applications we need to learn a feature representation forthe entire graph instead of representations for just the nodes in the graph. In this paper, we studythe graph representation learning problem, where the task is to learn a feature representation for anygraph object. We propose a novel solution based upon the encoder-decoder model.Graph-structured data can be found in many different domains including biology, chemistry, andthe study of social networks. For instance, in chemistry, chemical compounds can be representedas molecular graphs (Duvenaud et al., 2015). In social network analysis, the interaction amongdifferent entities of a community can be captured using a social graph (Yanardag & Vishwanathan,2015). 
A natural question that arises in these scenarios is what the structure of a graph tells usabout the properties of the graph ( e.g., what does the molecular graph tell us about the compound’saqueous solubility, or its anti-cancer activity?). In other words, we are often interested in performingmachine learning tasks on graph-structured data. Many techniques have been proposed to solve thisproblem, these include learning graph kernels (Vishwanathan et al., 2010), identifying discriminativesubgraphs (Kong et al., 2011), using specially designed neural network models such as the graphneural network (Scarselli et al., 2009), and learning the graph fingerprint (Duvenaud et al., 2015).Most of the approaches for learning graph features are supervised and task-specific. Our approach,on the other hand, is unsupervised and general-purpose. The learned features can be used directlywith off-the-shelf machine learning methods on different tasks, such as classification or clustering.Perhaps the work that resembles this work the most is the one in (Yanardag & Vishwanathan, 2015).We argue, however, that our approach is different and this is good motivation to pursue the study asthere has not been many work published in the area. For one, we use the skip-thought model (Kiros1Under review as a conference paper at ICLR 2017AAACCACBCBBDFEEAACACDEFECBCBBAFigure 1: A random walk over a graph is split into three subsequences (s1;s2;s3). The middlesequence is input into the encoder and the decoders attempt to reconstruct the previous and nextsub-sequence. The unattached arrows are connected to the encoder output to condition the decoder.et al., 2015) and we are not just interested in structurally similar subgraphs but also functionallysimilar ones.Our approach is based on the encoder-decoder model (Kalchbrenner & Blunsom, 2013; Cho et al.,2014); in particular, we are interested in the skip-thought model. In (Kiros et al., 2015), tuplescomposed of three consecutive sentences from word documents are fed into an RNN model and themodel attempts to reconstruct the previous and next statements given the middle sentence. Aftertraining on a large text corpus, the hidden vector values for an input sentence can be used as thatinput sequence’s feature representation. It has been shown that the model learns a function thatmaps semantically and syntactically similar sentences close to one another in feature space. In thiswork, the idea is to take instead a sequence generated by a random walk along a labeled graph andto divide it into three parts, feeding these into the encoder-decoder model. Since the structure of thegraph determines the random walk sequences that can be generated, we can treat each sub-sequenceas a representation of a particular subgraph in the graph. We argue that by training an encoder-decoder model on a large number of random walk sequences, we can learn a feature representationthat groups structurally and functionally similar subgraphs together. Figure 1 shows an example ofhow we can train the model using a random walk over a graph. A simple example that illustrateshow the model may learn to identify functionally similar subgraphs is shown in Figure 2.After the model is trained on a large sample of random walks generated from a dataset of labeledgraphs, we can then freeze the model and use the encoder as a feature extractor. 
In particular, we obtain a feature representation of a graph by sampling multiple short random walks and aggregating the information encoded in the feature representations of these short walks. We borrow an analogy from the NLP domain to highlight the idea. In order to obtain a good feature representation for a text document, short of sampling all the words in the document, one may sample a set of sentences from the document and use these to construct the document's features. Similarly, to obtain a feature representation for a graph, we sample a set of subgraphs (as represented by the short walks) and use the aggregate subgraph features to construct the final graph feature vector. Since we use the trained encoder as our feature extractor, graphs that share structural and functional properties will tend to have more similar feature vectors.

2 PROPOSED METHOD

2.1 SKIP-THOUGHT

Since our proposed approach is based on the encoder-decoder model of (Kiros et al., 2015), we begin by briefly introducing the model. The encoder-decoder model uses an RNN with GRU (Chung et al., 2014) activation as the encoder and an RNN with a conditional GRU as the decoder. The model is trained using the Adam stochastic optimization algorithm (Kingma & Ba, 2015).

[Figure 2 shows two example subgraphs, subgraph 1 and subgraph 2, embedded in a larger graph, together with two possible random walk sequences: "B-B-A-B-B-A-C-C-C-D-F-D-F" and "B-B-A-B-B-A-G-H-G-D-F-D-F".] Figure 2: Two structurally dissimilar subgraphs can be considered functionally similar if they always appear in the same neighborhood. For instance, subgraphs "C-C-C" and "G-H-G" are structurally different since they are composed of different types of nodes, but they seem to be serving the same function of connecting the same kind of regions together. If these patterns appear frequently in the dataset, the encoder-decoder model will learn very similar representations for the random walk sequences corresponding to the two subgraphs.

The input to the model is a tuple of sentences $(s_{i-1}, s_i, s_{i+1})$, with $x_i^t$ being the word embedding for the t-th word, $w_i^t$, of sentence $s_i$. The word embeddings for the middle sentence, $s_i$, are fed sequentially as input to the encoder. The encoder generates a hidden vector $h_i^t$ at each timestep t; this is the information the model retained after processing the sequence $x_i^1, \ldots, x_i^t$ and can be thought of as the sequence representation. The hidden state $h_i^N$ can thus be considered the sentence representation, given $s_i$ is of length N. Given a sequence to encode, the encoder iterates through the following equations, as given in (Kiros et al., 2015). Here the subscripts i are dropped for simplicity.

$r^t = \sigma(W_r x^t + U_r h^{t-1})$  (1)
$z^t = \sigma(W_z x^t + U_z h^{t-1})$  (2)
$\bar{h}^t = \tanh(W x^t + U(r^t \odot h^{t-1}))$  (3)
$h^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t$  (4)

where $r^t$ is the forget gate, $z^t$ is the update gate, $\bar{h}^t$ is the proposed hidden state, and $\odot$ is the component-wise product. Here $r^t$ decides what information to discard from the previous state, $z^t$ decides what new information to encode, and the new hidden vector $h^t$ is calculated accordingly. Values in $r^t$ and $z^t$ are in the range [0, 1].

Two decoders with separate parameters are used to reconstruct the previous statement $s_{i-1}$ and the next statement $s_{i+1}$. The computation for the decoder is similar to that of the encoder, except this time the models are also conditioned on the encoder output $h_i$. Decoding involves iterating through the following statements. Again the subscript i+1 (similarly, i-1) is dropped.

$r^t = \sigma(W_r^d x^{t-1} + U_r^d h^{t-1} + C_r h_i)$  (5)
$z^t = \sigma(W_z^d x^{t-1} + U_z^d h^{t-1} + C_z h_i)$  (6)
$\bar{h}^t = \tanh(W^d x^{t-1} + U^d(r^t \odot h^{t-1}) + C h_i)$  (7)
$h_{i+1}^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t$  (8)

Here the C matrices are used to bias the computation by the sentence vector produced by the encoder. Also, note that the word embeddings are from the previous and next statements, since these are what is given to the decoders. The probability of word $w_{i+1}^t$ can be calculated by

$P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) \propto \exp(v_{w_{i+1}^t} h_{i+1}^t)$  (9)

where $v_{w_{i+1}^t}$ is the row vector in the vocabulary matrix V corresponding to the word $w_{i+1}^t$. The vocabulary matrix V is a weight matrix shared by both decoders, connecting the decoders' hidden states for computing a distribution over words.

Finally, given a sentence tuple, the training objective is given by

$\sum_t \log P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) + \sum_t \log P(w_{i-1}^t \mid w_{i-1}^{<t}, h_i)$  (10)

which is the sum of log-probabilities for the words in the previous and next statements, $s_{i-1}$ and $s_{i+1}$, conditioned on the sentence representation for $s_i$. The total objective would then be the above summed over all tuples in the training data.

2.2 SKIP-GRAPH

In this work, we are interested in graph-structured data in particular. In our setting, we are given a set of labeled graphs $D = \{G_1, G_2, \ldots, G_n\}$, with each graph associated with a class label. A graph $G = (V, E, \ell_v)$ is comprised of a vertex set V, an edge set $E \subseteq V \times V$, and a node labeling function $\ell_v : V \to L_V$ which assigns each node to a label in $L_V$. Additionally, the edges may also be labeled, in which case we also have an edge labeling function $\ell_e : E \to L_E$. Nodes and edges can also have associated feature vectors; these are $f_v \in \mathbb{R}^{D_v}$ and $f_e \in \mathbb{R}^{D_e}$, respectively.

2.2.1 UNLABELED GRAPHS

Although we will be working primarily with labeled graphs, our method can be easily extended to support unlabeled graphs by including an additional pre-processing step. Algorithms like the Weisfeiler-Lehman algorithm (Weisfeiler & Lehman, 1968; Shervashidze et al., 2011) or the Morgan algorithm (Rogers & Hahn, 2010) for calculating molecular fingerprints are iterative algorithms that work by repeatedly calculating the attribute of a node via hashing of the attributes of its neighboring nodes. The final node attributes capture the local structure or topology of the graph. For unlabeled graphs, all node attributes can be initialized to a constant value and, after the algorithm is run, we can treat the node attributes as the labels for the nodes in the graph.

2.2.2 TRAINING SET GENERATION

Given a set of graphs D, a sample size K, a minimum random walk length $l_{min}$, and a maximum random walk length $l_{max}$, we take each graph $G \in D$ and generate K random walk sequences. Specifically, for a graph G, K sequences of the form

$\ell_v(v_1), \ldots, \ell_v(v_k), \ell_v(v_{k+1}), \ldots, \ell_v(v_{k+k'}), \ell_v(v_{k+k'+1}), \ldots, \ell_v(v_{k+k'+k''})$

are generated. Here, $v_1 \in V$ is a randomly selected start node, $(v_i, v_{i+1}) \in E$ for i from $1, \ldots, k + k' + k'' - 1$, and $l_{min} \le k, k', k'' \le l_{max}$. We can split each sequence into three sub-sequences with $s_1 = \ell_v(v_1), \ldots, \ell_v(v_k)$, $s_2 = \ell_v(v_{k+1}), \ldots, \ell_v(v_{k+k'})$, and $s_3 = \ell_v(v_{k+k'+1}), \ldots, \ell_v(v_{k+k'+k''})$. For each sequence, k, k', and k'' are drawn at random subject to these constraints. Since the sub-sequences do not need to have fixed lengths and can instead be anywhere between $l_{min}$ and $l_{max}$, regions of varying sizes can easily be considered. In the above formulation, we assume that only the vertices in the graph are labeled and node and edge features are not given.
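Before turning to the labelled-edge and feature case below, here is a minimal sketch of the tuple-generation procedure of Section 2.2.2 for the vertex-labels-only setting just described. The adjacency-dict representation and all names are ours, not the authors':

import random

random.seed(0)

def walk_tuples(adj, labels, K, lmin, lmax):
    # sample K walks per graph, each the concatenation of three segments of
    # random lengths k, k', k'' in [lmin, lmax]; return (s1, s2, s3) tuples
    # of node labels. adj maps a node to its neighbour list; labels maps a
    # node to its label.
    tuples = []
    for _ in range(K):
        k, k2, k3 = (random.randint(lmin, lmax) for _ in range(3))
        v = random.choice(list(adj))            # random start node
        seq = [v]
        for _ in range(k + k2 + k3 - 1):
            v = random.choice(adj[v])           # follow a random edge
            seq.append(v)
        lab = [labels[u] for u in seq]
        tuples.append((lab[:k], lab[k:k + k2], lab[k + k2:]))
    return tuples

# toy triangle graph
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
labels = {0: "A", 1: "B", 2: "C"}
print(walk_tuples(adj, labels, K=2, lmin=2, lmax=4))

Each returned tuple (s1, s2, s3) plays the role of the $(s_{i-1}, s_i, s_{i+1})$ sentence triple in the skip-thought objective above.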
When nodes or edges are labeled and feature vectors are provided, we can use a one-hot embedding to represent each unique combination of labels and features. This treats each distinct combination as a unique "word" and does not capture the relationship between nodes or edges that share labels or certain features. A better approach is to simply use a one-of-$|L|$ vector to encode the label and concatenate this with the feature vector; this allows the node or edge embedding to capture shared features and labels.

Once all the tuples of random walk sequences have been generated, they can be used to train the encoder-decoder (we use the implementation in https://github.com/ryankiros/skip-thoughts) in an unsupervised fashion.

2.2.3 OBTAINING FINAL GRAPH REPRESENTATION

After the encoder-decoder has been trained, we can freeze the model and use the encoder to generate representations, $h_i$, for any arbitrary random walk sequence. Ultimately, however, we are interested in obtaining a representation for entire graphs, so we try several strategies for aggregating the encoder representations obtained from a set of independent random walks sampled from a given graph.

1. Single walk: In this approach we do not use several encoder representations. Instead, we train the model on relatively long (relative to the size of the graphs in the dataset) random walk sequences and use a single long walk over the graph to obtain its representation.
2. Average: We compute the component-wise average of the encoder representations of the sampled random walk sequences. This is then used as the graph representation.
3. Max: As in (Kiela & Bottou, 2014), we take the component-wise absolute maximum of all encoder representations.
4. Cluster: The encoder representations are first fed into a clustering technique like K-means (Hamerly & Elkan, 2003) and we use the cluster information to create a bag-of-clusters vector that serves as the graph's representation.

The procedure for obtaining the graph embeddings is summarized in Algorithm 1; a small code sketch of the aggregation step follows below. The calculated graph embeddings can now be used with any off-the-shelf machine learning method.

Algorithm 1: Calculate graph embedding
Input: Training set D, sample size K, walk lengths $l_{min}$ and $l_{max}$, aggregate sample size $K'$, and aggregate method agg
Output: Graph embeddings
1 Generate set of $K \cdot |D|$ random walk tuples, S;
2 Train encoder-decoder model using S;
3 for each G in D do
4     Randomly select $K'$ random walks;
5     Obtain encoder representations $h_1, \ldots, h_{K'}$ from the random walks;
6     Compute graph embedding with agg($h_1, \ldots, h_{K'}$);
7 end
8 Return final graph embeddings;

3 EXPERIMENTS

3.1 DATASET

We evaluate our proposed method on the binary classification task using four chemical compound datasets (Kong et al., 2011). The datasets contain chemical compounds encoded in the SMILES format (Weininger, 1988); class labels indicate the anti-cancer properties (active or inactive) of each compound. We use the RDKit package to obtain the molecular graphs from the SMILES data. We also use RDKit to obtain the labels for the nodes (atom type) and edges (bond type). Additionally, we used the number of attached hydrogens as a node feature and bond conjugation as an edge feature. Since the edges in the datasets we evaluate on are also labeled, the generated random walk sequences include edges. The datasets are all highly skewed, with far more negative samples than positive ones, so we tested the methods on balanced datasets by selecting a random set of negative samples equal to the positive ones.
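As promised above, here is a small sketch of the aggregation step (line 6 of Algorithm 1) for the Average and Max strategies; the Cluster variant would additionally run K-means on the rows first. The encoder producing the walk vectors is assumed to come from the trained model of step 2, and the names here are ours:

import numpy as np

def graph_embedding(walk_vectors, method="average"):
    # combine the encoder representations h_1 .. h_K' of K' sampled walks
    # into one graph vector; walk_vectors is a (K', d) array
    h = np.asarray(walk_vectors)
    if method == "average":
        return h.mean(axis=0)                     # component-wise mean
    if method == "max":
        # component-wise absolute maximum, keeping the signed value
        idx = np.abs(h).argmax(axis=0)
        return h[idx, np.arange(h.shape[1])]
    raise ValueError(method)

h = np.random.default_rng(0).normal(size=(5, 8))  # 5 walks, d = 8
print(graph_embedding(h, "max").shape)            # (8,)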
Table 1 shows a summary of the datasets used. The average size of the moleculargraphs in each of the four datasets is around 30.Table 1: Summary of experimental datasets. “# pos” stands for the number of positive samples.dataset # graphs # pos detailsNCI81 40700 1396 Colon CancerNCI83 27992 2276 Breast CancerNCI123 40152 3112 LeukemiaHIV 7781 266 HIV Anti-virus3.2 C OMPARED METHODSWe compared our proposed approach with several state-of-the-art techniques. Since the methodis a task-irrelevant way to obtain graph representations, the goal of the paper isn’t necessarily tocome up with a method that achieves absolute best performance on the tested datasets so we do nottest against an exhaustive list of methods. Our primary objective is to see whether the method can2http://www.rdkit.org/5Under review as a conference paper at ICLR 2017potentially be used to learn useful graph embeddings as a starting point for future investigation inthe area. Since we are testing the method using molecular graph datasets, we chose to compareagainst techniques that have achieved state-of-the-art performance on these type of graphs. We alsocompare against a method that learns node embeddings instead of an entire graph embedding. Thetested methods are:ECFP (Rogers & Hahn, 2010): Extended-connectivity circular fingerprints, which are arefinement of the Morgan algorithm (Morgan, 1965), use an iterative approach to encodeinformation about substructures in a molecular graph in a fingerprint vector. In this methoda hash function is used to map the concatenated features from a neighborhood to an indexin the fingerprint vector.NeuralFPS (Duvenaud et al., 2015): Neural fingerprints replace the function that is used tocompute a fingerprint vector with a differentiable neural network. This allows the methodto learn from the data, prioritizing useful or discriminative features.DeepWalk (Perozzi et al., 2014): The DeepWalk model learns representations for nodes ina single graph. However, we can also train the model using random walks from multiplegraphs if the various graphs share the same kind of nodes. The model will then learn togenerate similar representations for nodes that co-occur frequently across all the graphs.To generate the final embedding for a graph, we can simply apply average pooling to thevectors of all the nodes in the graph – which is a reasonable strategy to capture the overallprofile of the graph.Skip-graph : Our proposed method. We train an encoder-decoder model using randomwalks generated from the graphs and use the encoder’s random walk representation to cal-culate the graph embedding.To test ECFP and NeuralFPS, we used the library3provided by (Duvenaud et al., 2015). The size ofthe graph embedding was restricted to 164 for all methods and a grid-search was done to optimizethe parameters of the various methods. For ECFP and NeuralFPS, we tested different values for thefollowing parameters: fingerprint radius, `2regularization penalty, step size for the optimization,hidden layer dimension, and convolution layer dimension (only for NeuralFPS). All results reportedare the average over 5-fold cross validation. Since a neural network, with a single hidden layer, wasused as the classifier in Duvenaud et al. (2015), we chose to use the same classifier for our modeland the grid-search was performed over the same set of values for classifier-related parameters. 
Inparticular, for the neural network, we tested various settings with hidden layer size selected fromf70;100;140g, and`2regularization chosen from f0:0001;0:001;0:01;0:1g.3.3 C LASSIFICATION RESULTSWe show the classification accuracy of the different methods in Table 2. The proposed methodachieves top performance in three of the four datasets we tested. It is a little surprising, however, tofind that NeuralFPS performs slightly worse than ECFP. This seems to suggest that it is overfittingthe data as NeuralFPS is a generalization of ECFP and should, in theory, be at least as good as ECFP.Also, we find that averaging the DeepWalk embeddings trained from random walks generated fromthe entire training set can be a simple yet effective way to generate a graph representation.Table 2: Summary of experimental results.method datasetHIV NCI81 NCI83 NCI123ECFP 68.30% 68.90% 62.06% 60.17%NeuralFPS 67.48% 65.24% 59.91% 60.00%DeepWalk 69.90% 68.00% 63.89% 64.43%Skip-graph 72.77% 69.98% 63.80% 62.60%3https://github.com/HIPS/neural-fingerprint6Under review as a conference paper at ICLR 2017(a) Performance of various aggregation methods (b) Accuracy versus training epochs(c) Accuracy versus number of samples for aggrega-tionFigure 3: The performance of our proposed method under various settings.3.4 P ARAMETER STUDYWe tested the performance of the method using the various aggregation methods. The performancewas extremely poor when we trained the encoder-decoder model on long random walks and used asingle long walk to generate the graph representation. The other three aggregation strategies yieldedbetter results. Figure 3(a) shows the performance of these methods. Averaging the hidden vec-tor representations seems to yield the best performance, calculating the component-wise maximumyielded the second best results while the method that had the additional cluster pre-processing stepperformed slightly worse.We plot the accuracy of the method over the number of training epochs in Figure 3(b). With theexception of the HIV dataset, which has a relatively few number of samples, the results show agradual increase in the classification accuracy as the number of training epochs is increased. This isconsistent with results in other work that show that given a large number of training data, recurrentneural models generally achieve better results when trained longer.Figure 3(c) shows the accuracy in the classification task over different sample sizes K0, or thenumber of samples aggregated to obtain the final graph representation. It is clear from the resultsthat a better graph representation is obtained if we use more samples to calculate the final graphrepresentation. This is quite intuitive as a limited sample may not be representative and may fail tocapture the properties of the graph well enough.We tested several different values for lminandlmax and the one that seemed to perform best in ourcase waslmin= 7andlmax= 12 . This is a reasonable constraint on the random walk length giventhat the average size of the molecular graphs was around 30. We used K= 100 when generating aset of random walks to train the encoder-decoder.7Under review as a conference paper at ICLR 2017Figure 4: The learned embeddings for graphs in the HIV dataset. The 2-d representations werecalculated using Kernel PCA (Mika et al., 1998).3.5 V ISUALIZATION OF GRAPH EMBEDDINGSWe show a scatterplot of the HIV graph embeddings learned by our model in Figure 4. In particular,we highlight two pairs of graphs that had very similar embeddings. 
We note that the first pair of graphs (the one on the right) is structurally similar, that is, they have a large sub-structure in common. The graphs in the second pair each contain two similar substructures that are joined by segments that appear to be "functionally" similar.

3.6 USING AN ENSEMBLE OF CLASSIFIERS

Since it is possible to generate many different sets of random walks to train the encoder-decoder model, we tried training five encoders on five separate sets of random walks. An ensemble (Opitz & Maclin, 1999) of five classifiers is then created, with each classifier trained on the graph representations obtained from one of the five encoders. We compare the predictive accuracy of the ensemble versus the single classifier when all other settings are fixed. We observed a slight improvement (around 1-3%) in the accuracy of the model. All the results reported above are for the single-classifier case.

4 CONCLUSION

We introduced an unsupervised method, based on the encoder-decoder model, for generating feature representations for graph-structured data. The model was evaluated on the binary classification task on several real-world datasets. The method outperformed several state-of-the-art algorithms on the tested datasets.

There are several interesting directions for future work. For instance, we can try training multiple encoders on random walks generated using very different neighborhood selection strategies. This may allow the different encoders to capture different properties of the graphs. We would also like to test the approach using different neural network architectures. Finally, it would be interesting to test the method on other types of heterogeneous information networks. | HJctDnzSe | Comparison with Graph kernels Missing | 5: Marginally below acceptance threshold | This paper studies the graph embedding problem by using the encoder-decoder method. The experimental study on real network datasets shows that the features extracted by the proposed model are good for classification.
Strong points of this paper:
1. The idea of applying methods from natural language processing to graph mining is quite interesting.
2. The organization of the paper is clear.
Weak points of this paper:
1. Comparisons with state-of-the-art methods (graph kernels) are missing.
2. The problem is not well motivated: are there any applications of this? What is the difference from graph kernel methods? The comparison with graph kernels is missing.
3. More experiments are needed to demonstrate the power of the feature extraction method (clustering, search, prediction, etc.).
4. The presentation of the paper is weak; there are lots of typos and unclear statements.
5. The authors mention graph kernels, but did not compare against them in the experiments. Also, comparing only the classification accuracy of the proposed method is not enough. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
BkSqjHqxg | ICLR.cc/2017/conference | 2017 | Skip-graph: Learning graph embeddings with an encoder-decoder model | ["John Boaz Lee", "Xiangnan Kong"] | In this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequence of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs — as represented by their random walks — that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors using several real-world datasets on the graph classification task. The proposed model is able to achieve good results against state-of- the-art techniques. | ["Unsupervised Learning", "Deep learning"] | ABSTRACTIn this work, we study the problem of feature representation learning for graph-structured data. Many of the existing work in the area are task-specific and basedon supervised techniques. We study a method for obtaining a generic featurerepresentation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processingdomain to learn feature representations of sentences. In our proposed approach,we train the encoder-decoder model to predict the random walk sequence of neigh-boring regions in a graph given a random walk along a particular region. The goalis to map subgraphs — as represented by their random walks — that are struc-turally and functionally similar to nearby locations in feature space. We evaluatethe learned graph vectors using several real-world datasets on the graph classifi-cation task. The proposed model is able to achieve good results against state-of-the-art techniques.1 I NTRODUCTIONThe skip-gram model (Mikolov et al., 2013) was originally introduced in the natural language pro-cessing (NLP) domain as a model for learning vector representations of words. Recently, it hasbeen adapted successfully to solve the problem of learning node representations for graph-structureddata (Grover & Leskovec, 2016; Perozzi et al., 2014). The learned vectors can then be used directlyin problems such as link prediction (Miller et al., 2009), or clustering of nodes on a graph (Vinayaket al., 2014). However, in many real-world applications we need to learn a feature representation forthe entire graph instead of representations for just the nodes in the graph. In this paper, we studythe graph representation learning problem, where the task is to learn a feature representation for anygraph object. We propose a novel solution based upon the encoder-decoder model.Graph-structured data can be found in many different domains including biology, chemistry, andthe study of social networks. For instance, in chemistry, chemical compounds can be representedas molecular graphs (Duvenaud et al., 2015). In social network analysis, the interaction amongdifferent entities of a community can be captured using a social graph (Yanardag & Vishwanathan,2015). 
A natural question that arises in these scenarios is what the structure of a graph tells us about the properties of the graph (e.g., what does the molecular graph tell us about the compound's aqueous solubility, or its anti-cancer activity?). In other words, we are often interested in performing machine learning tasks on graph-structured data. Many techniques have been proposed to solve this problem; these include learning graph kernels (Vishwanathan et al., 2010), identifying discriminative subgraphs (Kong et al., 2011), using specially designed neural network models such as the graph neural network (Scarselli et al., 2009), and learning the graph fingerprint (Duvenaud et al., 2015). Most of the approaches for learning graph features are supervised and task-specific. Our approach, on the other hand, is unsupervised and general-purpose. The learned features can be used directly with off-the-shelf machine learning methods on different tasks, such as classification or clustering.

Perhaps the work that resembles this work the most is the one in (Yanardag & Vishwanathan, 2015). We argue, however, that our approach is different, and this is good motivation to pursue the study as there has not been much work published in the area. For one, we use the skip-thought model (Kiros et al., 2015), and we are not just interested in structurally similar subgraphs but also functionally similar ones.

Figure 1: A random walk over a graph is split into three subsequences (s_1, s_2, s_3). The middle sequence is input into the encoder and the decoders attempt to reconstruct the previous and next sub-sequence. The unattached arrows are connected to the encoder output to condition the decoder.

Our approach is based on the encoder-decoder model (Kalchbrenner & Blunsom, 2013; Cho et al., 2014); in particular, we are interested in the skip-thought model. In (Kiros et al., 2015), tuples composed of three consecutive sentences from word documents are fed into an RNN model and the model attempts to reconstruct the previous and next statements given the middle sentence. After training on a large text corpus, the hidden vector values for an input sentence can be used as that input sequence's feature representation. It has been shown that the model learns a function that maps semantically and syntactically similar sentences close to one another in feature space. In this work, the idea is to take instead a sequence generated by a random walk along a labeled graph and to divide it into three parts, feeding these into the encoder-decoder model. Since the structure of the graph determines the random walk sequences that can be generated, we can treat each sub-sequence as a representation of a particular subgraph in the graph. We argue that by training an encoder-decoder model on a large number of random walk sequences, we can learn a feature representation that groups structurally and functionally similar subgraphs together. Figure 1 shows an example of how we can train the model using a random walk over a graph. A simple example that illustrates how the model may learn to identify functionally similar subgraphs is shown in Figure 2.

After the model is trained on a large sample of random walks generated from a dataset of labeled graphs, we can then freeze the model and use the encoder as a feature extractor.
In particular, we obtain a feature representation of a graph by sampling multiple short random walks and aggregating the information encoded in the feature representations of these short walks. We borrow an analogy from the NLP domain to highlight the idea. In order to obtain a good feature representation for a text document, short of sampling all the words in the document, one may sample a set of sentences from the document and use these to construct the features for the document. Similarly, to obtain a feature representation for a graph, we sample a set of subgraphs (as represented by the short walks) and use the aggregate subgraph features to construct the final graph feature vector. Since we use the trained encoder as our feature extractor, graphs that share structural and functional properties will tend to have more similar feature vectors.

2 PROPOSED METHOD

2.1 SKIP-THOUGHT

Since our proposed approach is based on the encoder-decoder model of (Kiros et al., 2015), we begin by briefly introducing the model. The encoder-decoder model uses an RNN with GRU (Chung et al., 2014) activation as the encoder and an RNN with a conditional GRU as the decoder. The model is trained using the Adam stochastic optimization algorithm (Kingma & Ba, 2015).

Figure 2 (subgraph 1 and subgraph 2, with possible random walk sequences "B-B-A-B-B-A-C-C-C-D-F-D-F" and "B-B-A-B-B-A-G-H-G-D-F-D-F"): Two structurally dissimilar subgraphs can be considered functionally similar if they always appear in the same neighborhood. For instance, subgraphs "C-C-C" and "G-H-G" are structurally different since they are composed of different types of nodes, but they seem to be serving the same function of connecting the same kind of regions together. If these patterns appear frequently in the dataset, the encoder-decoder model will learn very similar representations for the random walk sequences corresponding to the two subgraphs.

The input to the model is a tuple of sentences (s_{i-1}, s_i, s_{i+1}), with x_i^t being the word embedding for the t-th word, w_i^t, of sentence s_i. The word embeddings for the middle sentence, s_i, are fed sequentially as input to the encoder. The encoder generates a hidden vector h_i^t at each time step t; this is the information the model has retained after processing the sequence x_i^1, \ldots, x_i^t and can be thought of as the sequence representation. The hidden state h_i^N can thus be considered the sentence representation, given s_i is of length N. Given a sequence to encode, the encoder iterates through the following equations, as given in (Kiros et al., 2015). Here the subscripts i are dropped for simplicity.

r^t = \sigma(W_r x^t + U_r h^{t-1})    (1)
z^t = \sigma(W_z x^t + U_z h^{t-1})    (2)
\bar{h}^t = \tanh(W x^t + U(r^t \odot h^{t-1}))    (3)
h^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t    (4)

where r^t is the forget gate, z^t is the update gate, \bar{h}^t is the proposed hidden state, and \odot is the component-wise product. Here r^t decides what information to discard from the previous state, z^t decides what new information to encode, and the new hidden vector h^t is calculated accordingly. Values in r^t and z^t are in the range [0, 1].
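For concreteness, here is a minimal NumPy sketch of one GRU encoder step implementing equations (1)-(4); the parameter dictionary and the `gru_step`/`encode` names are our own illustration, not code from the paper or the skip-thoughts implementation it builds on.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, p):
    """One GRU update, equations (1)-(4); p holds the weight matrices."""
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev)           # (1) forget gate
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev)           # (2) update gate
    h_bar = np.tanh(p["W"] @ x_t + p["U"] @ (r * h_prev))   # (3) proposed state
    return (1.0 - z) * h_prev + z * h_bar                   # (4) new hidden state

def encode(word_embeddings, p, hidden_dim):
    """Feed a sentence's word embeddings through the encoder; the final
    state h^N serves as the sequence representation."""
    h = np.zeros(hidden_dim)
    for x_t in word_embeddings:
        h = gru_step(x_t, h, p)
    return h
```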
Two decoders with separate parameters are used to reconstruct the previous statement s_{i-1} and the next statement s_{i+1}. The computation for the decoder is similar to that of the encoder, except this time the models are also conditioned on the encoder output h_i. Decoding involves iterating through the following equations. Again the subscript i+1 (similarly, i-1) is dropped.

r^t = \sigma(W_r^d x^{t-1} + U_r^d h^{t-1} + C_r h_i)    (5)
z^t = \sigma(W_z^d x^{t-1} + U_z^d h^{t-1} + C_z h_i)    (6)
\bar{h}^t = \tanh(W^d x^{t-1} + U^d(r^t \odot h^{t-1}) + C h_i)    (7)
h_{i+1}^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t    (8)

Here the C matrices are used to bias the computation by the sentence vector produced by the encoder. Also, note that the word embeddings are from the previous and next statements, since these are what is given to the decoders. The probability of word w_{i+1}^t can be calculated by

P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) \propto \exp(v_{w_{i+1}^t} h_{i+1}^t)    (9)

where v_{w_{i+1}^t} is the row vector in the vocabulary matrix V corresponding to the word w_{i+1}^t. The vocabulary matrix, V, is a weight matrix shared by both decoders, connecting the decoders' hidden states for computing a distribution over words.

Finally, given a sentence tuple, the training objective is given by

\sum_t \log P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) + \sum_t \log P(w_{i-1}^t \mid w_{i-1}^{<t}, h_i)    (10)

which is the sum of log-probabilities for the words in the previous and next statements, s_{i-1} and s_{i+1}, conditioned on the sentence representation for s_i. The total objective is then the above summed over all tuples in the training data.
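A compact NumPy sketch of how equations (9)-(10) score one target sentence given its decoder states; the `sequence_log_prob` name and the row-major layout of V are our own illustrative assumptions.

```python
import numpy as np

def sequence_log_prob(H, V, words):
    """Sum over t of log P(w^t | w^{<t}, h_i), as in equations (9)-(10).

    H: (T, d) decoder hidden states h^t for one sentence.
    V: (|vocab|, d) shared vocabulary matrix; row v_w scores word w.
    words: length-T sequence of target word indices.
    """
    logits = H @ V.T                                # v_w h^t for every word w
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_Z = np.log(np.exp(logits).sum(axis=1))      # per-step log normalizers
    return (logits[np.arange(len(words)), words] - log_Z).sum()
```

The tuple objective (10) is then `sequence_log_prob` evaluated on the next sentence plus the same quantity for the previous sentence, summed over all training tuples.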
2.2 SKIP-GRAPH

In this work, we are interested in graph-structured data in particular. In our setting, we are given a set of labeled graphs D = {G_1, G_2, \ldots, G_n}, with each graph associated with a class label. A graph G = (V, E, \ell_v) is comprised of a vertex set V, an edge set E \subseteq V \times V, and a node labeling function \ell_v : V \to L_V which assigns each node to a label in L_V. Additionally, the edges may also be labeled, in which case we also have an edge labeling function \ell_e : E \to L_E. Nodes and edges can also have associated feature vectors; these are f_v \in R^{D_v} and f_e \in R^{D_e}, respectively.

2.2.1 UNLABELED GRAPHS

Although we will be working primarily with labeled graphs, our method can be easily extended to support unlabeled graphs by including an additional pre-processing step. Algorithms like the Weisfeiler-Lehman algorithm (Weisfeiler & Lehman, 1968; Shervashidze et al., 2011) or the Morgan algorithm (Rogers & Hahn, 2010) for calculating molecular fingerprints are iterative algorithms that work by repeatedly calculating the attribute of a node via hashing of the attributes of its neighboring nodes. The final node attributes capture the local structure or topology of the graph. For unlabeled graphs, all node attributes can be initialized to a constant value, and after the algorithm is run, we can treat the node attributes as the labels for the nodes in the graph.

2.2.2 TRAINING SET GENERATION

Given a set of graphs D, a sample size K, a minimum random walk length l_min, and a maximum random walk length l_max, we take each graph G in D and generate K random walk sequences. Specifically, for a graph G, K sequences of the form

\ell_v(v_1), \ldots, \ell_v(v_k), \ell_v(v_{k+1}), \ldots, \ell_v(v_{k+k'}), \ell_v(v_{k+k'+1}), \ldots, \ell_v(v_{k+k'+k''})

are generated. Here, v_1 \in V is a randomly selected start node, (v_i, v_{i+1}) \in E for i from 1, \ldots, k+k'+k''-1, and l_min \le k, k', k'' \le l_max. We can split each sequence into three sub-sequences with s_1 = \ell_v(v_1), \ldots, \ell_v(v_k), s_2 = \ell_v(v_{k+1}), \ldots, \ell_v(v_{k+k'}), and s_3 = \ell_v(v_{k+k'+1}), \ldots, \ell_v(v_{k+k'+k''}). For each sequence, k, k', and k'' are randomly drawn to be between the constraints. Since the sub-sequences do not need to have fixed lengths and can instead be anywhere between l_min and l_max, regions of varying sizes can easily be considered.
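A minimal Python sketch of this sampling step, assuming an adjacency-list graph with no isolated nodes; the `sample_walk_tuple` name and the plain-dict encoding are illustrative choices, not the authors' code.

```python
import random

def sample_walk_tuple(adj, node_labels, l_min, l_max):
    """Sample one random walk tuple (s1, s2, s3) from a graph.

    adj: dict mapping each node to a list of its neighbors.
    node_labels: dict mapping each node v to its label ell_v(v).
    Returns three label sub-sequences with lengths in [l_min, l_max].
    """
    # Draw the three segment lengths k, k', k'' within the constraints.
    k, k2, k3 = (random.randint(l_min, l_max) for _ in range(3))
    total = k + k2 + k3

    # Random walk of the required total length from a random start node;
    # assumes every node has at least one neighbor (molecular graphs do).
    v = random.choice(list(adj))
    walk = [v]
    while len(walk) < total:
        v = random.choice(adj[v])   # follow a random edge (v, v') in E
        walk.append(v)

    labels = [node_labels[v] for v in walk]
    return labels[:k], labels[k:k + k2], labels[k + k2:]
```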
In the above formulation, we assume that only the vertices in the graph are labeled and node and edge features are not given. When nodes, or edges, are labeled and feature vectors are provided, we could use a one-hot embedding to represent each unique combination of labels and features. This treats each distinct combination as a unique "word", however, and does not capture the relationship between nodes or edges that share labels or certain features. A better approach is to simply use a one-of-|L| vector to encode the label and concatenate this with the feature vector; this allows the node or edge embedding to capture shared features and labels.

Once all the tuples of random walk sequences have been generated, they can be used to train the encoder-decoder in an unsupervised fashion. (We use the implementation in https://github.com/ryankiros/skip-thoughts.)

2.2.3 OBTAINING FINAL GRAPH REPRESENTATION

After the encoder-decoder has been trained, we can freeze the model and use the encoder to generate representations, h_i, for any arbitrary random walk sequence. Ultimately, however, we are interested in obtaining a representation for entire graphs, so we try several strategies for aggregating the encoder representations obtained from a set of independent random walks sampled from a given graph.

1. Single walk: In this approach we do not use several encoder representations. Instead, we train the model on relatively long (relative to the size of the graphs in the dataset) random walk sequences and use a single long walk over the graph to obtain its representation.
2. Average: We compute the component-wise average of the encoder representations of the sampled random walk sequences. This is then used as the graph representation.
3. Max: As in (Kiela & Bottou, 2014), we take the component-wise absolute maximum of all encoder representations.
4. Cluster: The encoder representations are first fed into a clustering technique like K-means (Hamerly & Elkan, 2003), and we use the cluster information to create a bag-of-clusters vector that serves as the graph's representation.

The procedure for obtaining the graph embeddings is summarized in Algorithm 1. The calculated graph embeddings can then be used with any off-the-shelf machine learning method.

Algorithm 1: Calculate graph embedding
Input: Training set D, sample size K, walk lengths l_min and l_max, aggregate sample size K', and aggregate method agg
Output: Graph embeddings
1 Generate set of K|D| random walk tuples, S;
2 Train encoder-decoder model using S;
3 for each G in D do
4     Randomly select K' random walks;
5     Obtain encoder representations h_1, ..., h_{K'} from the random walks;
6     Compute graph embedding with agg(h_1, ..., h_{K'});
7 end
8 Return final graph embeddings;
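To make steps 4-6 of Algorithm 1 concrete, here is a small sketch of the Average and Max aggregation strategies; `encode` stands for the frozen encoder applied to a walk's label sequence and `sample_walk` for a single-walk sampler, both passed in as functions since the paper does not prescribe a particular implementation.

```python
import numpy as np

def graph_embedding(graph, encode, sample_walk, k_agg, agg="average"):
    """Aggregate K' encoder representations of sampled walks into one
    graph embedding (steps 4-6 of Algorithm 1)."""
    H = np.stack([encode(sample_walk(graph)) for _ in range(k_agg)])
    if agg == "average":                 # component-wise average
        return H.mean(axis=0)
    if agg == "max":                     # component-wise absolute maximum
        return H[np.abs(H).argmax(axis=0), np.arange(H.shape[1])]
    raise ValueError("unknown aggregation: " + agg)
```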
3 EXPERIMENTS

3.1 DATASET

We evaluate our proposed method on the binary classification task using four chemical compound datasets (Kong et al., 2011). The datasets contain chemical compounds encoded in the SMILES format (Weininger, 1988); class labels indicate the anti-cancer properties (active or inactive) of each compound. We use the RDKit package (http://www.rdkit.org/) to obtain the molecular graphs from the SMILES data. We also use RDKit to obtain the labels for the nodes (atom type) and edges (bond type). Additionally, we used the number of attached hydrogens as a node feature and bond conjugation as an edge feature. Since the edges in the datasets we evaluate on are also labeled, the generated random walk sequences include edges. The datasets are all highly skewed, with far more negative samples than positive ones; we tested the methods on balanced datasets by selecting a random set of negative samples equal in size to the positive ones. Table 1 shows a summary of the datasets used. The average size of the molecular graphs in each of the four datasets is around 30.

Table 1: Summary of experimental datasets. "# pos" stands for the number of positive samples.

dataset | # graphs | # pos | details
NCI81   | 40700    | 1396  | Colon Cancer
NCI83   | 27992    | 2276  | Breast Cancer
NCI123  | 40152    | 3112  | Leukemia
HIV     | 7781     | 266   | HIV Anti-virus

3.2 COMPARED METHODS

We compared our proposed approach with several state-of-the-art techniques. Since the method is a task-independent way to obtain graph representations, the goal of the paper isn't necessarily to come up with a method that achieves the absolute best performance on the tested datasets, so we do not test against an exhaustive list of methods. Our primary objective is to see whether the method can potentially be used to learn useful graph embeddings, as a starting point for future investigation in the area. Since we are testing the method using molecular graph datasets, we chose to compare against techniques that have achieved state-of-the-art performance on these types of graphs. We also compare against a method that learns node embeddings instead of an entire graph embedding. The tested methods are:

- ECFP (Rogers & Hahn, 2010): Extended-connectivity circular fingerprints, which are a refinement of the Morgan algorithm (Morgan, 1965), use an iterative approach to encode information about substructures in a molecular graph in a fingerprint vector. In this method a hash function is used to map the concatenated features from a neighborhood to an index in the fingerprint vector.
- NeuralFPS (Duvenaud et al., 2015): Neural fingerprints replace the function that is used to compute a fingerprint vector with a differentiable neural network. This allows the method to learn from the data, prioritizing useful or discriminative features.
- DeepWalk (Perozzi et al., 2014): The DeepWalk model learns representations for nodes in a single graph. However, we can also train the model using random walks from multiple graphs if the various graphs share the same kind of nodes. The model will then learn to generate similar representations for nodes that co-occur frequently across all the graphs. To generate the final embedding for a graph, we can simply apply average pooling to the vectors of all the nodes in the graph, which is a reasonable strategy to capture the overall profile of the graph.
- Skip-graph: Our proposed method. We train an encoder-decoder model using random walks generated from the graphs and use the encoder's random walk representation to calculate the graph embedding.

To test ECFP and NeuralFPS, we used the library provided by (Duvenaud et al., 2015) (https://github.com/HIPS/neural-fingerprint). The size of the graph embedding was restricted to 164 for all methods, and a grid search was done to optimize the parameters of the various methods. For ECFP and NeuralFPS, we tested different values for the following parameters: fingerprint radius, l2 regularization penalty, step size for the optimization, hidden layer dimension, and convolution layer dimension (only for NeuralFPS). All results reported are the average over 5-fold cross validation. Since a neural network with a single hidden layer was used as the classifier in Duvenaud et al. (2015), we chose to use the same classifier for our model, and the grid search was performed over the same set of values for classifier-related parameters. In particular, for the neural network, we tested various settings with the hidden layer size selected from {70, 100, 140} and the l2 regularization chosen from {0.0001, 0.001, 0.01, 0.1}.

3.3 CLASSIFICATION RESULTS

We show the classification accuracy of the different methods in Table 2. The proposed method achieves top performance in three of the four datasets we tested. It is a little surprising, however, to find that NeuralFPS performs slightly worse than ECFP. This seems to suggest that it is overfitting the data, as NeuralFPS is a generalization of ECFP and should, in theory, be at least as good as ECFP. Also, we find that averaging the DeepWalk embeddings trained from random walks generated from the entire training set can be a simple yet effective way to generate a graph representation.

Table 2: Summary of experimental results.

method     | HIV     | NCI81   | NCI83   | NCI123
ECFP       | 68.30%  | 68.90%  | 62.06%  | 60.17%
NeuralFPS  | 67.48%  | 65.24%  | 59.91%  | 60.00%
DeepWalk   | 69.90%  | 68.00%  | 63.89%  | 64.43%
Skip-graph | 72.77%  | 69.98%  | 63.80%  | 62.60%

Figure 3: The performance of our proposed method under various settings. (a) Performance of various aggregation methods. (b) Accuracy versus training epochs. (c) Accuracy versus number of samples for aggregation.

3.4 PARAMETER STUDY

We tested the performance of the method using the various aggregation methods. The performance was extremely poor when we trained the encoder-decoder model on long random walks and used a single long walk to generate the graph representation. The other three aggregation strategies yielded better results. Figure 3(a) shows the performance of these methods. Averaging the hidden vector representations seems to yield the best performance; calculating the component-wise maximum yielded the second best results, while the method with the additional cluster pre-processing step performed slightly worse.

We plot the accuracy of the method over the number of training epochs in Figure 3(b). With the exception of the HIV dataset, which has a relatively small number of samples, the results show a gradual increase in the classification accuracy as the number of training epochs is increased. This is consistent with results in other work showing that, given a large amount of training data, recurrent neural models generally achieve better results when trained longer.

Figure 3(c) shows the accuracy in the classification task over different sample sizes K', i.e., the number of samples aggregated to obtain the final graph representation. It is clear from the results that a better graph representation is obtained if we use more samples to calculate the final graph representation. This is quite intuitive, as a limited sample may not be representative and may fail to capture the properties of the graph well enough.

We tested several different values for l_min and l_max, and the setting that seemed to perform best in our case was l_min = 7 and l_max = 12. This is a reasonable constraint on the random walk length given that the average size of the molecular graphs was around 30. We used K = 100 when generating a set of random walks to train the encoder-decoder.

Figure 4: The learned embeddings for graphs in the HIV dataset. The 2-d representations were calculated using Kernel PCA (Mika et al., 1998).

3.5 VISUALIZATION OF GRAPH EMBEDDINGS

We show a scatterplot of the HIV graph embeddings learned by our model in Figure 4. In particular, we highlight two pairs of graphs that had very similar embeddings.
We note that the first pair of graphs (the one on the right) are structurally similar; that is, they have a large sub-structure in common. The graphs in the second pair each contain two similar substructures that are joined by segments that appear to be "functionally" similar.

3.6 USING AN ENSEMBLE OF CLASSIFIERS

Since it is possible to generate many different sets of random walks to train the encoder-decoder model, we tried training five encoders on five separate sets of random walks. An ensemble (Opitz & Maclin, 1999) of five classifiers is then created, with each classifier trained on the graph representations obtained from one of the five encoders. We compare the predictive accuracy of the ensemble versus the single classifier when all other settings are fixed. We observed a slight improvement (around 1-3%) in the accuracy of the model. All the results reported above are for the single-classifier case.

4 CONCLUSION

We introduced an unsupervised method, based on the encoder-decoder model, for generating feature representations for graph-structured data. The model was evaluated on the binary classification task on several real-world datasets. The method outperformed several state-of-the-art algorithms on the tested datasets.

There are several interesting directions for future work. For instance, we can try training multiple encoders on random walks generated using very different neighborhood selection strategies. This may allow the different encoders to capture different properties in the graphs. We would also like to test the approach using different neural network architectures. Finally, it would be interesting to test the method on other types of heterogeneous information networks. | H1Q8Ckz4l | Good paper | 7: Good paper, accept | The paper presents a method to learn graph embeddings in an unsupervised way using random walks. It is well written and the execution appears quite accurate. The area of learning whole-graph representations does not seem to be very well explored in general, and the proposed approach enjoys having very few competitors.
In a nutshell, the idea is to linearize the graph using random walks and to compute the embedding of the central segment of each walk using the skip-thought criterion. Not being an expert in biology, I cannot comment on whether or not this makes sense, but the gains reported in Table 2 are quite significant.
An anonymous public comment compared this work to a number of others in which the problem of learning representations of nodes is considered. While this is arguably a different goal, one natural baseline would be to pool these representations using mean- or max-pooling. It would be very interesting to do such a comparison, especially given that the considered approach heavily relies on pooling (see Figure 3(c)).
To sum up, I think it is a nice paper, and with more baselines I would be ready to further increase the numerical score.
| 3: The reviewer is fairly confident that the evaluation is correct |
BkSqjHqxg | ICLR.cc/2017/conference | 2017 | Skip-graph: Learning graph embeddings with an encoder-decoder model | ["John Boaz Lee", "Xiangnan Kong"] | HJLfcyL4x | An extension of skip-graph architecture to classifying similar molecular graphs | 6: Marginally above acceptance threshold | The authors take the skip-graph architecture (Kiros 2015) and apply it to classifying labeled graphs (molecular graphs). They do this by creating many sentences by walking the graph randomly, and asking the model to predict the previous part and the next part from the middle part. Activations of the encoder part of this model on a walk generated from a new graph are used as features for a binary classifier used to predict whether the molecule has anti-cancer properties.
The paper is well written, except that the evaluation section is missing details of how the embedding is used for the actual classification (i.e., what classifier is used).
Unfortunately I'm not familiar with the dataset and how hard it is to achieve the results they demonstrate; that would be the important factor to weigh for the paper's acceptance. | 1: The reviewer's evaluation is an educated guess
H1oRQDqlg | ICLR.cc/2017/conference | 2017 | Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning | ["Dilin Wang", "Qiang Liu"] | We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable in terms of the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results. | ["Unsupervised Learning"] |

ABSTRACT

We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient (Liu & Wang, 2016) that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable in terms of the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results.

1 INTRODUCTION

Modern machine learning increasingly relies on highly complex probabilistic models to reason about uncertainty. A key computational challenge is to develop efficient inference techniques to approximate, or draw samples from, complex distributions. Currently, most inference methods, including MCMC and variational inference, are hand-designed by researchers or domain experts. This makes it difficult to fully optimize the choice of different methods and their parameters, and to exploit the structures in the problems of interest in an automatic way. Hand-designed algorithms can also be inefficient when fast inference is required repeatedly on a large number of different distributions with similar structures. This happens, for example, when we need to reason about a number of observed datasets in settings like online learning, or need fast inference as inner loops for other algorithms such as maximum likelihood training. Therefore, it is highly desirable to develop more intelligent probabilistic inference systems that can adaptively improve their own performance to fully optimize computational efficiency, and generalize to new tasks with similar structures.

Specifically, denote by p(x) a probability density of interest specified up to the normalization constant, which we want to draw samples from, or marginalize to estimate its normalization constant. We want to study the following problem:

Problem 1.
Given a distribution with density p(x) and a function f(\eta; \xi) with parameter \eta and random input \xi, for which we only have access to draws of the random input \xi (without knowing its true distribution q_0), and to the output values of f(\eta; \xi) and its derivative \partial_\eta f(\eta; \xi) given \eta and \xi. We want to find an optimal parameter \eta so that the density of the random output variable x = f(\eta; \xi) with \xi \sim q_0 closely matches the target density p(x).

Because we make no assumption on the structure of f(\eta; \xi) or on the distribution of the random input \xi, we cannot directly calculate the actual distribution of the output random variable x = f(\eta; \xi); this makes it difficult to solve Problem 1 using traditional variational inference (VI) methods. Recall that traditional VI approximates p(x) using simple proposal distributions q_\eta(x) indexed by a parameter \eta, and finds the optimal \eta by minimizing the KL divergence KL(q_\eta \| p) = E_{q_\eta}[\log(q_\eta / p)], which requires calculating the density q_\eta(x) or its derivative; this is not computable under our assumptions (even when the Monte Carlo gradient estimation and the reparametrization trick (Kingma & Welling, 2013) are applied).

In fact, it is this requirement of calculating q_\eta(x) that has been the major constraint for the design of state-of-the-art variational inference methods with rich approximation families; the recent successful algorithms (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015, to name only a few) have to handcraft special variational families to ensure the computational tractability of q_\eta(x) and simultaneously obtain high approximation accuracy, which requires substantial mathematical insight and research effort. Methods that do not require explicitly calculating q_\eta(x) can significantly simplify the design and application of VI methods, allowing practical users to focus more on choosing proposals that work best with their specific tasks. We will use the term wild variational inference to refer to new variants of variational methods that require no tractability of q_\eta(x), to distinguish them from black-box variational inference (Ranganath et al., 2014), which refers to methods that work for generic target distributions p(x) without significant model-by-model consideration (but still require calculating the proposal density q_\eta(x)).

A similar problem also appears in importance sampling (IS), where it is required to calculate the IS proposal density q(x) in order to calculate the importance weight w(x) = p(x)/q(x). However, there exist methods that use no explicit information of q(x), which, seemingly counter-intuitively, give better asymptotic variance or convergence rates than the typical IS that uses the proposal information (e.g., Liu & Lee, 2016; Briol et al., 2015; Henmi et al., 2007; Delyon & Portier, 2014). Discussions on this phenomenon date back to O'Hagan (1987), who argued that "Monte Carlo (that uses the proposal information) is fundamentally unsound" for violating the Likelihood Principle, and developed Bayesian Monte Carlo (O'Hagan, 1991) as an example that uses no information on q(x), yet gives a better convergence rate than the typical Monte Carlo O(n^{-1/2}) rate (Briol et al., 2015).
In fact, it is this requirement of calculating $q_\eta(x)$ that has been the major constraint on the design of state-of-the-art variational inference methods with rich approximation families; the recent successful algorithms (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015, to name only a few) have to handcraft special variational families to ensure the computational tractability of $q_\eta(x)$ and simultaneously obtain high approximation accuracy, which requires substantial mathematical insight and research effort. Methods that do not require explicitly calculating $q_\eta(x)$ can significantly simplify the design and application of VI methods, allowing practical users to focus more on choosing proposals that work best with their specific tasks. We will use the term wild variational inference to refer to new variants of variational methods that require no tractable $q_\eta(x)$, to distinguish them from black-box variational inference (Ranganath et al., 2014), which refers to methods that work for generic target distributions $p(x)$ without significant model-by-model consideration (but still require calculating the proposal density $q_\eta(x)$).

A similar problem also appears in importance sampling (IS), which requires calculating the IS proposal density $q(x)$ in order to compute the importance weight $w(x) = p(x)/q(x)$. However, there exist methods that use no explicit information about $q(x)$ and, seemingly counter-intuitively, give better asymptotic variance or convergence rates than typical IS that uses the proposal information (e.g., Liu & Lee, 2016; Briol et al., 2015; Henmi et al., 2007; Delyon & Portier, 2014). Discussion of this phenomenon dates back to O'Hagan (1987), who argued that "Monte Carlo (that uses the proposal information) is fundamentally unsound" for violating the Likelihood Principle, and developed Bayesian Monte Carlo (O'Hagan, 1991) as an example that uses no information on $q(x)$, yet gives a better convergence rate than the typical Monte Carlo $O(n^{-1/2})$ rate (Briol et al., 2015). Despite the substantial difference between IS and VI, these results intuitively suggest the possibility of developing efficient variational inference without calculating $q(x)$ explicitly.

In this work, we propose a simple algorithm for Problem 1 that iteratively adjusts the network parameter $\eta$ so that its output random variable changes along a Stein variational gradient direction (SVGD) (Liu & Wang, 2016) that optimally decreases its KL divergence with the target distribution. Critically, the SVGD gradient includes a repulsive term to ensure that the generated samples have the right amount of variability to match $p(x)$. In this way, we "amortize SVGD" using a neural network, which makes it possible for our method to adaptively improve its own efficiency by leveraging past experience, especially when it needs to perform fast inference repeatedly on a large number of similar tasks. As an application, we use our method to amortize the MLE training of deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method, which we call SteinGAN, mimics an adversarial game between the energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results produced by generative adversarial networks (GAN) (Goodfellow et al., 2014; Radford et al., 2015).

Related Work
The idea of amortized inference (Gershman & Goodman, 2014) has recently been applied in various domains of probabilistic reasoning, including amortized variational inference (e.g., Kingma & Welling, 2013; Rezende & Mohamed, 2015a) and data-driven proposals for (sequential) Monte Carlo methods (e.g., Paige & Wood, 2016), to name only a few. Most of these methods, however, require explicitly calculating $q(x)$ (or its gradient). One exception is a very recent paper (Ranganath et al., 2016) that avoids calculating $q(x)$ using an idea related to Stein discrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Oates et al., 2014; Chwialkowski et al., 2016). There has also been rising interest recently in the similar problem of "learning to optimize" (e.g., Andrychowicz et al., 2016; Daniel et al., 2016; Li & Malik, 2016), which is technically easier than the more general problem of "learning to sample". In fact, we show that our algorithm reduces to "learning to optimize" when only one particle is used in SVGD.

Generative adversarial networks (GAN) and their variants have recently gained remarkable success at generating realistic-looking images (Goodfellow et al., 2014; Salimans et al., 2016; Radford et al., 2015; Li et al., 2015; Dziugaite et al., 2015; Nowozin et al., 2016). All these methods are set up to train latent variable models (the generator) with the assistance of a discriminator. Our SteinGAN instead performs traditional MLE training for a deep energy model, with the help of a neural sampler that learns to draw samples from the energy model to approximate the likelihood function; this admits an adversarial interpretation: we can view the neural sampler as a generator that attempts to fool the deep energy model, which in turn serves as a discriminator that distinguishes the real samples from the simulated samples given by the neural sampler.
This idea of training MLE with neural samplers was first discussed by Kim & Bengio (2016); one of the key differences is that the neural sampler in Kim & Bengio (2016) is trained with the help of a heuristic diversity regularizer based on batch normalization, while SVGD enforces the diversity in a more principled way. Another method, by Zhao et al. (2016), also trains an energy score to distinguish real and simulated samples, but within a non-probabilistic framework (see Section 5 for more discussion). Other, more traditional approaches for training energy-based models (e.g., Ngiam et al., 2011; Xie et al., 2016) are often based on variants of MCMC-MLE or contrastive divergence (Geyer, 1991; Hinton, 2002; Tieleman, 2008), and have difficulty generating realistic-looking images from scratch.

2 STEIN VARIATIONAL GRADIENT DESCENT (SVGD)
Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a general-purpose Bayesian inference algorithm motivated by Stein's method (Stein, 1972; Barbour & Chen, 2005) and kernelized Stein discrepancy (Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2014). It uses an efficient deterministic gradient-based update to iteratively evolve a set of particles $\{x_i\}_{i=1}^n$ to minimize the KL divergence with the target distribution. SVGD has a simple form that reduces to typical gradient descent for maximizing $\log p$ when using only one particle ($n = 1$), and hence can easily be combined with the successful tricks for gradient optimization, including stochastic gradients, adaptive learning rates (such as Adagrad), and momentum.

To give a quick overview of the main idea of SVGD, let $p(x)$ be a positive density function on $\mathbb{R}^d$ which we want to approximate with a set of particles $\{x_i\}_{i=1}^n$. SVGD initializes the particles by sampling from some simple distribution $q_0$, and updates the particles iteratively by
$$x_i \leftarrow x_i + \epsilon\,\phi^*(x_i), \qquad \forall i = 1,\ldots,n, \qquad (1)$$
where $\epsilon$ is a step size, and $\phi^*(x)$ is a "particle gradient direction" chosen to maximally decrease the KL divergence between the distribution of the particles and the target distribution, in the sense that
$$\phi^* = \arg\max_{\phi\in\mathcal{F}} \Big\{ -\frac{d}{d\epsilon}\,\mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p)\,\Big|_{\epsilon=0} \Big\}, \qquad (2)$$
where $q_{[\epsilon\phi]}$ denotes the density of the updated particle $x' = x + \epsilon\,\phi(x)$ when the density of the original particle $x$ is $q$, and $\mathcal{F}$ is the set of perturbation directions that we optimize over. We choose $\mathcal{F}$ to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) $\mathcal{H}^d = \mathcal{H} \times \cdots \times \mathcal{H}$, with each $\mathcal{H}$ associated with a positive definite kernel $k(x, x')$; note that $\mathcal{H}$ is dense in the space of continuous functions for universal kernels such as the Gaussian RBF kernel.

Critically, the gradient of the KL divergence in (2) equals a simple linear functional of $\phi$, allowing us to obtain a closed-form solution for the optimal $\phi$. Liu & Wang (2016) showed that
$$-\frac{d}{d\epsilon}\,\mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p)\,\Big|_{\epsilon=0} = \mathbb{E}_{x\sim q}[\mathcal{T}_p\phi(x)], \qquad (3)$$
with
$$\mathcal{T}_p\phi(x) = \nabla_x \log p(x)^\top \phi(x) + \nabla_x \cdot \phi(x), \qquad (4)$$
where $\mathcal{T}_p$ is considered a linear operator acting on the function $\phi$, and is called the Stein operator in connection with Stein's identity, which shows that the RHS of (3) equals zero if $p = q$:
$$\mathbb{E}_p[\mathcal{T}_p\phi] = \mathbb{E}_p[\nabla_x \log p^\top \phi + \nabla_x \cdot \phi] = 0. \qquad (5)$$
This is a result of integration by parts, assuming the value of $p(x)\phi(x)$ vanishes on the boundary of the integration domain.

Therefore, the optimization in (2) reduces to
$$\mathbb{D}(q \,\|\, p) \overset{\mathrm{def}}{=} \max_{\phi\in\mathcal{H}^d}\big\{ \mathbb{E}_{x\sim q}[\mathcal{T}_p\phi(x)] \;\;\text{s.t.}\;\; \|\phi\|_{\mathcal{H}^d} \le 1 \big\}, \qquad (6)$$
where $\mathbb{D}(q\,\|\,p)$ is the kernelized Stein discrepancy defined in Liu et al. (2016), which equals zero if and only if $p = q$ under mild regularity conditions.
Importantly, the optimization in (6) yields a closed-form optimal solution,
$$\phi^*(\cdot) \propto \mathbb{E}_{x\sim q}\big[\nabla_x \log p(x)\,k(x, \cdot) + \nabla_x k(x, \cdot)\big].$$
By approximating the expectation under $q$ with the empirical average over the current particles $\{x_i\}_{i=1}^n$, SVGD admits a simple form of update:
$$x_i \leftarrow x_i + \epsilon\,\Delta x_i, \;\; \forall i = 1,\ldots,n, \quad \text{where } \Delta x_i = \hat{\mathbb{E}}_{x\in\{x_j\}_{j=1}^n}\big[\nabla_x \log p(x)\,k(x, x_i) + \nabla_x k(x, x_i)\big], \qquad (7)$$
and $\hat{\mathbb{E}}_{x\in\{x_j\}_{j=1}^n}[f(x)] = \sum_j f(x_j)/n$. The two terms in $\Delta x_i$ play two different roles: the term with the gradient $\nabla_x \log p(x)$ drives the particles toward the high-probability regions of $p(x)$, while the term with $\nabla_x k(x, x_i)$ serves as a repulsive force that encourages diversity. To see this, consider a stationary kernel $k(x, x') = k(x - x')$; then the second term reduces to $\hat{\mathbb{E}}_x \nabla_x k(x, x_i) = -\hat{\mathbb{E}}_x \nabla_{x_i} k(x, x_i)$, which can be treated as the negative gradient for minimizing the average similarity $\hat{\mathbb{E}}_x k(x, x_i)$ in terms of $x_i$. Overall, this particle update produces diverse points for distributional approximation and uncertainty assessment, and also has an interesting "momentum" effect in which the particles move collaboratively to escape local optima.

It is easy to see from (7) that $\Delta x_i$ reduces to the typical gradient $\nabla_x \log p(x_i)$ when there is only a single particle ($n = 1$) and $\nabla_x k(x, x_i) = 0$ when $x = x_i$, in which case SVGD reduces to standard gradient ascent for maximizing $\log p(x)$ (i.e., maximum a posteriori (MAP)).
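The update (7) is simple enough to state in a few lines of code. The sketch below runs SVGD on a toy standard Gaussian target with a fixed-bandwidth RBF kernel; both the target and the fixed bandwidth are illustrative simplifications (the paper's experiments use a median-heuristic bandwidth, shown later).

```python
# A self-contained sketch of the SVGD update (7) with an RBF kernel (NumPy).
import numpy as np

def rbf_kernel(X, h):
    """k(x, x') = exp(-||x - x'||^2 / h), plus its gradient w.r.t. the first argument."""
    diffs = X[:, None, :] - X[None, :, :]        # (n, n, d): x_j - x_i
    sq_dists = (diffs ** 2).sum(-1)              # (n, n)
    K = np.exp(-sq_dists / h)                    # (n, n)
    grad_K = -2.0 / h * diffs * K[:, :, None]    # d k(x_j, x_i) / d x_j
    return K, grad_K

def svgd_step(X, grad_log_p, eps=0.1, h=1.0):
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    # Delta x_i = (1/n) sum_j [ grad log p(x_j) k(x_j, x_i) + grad_{x_j} k(x_j, x_i) ]
    phi = (K.T @ grad_log_p(X) + grad_K.sum(axis=0)) / n
    return X + eps * phi

grad_log_p = lambda X: -X        # score of N(0, I), purely for illustration
X = np.random.randn(50, 2)       # initial particles from a simple q0
for _ in range(200):
    X = svgd_step(X, grad_log_p)
```

Dropping the `grad_K` term recovers plain gradient ascent on $\log p$ run in parallel; it is the repulsive term that spreads the particles to match the variability of $p(x)$.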
3 AMORTIZED SVGD: TOWARDS AN AUTOMATIC NEURAL SAMPLER
SVGD and other particle-based methods become inefficient when we need to repeatedly infer a large number of different target distributions for multiple tasks, including online learning or inner loops of other algorithms, because they cannot improve based on experience from past tasks, and may require a large memory to store a large number of particles. We propose to "amortize SVGD" by training a neural network $f(\eta;\xi)$ to mimic the SVGD dynamics, yielding a solution to Problem 1.

One straightforward way to achieve this is to run SVGD to convergence and train $f(\eta;\xi)$ to fit the SVGD results. This, however, requires running many epochs of fully converged SVGD and can be slow in practice. We instead propose an incremental approach in which $\eta$ is iteratively adjusted so that the network outputs $x = f(\eta;\xi)$ change along the Stein variational gradient direction in (7), in order to decrease the KL divergence between the target and approximating distributions.

To be specific, denote by $\eta^t$ the estimated parameter at the $t$-th iteration of our method; each iteration of our method draws a batch of random inputs $\{\xi_i\}_{i=1}^m$ and calculates their corresponding outputs $x_i = f(\eta^t;\xi_i)$; here $m$ is a mini-batch size (e.g., $m = 100$). The Stein variational gradient $\Delta x_i$ in (7) would then ensure that $x_i' = x_i + \epsilon\,\Delta x_i$ forms a better approximation of the target distribution $p$. Therefore, we should adjust $\eta$ to make the network outputs match $\{x_i'\}$, that is, we want to update $\eta$ by
$$\eta^{t+1} \leftarrow \arg\min_\eta \sum_{i=1}^m \|f(\eta;\xi_i) - x_i'\|_2^2, \quad \text{where } x_i' = x_i + \epsilon\,\Delta x_i. \qquad (8)$$
See Algorithm 1 for a summary of this procedure.

Algorithm 1: Amortized SVGD for Problem 1
  Set batch size $m$, step-size scheme $\{\epsilon_t\}$ and kernel $k(x, x')$. Initialize $\eta^0$.
  for iteration $t$ do
    Draw random $\{\xi_i\}_{i=1}^m$, calculate $x_i = f(\eta^t;\xi_i)$, and the Stein variational gradient $\Delta x_i$ in (7).
    Update the parameter $\eta$ using (8), (9) or (10).
  end for

If we assume $\epsilon$ is very small, then (8) reduces to a least-squares optimization. To see this, note that $f(\eta;\xi_i) \approx f(\eta^t;\xi_i) + \partial_\eta f(\eta^t;\xi_i)\,(\eta - \eta^t)$ by Taylor expansion. Since $x_i = f(\eta^t;\xi_i)$, we have
$$\|f(\eta;\xi_i) - x_i'\|_2^2 \approx \|\partial_\eta f(\eta^t;\xi_i)\,(\eta - \eta^t) - \epsilon\,\Delta x_i\|_2^2.$$
As a result, (8) reduces to the following least-squares optimization:
$$\eta^{t+1} \leftarrow \eta^t + \epsilon\,\Delta\eta^t, \quad \text{where } \Delta\eta^t = \arg\min_{\Delta\eta} \sum_{i=1}^m \|\partial_\eta f(\eta^t;\xi_i)\,\Delta\eta - \Delta x_i\|_2^2. \qquad (9)$$
Update (9) can still be computationally expensive because of the matrix inversion. We can derive a further approximation by performing only one step of gradient descent on (8) (or (9)), which gives
$$\eta^{t+1} \leftarrow \eta^t + \epsilon \sum_{i=1}^m \partial_\eta f(\eta^t;\xi_i)\,\Delta x_i. \qquad (10)$$
Although update (10) is derived as an approximation of (8)-(9), it is computationally faster, and we find it works very effectively in practice; this is because when $\epsilon$ is small, one step of gradient update can be sufficiently close to the optimum.

Update (10) also has a simple and intuitive form: it can be thought of as a "chain rule" that back-propagates the Stein variational gradient to the network parameter $\eta$. This can be justified by considering the special case when we use only a single particle ($n = 1$), in which case $\Delta x_i$ in (7) reduces to the typical gradient $\nabla_x \log p(x_i)$ of $\log p(x)$, and update (10) reduces to the typical gradient ascent for maximizing
$$\mathbb{E}_\xi[\log p(f(\eta;\xi))],$$
in which case $f(\eta;\xi)$ is trained to maximize $\log p(x)$ (that is, learning to optimize), instead of learning to draw samples from $p$, for which it is crucial to use the Stein variational gradient $\Delta x_i$ to diversify the network outputs.

Update (10) also has a close connection with typical variational inference with the reparameterization trick (Kingma & Welling, 2013). Let $q_\eta(x)$ be the density function of $x = f(\eta;\xi)$, $\xi \sim q_0$. Using the reparameterization trick, the gradient of $\mathrm{KL}(q_\eta \,\|\, p)$ w.r.t. $\eta$ can be shown to be
$$\nabla_\eta \mathrm{KL}(q_\eta \,\|\, p) = -\mathbb{E}_{\xi\sim q_0}\big[\partial_\eta f(\eta;\xi)\,(\nabla_x \log p(x) - \nabla_x \log q_\eta(x))\big].$$
With $\{\xi_i\}$ drawn i.i.d. from $q_0$ and $x_i = f(\eta;\xi_i)$ for all $i$, standard stochastic gradient descent for minimizing the KL divergence is
$$\eta^{t+1} \leftarrow \eta^t + \epsilon \sum_i \partial_\eta f(\eta^t;\xi_i)\,\tilde{\Delta} x_i, \quad \text{where } \tilde{\Delta} x_i = \nabla_x \log p(x_i) - \nabla_x \log q_\eta(x_i). \qquad (11)$$
This is similar to (10), but replaces the Stein gradient $\Delta x_i$ defined in (7) with $\tilde{\Delta} x_i$. The advantage of using $\Delta x_i$ is that it does not require explicitly calculating $q_\eta$, and hence admits a solution to Problem 1 in which $q_\eta$ is not computable for a complex network $f(\eta;\xi)$ and unknown input distribution $q_0$. Further insight can be obtained by noting that
$$\Delta x_i \approx \mathbb{E}_{x\sim q_\eta}\big[\nabla_x \log p(x)\,k(x, x_i) + \nabla_x k(x, x_i)\big] = \mathbb{E}_{x\sim q_\eta}\big[(\nabla_x \log p(x) - \nabla_x \log q_\eta(x))\,k(x, x_i)\big] = \mathbb{E}_{x\sim q_\eta}\big[\tilde{\Delta}x\;k(x, x_i)\big], \qquad (12)$$
where the second equality is obtained by using Stein's identity (5). Therefore, $\Delta x_i$ can be treated as a kernel-smoothed version of $\tilde{\Delta} x_i$.
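In a deep learning framework, update (10) amounts to treating $\Delta x_i$ as a constant and back-propagating it through the sampler. The sketch below assumes the `Sampler` module from the earlier sketch and a function `svgd_phi(x, grad_log_p)` returning the $\Delta x_i$ of Eq. (7) for a batch of samples (a torch analogue of `svgd_step` above); both are stand-ins, not the paper's released code.

```python
# A sketch of one amortized-SVGD step, i.e. update (10): the Stein variational
# gradient of the samples is back-propagated into the sampler parameters eta.
import torch

def amortized_svgd_step(sampler, optimizer, grad_log_p, svgd_phi, m=100, xi_dim=10):
    xi = torch.rand(m, xi_dim) * 2 - 1           # xi ~ q0 = Uniform([-1, 1])
    x = sampler(xi)                              # x_i = f(eta; xi_i)
    delta_x = svgd_phi(x.detach(), grad_log_p)   # Eq. (7), treated as a constant
    # The gradient of -sum_i <x_i, Delta x_i> w.r.t. eta is
    # -sum_i (df/deta)^T Delta x_i, so one optimizer step follows update (10).
    optimizer.zero_grad()
    (-(x * delta_x).sum()).backward()
    optimizer.step()
```

Replacing `delta_x` with the plain score `grad_log_p(x)` recovers the "learning to optimize" special case discussed above; the kernel term in $\Delta x_i$ is what keeps the amortized sampler from collapsing onto a mode.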
4 AMORTIZED MLE FOR GENERATIVE ADVERSARIAL TRAINING
Our method allows us to design efficient approximate sampling methods adaptively and automatically, and enables a host of novel applications. In this paper, we apply it in an amortized MLE method for training deep generative models.

Maximum likelihood estimation (MLE) provides a fundamental approach for learning probabilistic models from data, but can be computationally prohibitive for distributions for which drawing samples or computing the likelihood is intractable due to the normalization constant. Traditional methods such as MCMC-MLE use hand-designed methods (e.g., MCMC) to approximate the intractable likelihood function, but do not work efficiently in practice. We propose to adaptively train a generative neural network to draw samples from the distribution during MLE training, which not only provides a computational advantage, but also allows us to generate realistic-looking images competitive with, or better than, state-of-the-art generative adversarial networks (GAN) (Goodfellow et al., 2014; Radford et al., 2015) (see Figures 1-5).

To be specific, denote by $\{x_{i,\mathrm{obs}}\}$ a set of observed data. We consider the maximum likelihood training of energy-based models of the form
$$p(x|\theta) = \exp(-\phi(x;\theta) - \Phi(\theta)), \qquad \Phi(\theta) = \log\int \exp(-\phi(x;\theta))\,dx,$$
where $\phi(x;\theta)$ is an energy function for $x$ indexed by the parameter $\theta$, and $\Phi(\theta)$ is the log-normalization constant. The log-likelihood function of $\theta$ is
$$L(\theta) = \frac{1}{n}\sum_{i=1}^n \log p(x_{i,\mathrm{obs}}|\theta),$$
whose gradient is
$$\nabla_\theta L(\theta) = -\hat{\mathbb{E}}_{\mathrm{obs}}[\partial_\theta \phi(x;\theta)] + \mathbb{E}_\theta[\partial_\theta \phi(x;\theta)],$$
where $\hat{\mathbb{E}}_{\mathrm{obs}}[\cdot]$ and $\mathbb{E}_\theta[\cdot]$ denote the empirical average on the observed data $\{x_{i,\mathrm{obs}}\}$ and the expectation under the model $p(x|\theta)$, respectively. The key computational difficulty is to approximate the model expectation $\mathbb{E}_\theta[\cdot]$. To address this problem, we use a generative neural network $x = f(\eta;\xi)$, trained by Algorithm 1, to approximately sample from $p(x|\theta)$, yielding a gradient update for $\theta$ of the form
$$\theta \leftarrow \theta + \epsilon\,\hat\nabla_\theta L(\theta), \qquad \hat\nabla_\theta L(\theta) = -\hat{\mathbb{E}}_{\mathrm{obs}}[\partial_\theta\phi(x;\theta)] + \hat{\mathbb{E}}_\eta[\partial_\theta\phi(x;\theta)], \qquad (13)$$
where $\hat{\mathbb{E}}_\eta$ denotes the empirical average on $\{x_i\}$ with $x_i = f(\eta;\xi_i)$, $\{\xi_i\}\sim q_0$. As $\theta$ is updated by gradient ascent, $\eta$ is successively updated via Algorithm 1 to follow $p(x|\theta)$. See Algorithm 2.

Algorithm 2: Amortized MLE as Generative Adversarial Learning
  Goal: MLE training for the energy model $p(x|\theta) = \exp(-\phi(x;\theta) - \Phi(\theta))$.
  Initialize $\eta$ and $\theta$.
  for iteration $t$ do
    Updating $\eta$: draw $\xi_i \sim q_0$, $x_i = f(\eta;\xi_i)$; update $\eta$ using (8), (9) or (10) with $p(x) = p(x|\theta)$. Repeat several times when needed.
    Updating $\theta$: draw a mini-batch of observed data $\{x_{i,\mathrm{obs}}\}$ and simulated data $x_i = f(\eta;\xi_i)$; update $\theta$ by (13).
  end for

We call our method SteinGAN because it can be intuitively interpreted as an adversarial game between the generative network $f(\eta;\xi)$ and the energy model $p(x|\theta)$, which serves as a discriminator: the MLE gradient update of $p(x|\theta)$ effectively decreases the energy of the training data and increases the energy of the simulated data from $f(\eta;\xi)$, while the SVGD update of $f(\eta;\xi)$ decreases the energy of the simulated data to fit better with $p(x|\theta)$. Compared with traditional methods based on MCMC-MLE or contrastive divergence, we amortize the sampler as we train, which gives much faster speed and simultaneously provides a high-quality generative neural network that can generate realistic-looking images; see Kim & Bengio (2016) for a similar idea and discussion.

5 EMPIRICAL RESULTS
We evaluated our SteinGAN on four datasets: MNIST, CIFAR-10, CelebA (Liu et al., 2015), and Large-scale Scene Understanding (LSUN) (Yu et al., 2015), on which we find our method tends to generate realistic-looking images competitive with, and sometimes better than, DCGAN (Radford et al., 2015) (see Figure 2 - Figure 3). Our code is available at https://github.com/DartML/SteinGAN.

Model Setup. In order to generate realistic-looking images, we define our energy model based on an autoencoder:
$$p(x|\theta) \propto \exp\big(-\|x - D(E(x;\theta);\theta)\|\big), \qquad (14)$$
where $x$ denotes the image, and $E(\cdot;\theta)$ and $D(\cdot;\theta)$ denote the encoder and decoder, respectively. This choice is motivated by energy-based GAN (Zhao et al., 2016), in which the autoencoder loss is used as a discriminator, but without a probabilistic interpretation. We assume $f(\eta;\xi)$ to be a neural network whose input $\xi$ is a 100-dimensional random vector drawn from Uniform([-1, 1]).
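Algorithm 2 alternates between the two updates above; the following compact PyTorch sketch shows one such alternation. The modules `generator`, `encoder`, and `decoder` are hypothetical stand-ins, `svgd_phi(x, score)` is the Eq. (7) direction assumed from the earlier sketches, and `gamma` anticipates the discounted "stabilized" update (16) described below. The learning-rate balancing between the two players, also described below, is omitted here.

```python
# A compact sketch of Algorithm 2 (SteinGAN); not the paper's released code.
import torch

def energy(x, encoder, decoder):
    # phi(x; theta) = ||x - D(E(x))||, the autoencoder energy of Eq. (14)
    return (x - decoder(encoder(x))).flatten(1).norm(dim=1)

def score(x, encoder, decoder):
    # grad_x log p(x | theta) = -grad_x phi(x; theta), obtained via autograd
    x = x.detach().requires_grad_(True)
    return -torch.autograd.grad(energy(x, encoder, decoder).sum(), x)[0]

def steingan_step(generator, encoder, decoder, g_opt, e_opt, x_obs,
                  svgd_phi, m=64, xi_dim=100, gamma=0.7):
    # --- update eta: amortized SVGD toward the current p(x | theta) ---
    xi = torch.rand(m, xi_dim) * 2 - 1
    x = generator(xi)
    delta_x = svgd_phi(x.detach(), lambda z: score(z, encoder, decoder)).detach()
    g_opt.zero_grad()
    (-(x * delta_x).sum()).backward()   # chain-rule update (10)
    g_opt.step()

    # --- update theta: discounted MLE gradient, Eqs. (13)/(16) ---
    x_sim = generator(xi).detach()
    e_opt.zero_grad()
    d_loss = energy(x_obs, encoder, decoder).mean() \
             - (1 - gamma) * energy(x_sim, encoder, decoder).mean()
    d_loss.backward()                   # lowers data energy, raises sample energy
    e_opt.step()
```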
The positive definite kernel in SVGD is defined as an RBF kernel on the hidden representation obtained by the autoencoder in (14), that is,
$$k(x, x') = \exp\Big(-\frac{1}{h^2}\,\|E(x;\theta) - E(x';\theta)\|^2\Big).$$
As discussed in Section 3, the kernel provides a repulsive force that produces the amount of variability required for generating samples from $p(x)$. This is similar to the heuristic repelling regularizer in Zhao et al. (2016) and the batch-normalization-based regularizer in Kim & Bengio (2016), but is derived in a more principled way. We take the bandwidth to be $h = 0.5 \times \mathrm{med}$, where med is the median of the pairwise distances between $E(x)$ on the images simulated by $f(\eta;\xi)$. This makes the kernel change adaptively based on both $\theta$ (through $E(x;\theta)$) and $\eta$ (through the bandwidth $h$); a sketch of this adaptive kernel appears below.
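Concretely, the adaptive kernel can be written as follows; `encoder` is the same hypothetical module as in the sketch above, and for simplicity the median is taken over the full pairwise-distance matrix (including its zero diagonal), a small deviation from taking it over distinct pairs.

```python
# A sketch of the adaptive kernel used in SteinGAN: an RBF kernel on the
# autoencoder code E(x; theta), with median-heuristic bandwidth h = 0.5 * med
# computed on the current batch of simulated images.
import torch

def autoencoder_kernel(x, x2, encoder):
    z, z2 = encoder(x).flatten(1), encoder(x2).flatten(1)
    sq_dists = torch.cdist(z, z2) ** 2        # ||E(x) - E(x')||^2
    with torch.no_grad():
        med = torch.cdist(z, z).median()      # median pairwise distance on the batch
        h = 0.5 * med
    return torch.exp(-sq_dists / h ** 2)      # k(x, x') = exp(-||.||^2 / h^2)
```

Because the bandwidth is recomputed from the current simulated batch, the kernel tracks both players as training proceeds, rather than being fixed in advance.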
Some datasets include both images $x$ and their associated discrete labels $y$. In these cases, we train a joint energy model on $(x, y)$ to capture both the inner structure of the images and their predictive relation with the label, allowing us to simulate images with control over which category they belong to. Our joint energy model is defined to be
$$p(x, y|\theta) \propto \exp\big\{-\|x - D(E(x;\theta);\theta)\| - \max[m,\; \sigma(y, E(x;\theta))]\big\}, \qquad (15)$$
where $\sigma(\cdot,\cdot)$ is the cross-entropy loss function of a fully connected output layer and $m$ is a margin parameter. In this case, our neural sampler first draws a label $y$ randomly according to the empirical counts in the dataset, and then passes $y$ into a neural network together with a $100 \times 1$ random vector $\xi$ to generate the image $x$. This allows us to generate images for particular categories by controlling the value of the input $y$.

Stabilization. In practice, we find it useful to modify (13) to be
$$\Delta\theta \propto -\hat{\mathbb{E}}_{\mathrm{obs}}[\nabla_\theta \phi(x;\theta)] + (1-\gamma)\,\hat{\mathbb{E}}_\eta[\nabla_\theta \phi(x;\theta)], \qquad (16)$$
where $\gamma$ is a discount factor (which we take to be $\gamma = 0.7$). This is equivalent to maximizing a regularized likelihood,
$$\max_\theta \{\log p(x|\theta) + \gamma\,\Phi(\theta)\},$$
where $\Phi(\theta)$ is the log-partition function; note that $\exp(\gamma\,\Phi(\theta))$ is a conjugate prior of $p(x|\theta)$.

We initialize the weights of both the generator and the discriminator from the Gaussian distribution $\mathcal{N}(0, 0.02)$, and train them using Adam (Kingma & Ba, 2014) with a learning rate of 0.001 for the generator and 0.0001 for the energy model (the discriminator). In order to keep the generator and discriminator approximately aligned during training, we speed up the MLE update (16) of the discriminator (by increasing its learning rate to 0.0005) when the energy of the real data batch is larger than the energy of the simulated images, and slow it down (by freezing the MLE update of $\theta$ in (16)) if the magnitude of the energy difference between the real and simulated images goes above a threshold of 0.5. We used the bag of architecture guidelines for stable training suggested in DCGAN (Radford et al., 2015).

Discussion. The MNIST dataset has a training set of 60,000 examples. Both DCGAN and our model produce high-quality images, visually indistinguishable from real images; see Figure 1. CIFAR-10 is very diverse, with only 50,000 training examples. Figure 2 shows examples of images simulated by DCGAN and SteinGAN conditional on each category, which look equally good visually. We also provide quantitative evaluation using the recently proposed inception score (Salimans et al., 2016), as well as the classification accuracy when training a ResNet using 50,000 simulated images as the training set, evaluated on a separate held-out testing set never seen by the GAN models. Besides DCGAN and SteinGAN, we also evaluate another simple baseline obtained by subsampling 500 real images from the training set and duplicating them 100 times. We observe that these scores capture rather different perspectives of image generation: the inception score favors images that look realistic individually and have uniformly distributed labels; as a result, the inception score of the duplicated 500 images is almost as high as that of the real training set. We find that the inception score of SteinGAN is comparable to, or slightly lower than, that of DCGAN. On the other hand, the classification accuracy measures the amount of information captured in the simulated image sets; we find that SteinGAN achieves the highest classification accuracy, suggesting that it captures more information from the training set.

Figures 3 and 4 visualize the results on CelebA (with more than 200k face images) and LSUN (with nearly 3M bedroom images), respectively. We cropped and resized the images of both datasets to 64 x 64.

Figure 1: MNIST images generated by DCGAN and our SteinGAN. We use the joint model in (15) to allow us to generate images for each digit. We set m = 0.2.

Figure 2: Results on CIFAR-10. "500 Duplicate" denotes 500 images randomly subsampled from the training set, each duplicated 100 times. Upper: images simulated by DCGAN and SteinGAN (based on the joint model (15)) conditional on each category. Middle: inception scores for samples generated by various methods (all with 50,000 images) on inception models trained on ImageNet and CIFAR-10, respectively. Lower: testing accuracy on the real testing set when using 50,000 simulated images to train ResNets for classification. SteinGAN achieves higher testing accuracy than DCGAN. We set m = 1 and gamma = 0.8.

                               Real Training Set   500 Duplicate   DCGAN     SteinGAN
Inception Score (ImageNet)     11.237              11.100          6.581     6.351
Inception Score (CIFAR-10)     9.848               9.807           7.368     7.428
Testing Accuracy               92.58 %             44.96 %         44.78 %   63.81 %

6 CONCLUSION
We propose a new method to train neural samplers for given distributions, together with a new SteinGAN method for generative adversarial training. Future directions involve more applications of, and theoretical understanding of, training neural samplers.

Figure 3: Results on CelebA. Upper: images generated by DCGAN and our SteinGAN. Lower: images generated by SteinGAN when performing a random walk $\xi \leftarrow \xi + 0.01 \times \mathrm{Uniform}([-1,1])$ on the random input $\xi$; we can see that a man with glasses and black hair gradually changes into a woman with blonde hair. See Figure 5 for more examples.

Figure 4: Images generated by DCGAN and our SteinGAN on LSUN.

| By-ehDEEg | Insufficient empirical evaluation. | 4: Ok but not good enough - rejection | This paper proposes an amortized version of the Stein variational gradient descent (SVGD) method in which "a neural network is trained to mimic the SVGD dynamics". It applies the method to generative adversarial training to yield a training procedure where the discriminator is interpreted as an energy-based probabilistic model.
One criticism I have of the presentation is that a lot of time and energy is spent setting the table for a method which is claimed to be widely applicable, and the scope of the empirical evaluation is narrowed down to a single specific setting. In my view, either the paper falls short of its goal of showing how widely applicable the proposed method is, or it spends too much time setting the table for SteinGAN and not enough time evaluating it.
The consequence of this is that the empirical results are insufficient to justify the approach proposed by the paper. As another reviewer pointed out, DCGAN is becoming outdated as a benchmark for comparison.
Qualitatively, SteinGAN samples don't look significantly better than DCGAN samples, except for the CelebA dataset. In that particular case, the DCGAN samples don't appear to be the ones presented in the original paper; where do they come from?
Quantitatively, DCGAN beats SteinGAN by a small margin for the ImageNet Inception Score and SteinGAN beats DCGAN by an even smaller margin for the CIFAR10 Inception Score. Also, in my opinion, the "testing accuracy" score is not a convincing evaluation metric: while it is true that it measures the amount of information captured in the simulated image sets, it is only sensitive to information useful for the discrimination task, not for the more general modeling task. For instance, this score is likely completely blind to information present in the background of the image.
Because of the reasons outlined above, I don't think the paper is ready for publication at ICLR. | 3: The reviewer is fairly confident that the evaluation is correct |
H1oRQDqlg | ICLR.cc/2017/conference | 2017 | Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning | ["Dilin Wang", "Qiang Liu"] | r193JUZVl | Decent results, but unclear whether this is due to the proposed Stein variational gradient | 4: Ok but not good enough - rejection | This paper considers the energy-based model interpretation of GAN, where the discriminator is an unnormalized model for the likelihood of a generative model p(x|theta) and the generator is a directed model that approximates this distribution. The generator is used to draw approximate negative phase samples that are used in stochastic maximum likelihood / contrastive divergence learning of the EBM / discriminator.
The main idea in the paper is to fit the generator by following the Stein variational gradient. In practice this gradient consists of the usual gradient provided by the discriminator with an added term that provides a repulsive force between the sampled data points to increase sample diversity.
The idea of using a kernel to push apart the sampled points is interesting, and will work in low dimensions, but it is hard to see how it can work in full scale images. For high dimensional samples x, the proposed kernel is unlikely to provide a useful distance measure between points. There are no convincing experiments in the paper that show otherwise. Specifically:
- There is no experiment that compares standard GAN and GAN + repulsion, using the same architecture. (please address this in the rebuttal)
- If the Stein variational idea is taken literally, the right thing to do would be to fully optimize the generator at every step, and then take a single optimization step on the discriminator. Instead, each is updated in turn, and the learning rates of both steps are adjusted to keep the two "in line".
- The kernel used to fit the generator is defined in the auto-encoder space of the discriminator, and thus depends on the discriminator parameters. The objective that is used to fit the generator thus changes at every step, and the procedure can no longer be interpreted as stochastic gradient descent with respect to any single well defined objective.
The authors obtain good results: The generated images clearly look better than those generated by DCGAN. However, their approach has a number of changes compared to DCGAN, so it is not clear where the improvement comes from. In addition, by now DCGAN is no longer a very strong baseline, as various other techniques have been proposed.
Note: The use of phi for both the "particle gradient direction" and energy function is confusing | 3: The reviewer is fairly confident that the evaluation is correct |
H1oRQDqlg | ICLR.cc/2017/conference | 2017 | Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning | ["Dilin Wang", "Qiang Liu"] | We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient that maximumly decreases the KL divergence with the target distribution. Our method works for any target distribution specified by their unnormalized density function, and can train any black-box architectures that are differentiable in terms of the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy model, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results. | ["Unsupervised Learning"] | ABSTRACTWe propose a simple algorithm to train stochastic neural networks to draw sam-ples from given target distributions for probabilistic inference. Our method isbased on iteratively adjusting the neural network parameters so that the outputchanges along a Stein variational gradient (Liu & Wang, 2016) that maximumlydecreases the KL divergence with the target distribution. Our method works forany target distribution specified by their unnormalized density function, and cantrain any black-box architectures that are differentiable in terms of the parame-ters we want to adapt. As an application of our method, we propose an amor-tized MLE algorithm for training deep energy model, where a neural sampler isadaptively trained to approximate the likelihood function. Our method mimicsan adversarial game between the deep energy model and the neural sampler, andobtains realistic-looking images competitive with the state-of-the-art results.1 I NTRODUCTIONModern machine learning increasingly relies on highly complex probabilistic models to reason aboutuncertainty. A key computational challenge is to develop efficient inference techniques to approx-imate, or draw samples from complex distributions. Currently, most inference methods, includingMCMC and variational inference, are hand-designed by researchers or domain experts. This makesit difficult to fully optimize the choice of different methods and their parameters, and exploit thestructures in the problems of interest in an automatic way. The hand-designed algorithm can also beinefficient when it requires to make fast inference repeatedly on a large number of different distri-butions with similar structures. This happens, for example, when we need to reason about a numberof observed datasets in settings like online learning, or need fast inference as inner loops for otheralgorithms such as maximum likelihood training. Therefore, it is highly desirable to develop moreintelligent probabilistic inference systems that can adaptively improve its own performance to fullythe optimize computational efficiency, and generalize to new tasks with similar structures.Specifically, denote by p(x)a probability density of interest specified up to the normalization con-stant, which we want to draw sample from, or marginalize to estimate its normalization constant.We want to study the following problem:Problem 1. 
Given a distribution with density p(x) and a function f(η; ξ) with parameter η and random input ξ, for which we only have access to draws of the random input ξ (without knowing its true distribution q_0), and to the output values of f(η; ξ) and its derivative ∂_η f(η; ξ) given η and ξ, we want to find an optimal parameter η so that the density of the random output variable x = f(η; ξ) with ξ ∼ q_0 closely matches the target density p(x).

Because we make no assumption on the structure of f(η; ξ) and the distribution of the random input ξ, we cannot directly calculate the actual distribution of the output random variable x = f(η; ξ); this makes it difficult to solve Problem 1 using traditional variational inference (VI) methods. Recall that traditional VI approximates p(x) using simple proposal distributions q_η(x) indexed by a parameter η, and finds the optimal η by minimizing the KL divergence KL(q_η ‖ p) = E_{q_η}[log(q_η / p)], which requires calculating the density q_η(x) or its derivative; this is not computable under our assumptions (even when Monte Carlo gradient estimation and the reparametrization trick (Kingma & Welling, 2013) are applied).

In fact, it is this requirement of calculating q_η(x) that has been the major constraint in the design of state-of-the-art variational inference methods with rich approximation families; the recent successful algorithms (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015, to name only a few) have to handcraft special variational families to ensure the computational tractability of q_η(x) while simultaneously obtaining high approximation accuracy, which requires substantial mathematical insight and research effort. Methods that do not require explicitly calculating q_η(x) can significantly simplify the design and application of VI methods, allowing practical users to focus more on choosing proposals that work best with their specific tasks. We will use the term wild variational inference to refer to new variants of variational methods that require no tractability of q_η(x), to distinguish them from black-box variational inference (Ranganath et al., 2014), which refers to methods that work for generic target distributions p(x) without significant model-by-model consideration (but still require calculating the proposal density q_η(x)).

A similar problem also appears in importance sampling (IS), where one must calculate the IS proposal density q(x) in order to compute the importance weight w(x) = p(x)/q(x). However, there exist methods that use no explicit information about q(x) and which, seemingly counter-intuitively, give better asymptotic variance or convergence rates than the typical IS that uses the proposal information (e.g., Liu & Lee, 2016; Briol et al., 2015; Henmi et al., 2007; Delyon & Portier, 2014). Discussions of this phenomenon date back to O'Hagan (1987), who argued that "Monte Carlo (that uses the proposal information) is fundamentally unsound" for violating the Likelihood Principle, and developed Bayesian Monte Carlo (O'Hagan, 1991) as an example that uses no information on q(x), yet gives a better convergence rate than the typical O(n^{-1/2}) Monte Carlo rate (Briol et al., 2015).
De-spite the substantial difference between IS and VI, these results intuitively suggest the possibility ofdeveloping efficient variational inference without calculating q(x)explicitly.In this work, we propose a simple algorithm for Problem 1 by iteratively adjusting the network pa-rameterto make its output random variable changes along a Stein variational gradient direction(SVGD) (Liu & Wang, 2016) that optimally decreases its KL divergence with the target distribu-tion. Critically, the SVGD gradient includes a repulsive term to ensure that the generated sampleshave the right amount of variability that matches p(x):In this way, we “amortize SVGD” using aneural network, which makes it possible for our method to adaptively improve its own efficiency byleveraging fast experience, especially in cases when it needs to perform fast inference repeatedly ona large number of similar tasks. As an application, we use our method to amortize the MLE trainingof deep energy models, where a neural sampler is adaptively trained to approximate the likelihoodfunction. Our method, which we call SteinGAN , mimics an adversarial game between the energymodel and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results produced by generative adversarial networks (GAN) (Goodfellow et al., 2014; Radfordet al., 2015).Related Work The idea of amortized inference (Gershman & Goodman, 2014) has been recentlyapplied in various domains of probabilistic reasoning, including both amortized variational infer-ence (e.g., Kingma & Welling, 2013; Rezende & Mohamed, 2015a), and data-driven proposals for(sequential) Monte Carlo methods (e.g., Paige & Wood, 2016), to name only a few. Most of thesemethods, however, require to explicitly calculate q(x)(or its gradient). One exception is a veryrecent paper (Ranganath et al., 2016) that avoids calculating q(x)using an idea related to Steindiscrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Oates et al., 2014; Chwialkowski et al.,2016). There is also a raising interest recently on a similar problem of “learning to optimize” (e.g.,Andrychowicz et al., 2016; Daniel et al., 2016; Li & Malik, 2016), which is technically easier thanthe more general problem of “learning to sample”. In fact, we show that our algorithm reduces to“learning to optimize” when only one particle is used in SVGD.Generative adversarial network (GAN) and its variants have recently gained remarkable successon generating realistic-looking images (Goodfellow et al., 2014; Salimans et al., 2016; Radfordet al., 2015; Li et al., 2015; Dziugaite et al., 2015; Nowozin et al., 2016). All these methods areset up to train latent variable models (the generator) under the assistant of the discriminator. OurSteinGAN instead performs traditional MLE training for a deep energy model, with the help ofa neural sampler that learns to draw samples from the energy model to approximate the likelihood2Under review as a conference paper at ICLR 2017function; this admits an adversarial interpretation: we can view the neural sampler as a generator thatattends to fool the deep energy model, which in turn serves as a discriminator that distinguishes thereal samples and the simulated samples given by the neural sampler. 
This idea of training MLE with neural samplers was first discussed by Kim & Bengio (2016); one of the key differences is that the neural sampler in Kim & Bengio (2016) is trained with the help of a heuristic diversity regularizer based on batch normalization, while SVGD enforces diversity in a more principled way. Another method by Zhao et al. (2016) also trains an energy score to distinguish real and simulated samples, but within a non-probabilistic framework (see Section 5 for more discussion). Other, more traditional approaches for training energy-based models (e.g., Ngiam et al., 2011; Xie et al., 2016) are often based on variants of MCMC-MLE or contrastive divergence (Geyer, 1991; Hinton, 2002; Tieleman, 2008), and have difficulty generating realistic-looking images from scratch.

2 STEIN VARIATIONAL GRADIENT DESCENT (SVGD)

Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a general purpose Bayesian inference algorithm motivated by Stein's method (Stein, 1972; Barbour & Chen, 2005) and kernelized Stein discrepancy (Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2014). It uses an efficient deterministic gradient-based update to iteratively evolve a set of particles {x_i}_{i=1}^n to minimize the KL divergence with the target distribution. SVGD has a simple form that reduces to typical gradient descent for maximizing log p when only one particle is used (n = 1), and hence can easily be combined with the successful tricks of gradient optimization, including stochastic gradients, adaptive learning rates (such as Adagrad), and momentum.

To give a quick overview of the main idea of SVGD, let p(x) be a positive density function on R^d which we want to approximate with a set of particles {x_i}_{i=1}^n. SVGD initializes the particles by sampling from some simple distribution q_0, and updates the particles iteratively by

    x_i ← x_i + ε φ(x_i),  ∀ i = 1, …, n,    (1)

where ε is a step size, and φ(x) is a "particle gradient direction" chosen to maximally decrease the KL divergence between the distribution of particles and the target distribution, in the sense that

    φ* = arg max_{φ ∈ F} { −(d/dε) KL(q_{[εφ]} ‖ p) |_{ε=0} },    (2)

where q_{[εφ]} denotes the density of the updated particle x′ = x + ε φ(x) when the density of the original particle x is q, and F is the set of perturbation directions that we optimize over. We choose F to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) H^d = H × ··· × H, with each H associated with a positive definite kernel k(x, x′); note that H is dense in the space of continuous functions for universal kernels such as the Gaussian RBF kernel.

Critically, the gradient of the KL divergence in (2) equals a simple linear functional of φ, allowing us to obtain a closed-form solution for the optimal φ. Liu & Wang (2016) showed that

    −(d/dε) KL(q_{[εφ]} ‖ p) |_{ε=0} = E_{x∼q}[T_p φ(x)],    (3)
    with T_p φ(x) = ∇_x log p(x)^⊤ φ(x) + ∇_x · φ(x),    (4)

where T_p is considered as a linear operator acting on the function φ and is called the Stein operator, in connection with Stein's identity, which shows that the RHS of (3) equals zero if p = q:

    E_p[T_p φ] = E_p[∇_x log p^⊤ φ + ∇_x · φ] = 0.    (5)

This is a result of integration by parts, assuming that the value of p(x) φ(x) vanishes on the boundary of the integration domain.

Therefore, the optimization in (2) reduces to

    D(q ‖ p) := max_{φ ∈ H^d} { E_{x∼q}[T_p φ(x)]  s.t.  ‖φ‖_{H^d} ≤ 1 },    (6)

where D(q ‖ p) is the kernelized Stein discrepancy defined in Liu et al. (2016), which equals zero if and only if p = q under mild regularity conditions.
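To make Stein's identity (5) concrete, here is a minimal numerical check for a one-dimensional standard Gaussian target; the test function φ(x) = sin(x), the sample sizes, and the mismatched proposal are illustrative choices rather than anything from the paper.

```python
import numpy as np

# Check Stein's identity (5) for p = N(0, 1): for samples drawn from p,
# E[grad_log_p(x) * phi(x) + phi'(x)] should be ~0; for a mismatched
# distribution q it should generally be nonzero.
rng = np.random.default_rng(0)

grad_log_p = lambda x: -x          # d/dx log N(x; 0, 1) = -x
phi = np.sin                       # a smooth, bounded test function
phi_prime = np.cos

x_p = rng.normal(0.0, 1.0, 100_000)   # samples from p
x_q = rng.normal(1.0, 1.0, 100_000)   # samples from q != p

stein_p = np.mean(grad_log_p(x_p) * phi(x_p) + phi_prime(x_p))
stein_q = np.mean(grad_log_p(x_q) * phi(x_q) + phi_prime(x_q))
print(f"E_p[T_p phi] ~ {stein_p:.4f}  (close to 0)")
print(f"E_q[T_p phi] ~ {stein_q:.4f}  (clearly nonzero)")
```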
Importantly, the optimal solution of (6) yields a closed form:

    φ*(x′) ∝ E_{x∼q}[∇_x log p(x) k(x, x′) + ∇_x k(x, x′)].

By approximating the expectation under q with the empirical average of the current particles {x_i}_{i=1}^n, SVGD admits a simple form of update:

    x_i ← x_i + ε Δx_i,  ∀ i = 1, …, n,
    where Δx_i = Ê_{x ∈ {x_j}_{j=1}^n}[∇_x log p(x) k(x, x_i) + ∇_x k(x, x_i)],    (7)

and Ê_{x∼{x_j}}[f(x)] = Σ_j f(x_j)/n. The two terms in Δx_i play two different roles: the term with the gradient ∇_x log p(x) drives the particles toward the high-probability regions of p(x), while the term with ∇_x k(x, x_i) serves as a repulsive force that encourages diversity; to see this, consider a stationary kernel k(x, x′) = k(x − x′); then the second term reduces to Ê_x[∇_x k(x, x_i)] = −Ê_x[∇_{x_i} k(x, x_i)], which can be treated as the negative gradient for minimizing the average similarity Ê_x[k(x, x_i)] in terms of x_i. Overall, this particle update produces diverse points for distributional approximation and uncertainty assessment, and also has an interesting "momentum" effect in which the particles move collaboratively to escape local optima.

It is easy to see from (7) that Δx_i reduces to the typical gradient ∇_x log p(x_i) when there is only a single particle (n = 1), since ∇_x k(x, x_i) = 0 at x = x_i for stationary kernels; in this case SVGD reduces to standard gradient ascent for maximizing log p(x) (i.e., maximum a posteriori (MAP)).
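The empirical update (7) is straightforward to implement. The following is a minimal NumPy sketch of one SVGD step with a Gaussian RBF kernel; the median-based bandwidth heuristic, the step size, and the toy target are common illustrative choices, not the exact settings of the paper's experiments.

```python
import numpy as np

def svgd_step(x, grad_log_p, eps=0.1):
    """One SVGD update (7): x_i <- x_i + eps * dx_i, with an RBF kernel.

    x:          (n, d) array of particles
    grad_log_p: maps an (n, d) array of particles to their (n, d) scores
    """
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]             # diffs[j, i] = x_j - x_i
    sq_dists = np.sum(diffs ** 2, axis=-1)            # (n, n)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8    # median bandwidth heuristic
    k = np.exp(-sq_dists / h)                         # k[j, i] = k(x_j, x_i)
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i) = -(2/h) sum_j k[j,i] (x_j - x_i)
    grad_k = -(2.0 / h) * (k[:, :, None] * diffs).sum(axis=0)
    dx = (k.T @ grad_log_p(x) + grad_k) / n           # driving + repulsive terms
    return x + eps * dx

# Toy example: move particles from N(0, 1) toward p = N(2, 1).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(100, 1))
for _ in range(500):
    x = svgd_step(x, lambda x: -(x - 2.0))
print(x.mean(), x.std())   # should approach roughly 2.0 and 1.0
```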
3 AMORTIZED SVGD: TOWARDS AN AUTOMATIC NEURAL SAMPLER

SVGD and other particle-based methods become inefficient when we need to repeatedly infer a large number of different target distributions for multiple tasks, including online learning or inner loops of other algorithms, because they cannot improve based on experience from past tasks, and may require a large memory to store a large number of particles. We propose to "amortize SVGD" by training a neural network f(η; ξ) to mimic the SVGD dynamics, yielding a solution for Problem 1. One straightforward way to achieve this is to run SVGD to convergence and train f(η; ξ) to fit the SVGD results. This, however, requires running many epochs of fully converged SVGD and can be slow in practice. We instead propose an incremental approach in which η is iteratively adjusted so that the network output x = f(η; ξ) changes along the Stein variational gradient direction in (7), in order to decrease the KL divergence between the target and the approximating distribution.

To be specific, denote by η^t the estimated parameter at the t-th iteration of our method; each iteration of our method draws a batch of random inputs {ξ_i}_{i=1}^m and calculates their corresponding outputs x_i = f(η^t; ξ_i); here m is a mini-batch size (e.g., m = 100). The Stein variational gradient Δx_i in (7) would then ensure that x′_i = x_i + ε Δx_i forms a better approximation of the target distribution p. Therefore, we should adjust η to make the network output match {x′_i}, that is, we want to update η by

    η^{t+1} ← arg min_η Σ_{i=1}^m ‖f(η; ξ_i) − x′_i‖²_2,  where x′_i = x_i + ε Δx_i.    (8)

See Algorithm 1 for a summary of this procedure.

Algorithm 1 Amortized SVGD for Problem 1
  Set batch size m, step-size scheme {ε_t} and kernel k(x, x′). Initialize η^0.
  for iteration t do
      Draw random {ξ_i}_{i=1}^m, calculate x_i = f(η^t; ξ_i) and the Stein variational gradient Δx_i in (7).
      Update the parameter η using (8), (9) or (10).
  end for

If we assume ε is very small, then (8) reduces to a least squares optimization. To see this, note that f(η; ξ_i) ≈ f(η^t; ξ_i) + ∂_η f(η^t; ξ_i)(η − η^t) by Taylor expansion. Since x_i = f(η^t; ξ_i), we have

    ‖f(η; ξ_i) − x′_i‖²_2 ≈ ‖∂_η f(η^t; ξ_i)(η − η^t) − ε Δx_i‖²_2.

As a result, (8) reduces to the following least squares optimization:

    η^{t+1} ← η^t + Δη^t,  where Δη^t = arg min_δ Σ_{i=1}^m ‖∂_η f(η^t; ξ_i) δ − ε Δx_i‖²_2.    (9)

Update (9) can still be computationally expensive because of the matrix inversion. We can derive a further approximation by performing only one step of gradient descent of (8) (or (9)), which gives

    η^{t+1} ← η^t + ε Σ_{i=1}^m ∂_η f(η^t; ξ_i) Δx_i.    (10)

Although update (10) is derived as an approximation of (8)-(9), it is computationally faster and we find it works very effectively in practice; this is because when ε is small, one step of gradient update can be sufficiently close to the optimum.

Update (10) also has a simple and intuitive form: (10) can be thought of as a "chain rule" that back-propagates the Stein variational gradient to the network parameter η. This can be justified by considering the special case when we use only a single particle (n = 1), in which case Δx_i in (7) reduces to the typical gradient ∇_x log p(x_i) of log p(x), and update (10) reduces to the typical gradient ascent for maximizing

    E_ξ[log p(f(η; ξ))],

in which case f(η; ξ) is trained to maximize log p(x) (that is, learning to optimize), instead of learning to draw samples from p, for which it is crucial to use the Stein variational gradient Δx_i to diversify the network outputs.

Update (10) also has a close connection with typical variational inference with the reparameterization trick (Kingma & Welling, 2013). Let q_η(x) be the density function of x = f(η; ξ), ξ ∼ q_0. Using the reparameterization trick, the gradient of KL(q_η ‖ p) w.r.t. η can be shown to be

    ∇_η KL(q_η ‖ p) = −E_{ξ∼q_0}[∂_η f(η; ξ)(∇_x log p(x) − ∇_x log q_η(x))].

With {ξ_i} drawn i.i.d. from q_0 and x_i = f(η; ξ_i), ∀ i, the standard stochastic gradient descent for minimizing the KL divergence is

    η^{t+1} ← η^t + ε Σ_i ∂_η f(η^t; ξ_i) Δ̃x_i,  where Δ̃x_i = ∇_x log p(x_i) − ∇_x log q_η(x_i).    (11)

This is similar to (10), but replaces the Stein gradient Δx_i defined in (7) with Δ̃x_i. The advantage of using Δx_i is that it does not require explicitly calculating q_η, and hence admits a solution to Problem 1 in which q_η is not computable for a complex network f(η; ξ) and an unknown input distribution q_0. Further insight can be obtained by noting that

    Δx_i ≈ E_{x∼q_η}[∇_x log p(x) k(x, x_i) + ∇_x k(x, x_i)]
         = E_{x∼q_η}[(∇_x log p(x) − ∇_x log q_η(x)) k(x, x_i)]    (12)
         = E_{x∼q_η}[Δ̃x k(x, x_i)],

where (12) is obtained by using Stein's identity (5). Therefore, Δx_i can be treated as a kernel-smoothed version of Δ̃x_i.
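Update (10) amounts to back-propagating the (fixed) Stein variational gradient through the sampler. Below is a minimal PyTorch-style sketch of one such step, assuming the sampler f is an nn.Module and stein_grad computes Δx_i as in (7); both names, and the use of an optimizer in place of a raw step size, are illustrative assumptions.

```python
import torch

def amortized_svgd_step(f, eta_opt, xi, stein_grad):
    """One step of update (10): push the Stein gradient Delta x_i through
    x = f(eta; xi) into the sampler parameters eta.

    f:          sampler network (torch.nn.Module) with parameters eta
    eta_opt:    optimizer over f.parameters()
    xi:         (m, d_xi) batch of random inputs
    stein_grad: maps a batch of samples x to Delta x (treated as a constant)
    """
    x = f(xi)                          # x_i = f(eta; xi_i)
    with torch.no_grad():
        dx = stein_grad(x)             # Delta x_i from Eq. (7), no gradient
    # Minimizing -<x, dx> gives d(loss)/d(eta) = -sum_i d_eta f(eta; xi_i)^T dx_i,
    # so a descent step on this loss is exactly the ascent step of (10).
    loss = -(x * dx).sum()
    eta_opt.zero_grad()
    loss.backward()
    eta_opt.step()
```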
4 AMORTIZED MLE FOR GENERATIVE ADVERSARIAL TRAINING

Our method allows us to design efficient approximate sampling methods adaptively and automatically, and enables a host of novel applications. In this paper, we apply it in an amortized MLE method for training deep generative models.

Maximum likelihood estimation (MLE) provides a fundamental approach for learning probabilistic models from data, but can be computationally prohibitive for distributions for which drawing samples or computing the likelihood is intractable due to the normalization constant. Traditional methods such as MCMC-MLE use hand-designed methods (e.g., MCMC) to approximate the intractable likelihood function but do not work efficiently in practice. We propose to adaptively train a generative neural network to draw samples from the distribution during MLE training, which not only provides a computational advantage, but also allows us to generate realistic-looking images competitive with, or better than, those of state-of-the-art generative adversarial networks (GAN) (Goodfellow et al., 2014; Radford et al., 2015) (see Figures 1-5).

Algorithm 2 Amortized MLE as Generative Adversarial Learning
  Goal: MLE training for the energy model p(x|θ) = exp(φ(x; θ) − Φ(θ)).
  Initialize η and θ.
  for iteration t do
      Updating η: Draw ξ_i ∼ q_0, x_i = f(η; ξ_i); update η using (8), (9) or (10) with p(x) = p(x|θ). Repeat several times when needed.
      Updating θ: Draw a mini-batch of observed data {x_{i,obs}} and simulated data x_i = f(η; ξ_i); update θ by (13).
  end for

To be specific, denote by {x_{i,obs}} a set of observed data. We consider maximum likelihood training of energy-based models of the form

    p(x|θ) = exp(φ(x; θ) − Φ(θ)),  Φ(θ) = log ∫ exp(φ(x; θ)) dx,

where φ(x; θ) is an energy function for x indexed by the parameter θ and Φ(θ) is the log-normalization constant. The log-likelihood function of θ is

    L(θ) = (1/n) Σ_{i=1}^n log p(x_{i,obs}|θ),

whose gradient is

    ∇_θ L(θ) = Ê_obs[∂_θ φ(x; θ)] − E_θ[∂_θ φ(x; θ)],

where Ê_obs[·] and E_θ[·] denote the empirical average on the observed data {x_{i,obs}} and the expectation under the model p(x|θ), respectively. The key computational difficulty is to approximate the model expectation E_θ[·]. To address this problem, we use a generative neural network x = f(η; ξ) trained by Algorithm 1 to approximately sample from p(x|θ), yielding a gradient update for θ of the form

    θ ← θ + ε ∇̂_θ L(θ),  ∇̂_θ L(θ) = Ê_obs[∂_θ φ(x; θ)] − Ê_η[∂_θ φ(x; θ)],    (13)

where Ê_η denotes the empirical average on {x_i} with x_i = f(η; ξ_i), {ξ_i} ∼ q_0. As θ is updated by gradient ascent, η is successively updated via Algorithm 1 to follow p(x|θ). See Algorithm 2.

We call our method SteinGAN, because it can be intuitively interpreted as an adversarial game between the generative network f(η; ξ) and the energy model p(x|θ), which serves as a discriminator: the MLE gradient update of p(x|θ) effectively decreases the energy of the training data and increases the energy of the simulated data from f(η; ξ), while the SVGD update of f(η; ξ) decreases the energy of the simulated data to fit better with p(x|θ). Compared with the traditional methods based on MCMC-MLE or contrastive divergence, we amortize the sampler as we train θ, which gives much faster speed and simultaneously provides a high-quality generative neural network that can generate realistic-looking images; see Kim & Bengio (2016) for a similar idea and discussion.
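A hedged sketch of the outer loop of Algorithm 2 follows; phi_net, f_net, stein_grad, and all hyperparameters are illustrative placeholders, the data loader is assumed to yield image batches, and details such as the stabilization heuristic in (16) below are omitted.

```python
import torch

def steingan_epoch(phi_net, f_net, phi_opt, f_opt, data_loader,
                   stein_grad, n_eta_steps=1, xi_dim=100):
    """One epoch of Algorithm 2 (amortized MLE), as a minimal sketch.

    phi_net: energy model phi(x; theta); larger phi means more probable x
    f_net:   neural sampler x = f(eta; xi)
    """
    for x_obs in data_loader:
        m = x_obs.shape[0]
        # --- eta update: move simulated samples along the SVGD direction ---
        for _ in range(n_eta_steps):
            xi = torch.rand(m, xi_dim) * 2 - 1        # xi ~ Uniform([-1, 1])
            x_sim = f_net(xi)
            with torch.no_grad():
                dx = stein_grad(x_sim, phi_net)       # Delta x from Eq. (7)
            f_opt.zero_grad()
            (-(x_sim * dx).sum()).backward()          # update (10)
            f_opt.step()
        # --- theta update: approximate MLE gradient (13) ---
        xi = torch.rand(m, xi_dim) * 2 - 1
        x_sim = f_net(xi).detach()
        loss = -(phi_net(x_obs).mean() - phi_net(x_sim).mean())
        phi_opt.zero_grad()
        loss.backward()
        phi_opt.step()
```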
5 EMPIRICAL RESULTS

We evaluated our SteinGAN on four datasets, MNIST, CIFAR-10, CelebA (Liu et al., 2015), and Large-scale Scene Understanding (LSUN) (Yu et al., 2015), on which we find our method tends to generate realistic-looking images competitive with, and sometimes better than, those of DCGAN (Radford et al., 2015) (see Figure 2 - Figure 3). Our code is available at https://github.com/DartML/SteinGAN.

Model Setup. In order to generate realistic-looking images, we define our energy model based on an autoencoder:

    p(x|θ) ∝ exp(−‖x − D(E(x; θ); θ)‖),    (14)

where x denotes the image, and E(·; θ) and D(·; θ) denote the encoder and decoder of the autoencoder. This choice is motivated by energy-based GAN (Zhao et al., 2016), in which the autoencoder loss is used as a discriminator but without a probabilistic interpretation. We assume f(η; ξ) to be a neural network whose input ξ is a 100-dimensional random vector drawn from Uniform([−1, 1]).

The positive definite kernel in SVGD is defined by the RBF kernel on the hidden representation obtained by the autoencoder in (14), that is,

    k(x, x′) = exp(−(1/h²) ‖E(x; θ) − E(x′; θ)‖²).

As discussed in Section 3, the kernel provides a repulsive force to produce the amount of variability required for generating samples from p(x). This is similar to the heuristic repelling regularizer in Zhao et al. (2016) and the batch-normalization-based regularizer in Kim & Bengio (2016), but is derived in a more principled way. We take the bandwidth to be h = 0.5 × med, where med is the median of the pairwise distances between E(x) on the images simulated by f(η; ξ). This makes the kernel change adaptively based on both θ (through E(x; θ)) and η (through the bandwidth h).
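A minimal sketch of this adaptive kernel, assuming e_x holds the encoder features E(x; θ) of a batch of simulated images (PyTorch is used purely for illustration):

```python
import torch

def autoencoder_kernel(e_x, e_y=None):
    """RBF kernel on encoder features with the adaptive bandwidth
    h = 0.5 * (median pairwise distance among the e_x features)."""
    e_y = e_x if e_y is None else e_y
    sq = torch.cdist(e_x, e_y) ** 2            # squared feature distances
    med = torch.median(torch.cdist(e_x, e_x))  # median pairwise distance
    h = 0.5 * med + 1e-8
    return torch.exp(-sq / h ** 2)             # k(x, x') as in the text
```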
Some datasets include both images x and their associated discrete labels y. In these cases, we train a joint energy model on (x, y) to capture both the inner structure of the images and their predictive relation with the label, allowing us to simulate images with control over which category they belong to. Our joint energy model is defined as

    p(x, y|θ) ∝ exp{−‖x − D(E(x; θ); θ)‖ − max[m, σ(y, E(x; θ))]},    (15)

where σ(·, ·) is the cross-entropy loss of a fully connected output layer and m is a margin parameter. In this case, our neural sampler first draws a label y randomly according to the empirical counts in the dataset, and then passes y into a neural network together with a 100 × 1 random vector ξ to generate an image x. This allows us to generate images for particular categories by controlling the value of the input y.

Stabilization. In practice, we find it useful to modify (13) to be

    Δθ = Ê_obs[∇_θ φ(x; θ)] − (1 − γ) Ê_η[∇_θ φ(x; θ)],    (16)

where γ is a discount factor (which we take to be γ = 0.7). This is equivalent to maximizing a regularized likelihood:

    max_θ { log p(x|θ) + γ Φ(θ) },

where Φ(θ) is the log-partition function; note that exp(γ Φ(θ)) is a conjugate prior of p(x|θ).

We initialize the weights of both the generator and the discriminator from the Gaussian distribution N(0, 0.02), and train them using Adam (Kingma & Ba, 2014) with a learning rate of 0.001 for the generator and 0.0001 for the energy model (the discriminator). In order to keep the generator and discriminator approximately aligned during training, we speed up the MLE update (16) of the discriminator (by increasing its learning rate to 0.0005) when the energy of the real data batch is larger than the energy of the simulated images, and slow it down (by freezing the MLE update of θ in (16)) if the magnitude of the energy difference between the real images and the simulated images goes above a threshold of 0.5. We used the bag of architecture guidelines for stable training suggested in DCGAN (Radford et al., 2015).

Discussion. The MNIST dataset has a training set of 60,000 examples. Both DCGAN and our model produce high-quality images, both visually indistinguishable from real images; see Figure 1. CIFAR-10 is very diverse, and has only 50,000 training examples. Figure 2 shows examples of images simulated by DCGAN and by SteinGAN conditional on each category, which look equally good visually. We also provide a quantitative evaluation using the recently proposed inception score (Salimans et al., 2016), as well as the classification accuracy when training a ResNet using 50,000 simulated images as the training set, evaluated on a separate held-out testing set never seen by the GAN models. Besides DCGAN and SteinGAN, we also evaluate another simple baseline obtained by subsampling 500 real images from the training set and duplicating them 100 times. We observe that these scores capture rather different perspectives of image generation: the inception score favors images that look realistic individually and have uniformly distributed labels; as a result, the inception score of the duplicated 500 images is almost as high as that of the real training set. We find that the inception score of SteinGAN is comparable to, or slightly lower than, that of DCGAN. On the other hand, the classification accuracy measures the amount of information captured in the simulated image sets; we find that SteinGAN achieves the highest classification accuracy, suggesting that it captures more information from the training set.

Figures 3 and 4 visualize the results on CelebA (with more than 200k face images) and LSUN (with nearly 3M bedroom images), respectively. We cropped and resized the images of both datasets to 64×64.

Figure 1: MNIST images generated by DCGAN and our SteinGAN. We use the joint model in (15) to allow us to generate images for each digit. We set m = 0.2.

Figure 2: Results on CIFAR-10. "500 Duplicate" denotes 500 images randomly subsampled from the training set, each duplicated 100 times. Upper: images simulated by DCGAN and SteinGAN (based on the joint model (15)) conditional on each category (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck). Middle: inception scores for samples generated by various methods (all with 50,000 images) on inception models trained on ImageNet and CIFAR-10, respectively. Lower: testing accuracy on the real testing set when using 50,000 simulated images to train ResNets for classification. SteinGAN achieves higher testing accuracy than DCGAN. We set m = 1 and γ = 0.8.

Inception Score
                            Real Training Set   500 Duplicate   DCGAN   SteinGAN
Model trained on ImageNet   11.237              11.100          6.581   6.351
Model trained on CIFAR-10    9.848               9.807          7.368   7.428

Testing Accuracy
Real Training Set   500 Duplicate   DCGAN     SteinGAN
92.58 %             44.96 %         44.78 %   63.81 %

6 CONCLUSION

We propose a new method to train neural samplers for given distributions, together with a new SteinGAN method for generative adversarial training. Future directions involve more applications of, and theoretical understanding for, training neural samplers.

Figure 3: Results on CelebA. Upper: images generated by DCGAN and our SteinGAN. Lower: images generated by SteinGAN when performing a random walk ξ ← ξ + 0.01 × Uniform([−1, 1]) on the random input ξ; we can see that a man with glasses and black hair gradually changes into a woman with blonde hair. See Figure 5 for more examples.

Figure 4: Images generated by DCGAN and our SteinGAN on LSUN. | Byj2SWzVx | Review: Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning | 4: Ok but not good enough - rejection | The authors propose amortized SVGD, an amortized form of prior work on SVGD, which is a particle variational method that maximally decreases the KL divergence at each update. "Amortized SVGD" is done by training a neural network to learn this dynamic. They then apply this idea to train energy-based models, which admit a tractable unnormalized density.
In SVGD, the main difference from just MAP is the addition of a "repulsive force" that prevents degeneracy by encouraging probability mass to be spread to locations outside the mode. How this is able to still act as a strong enough entropy-like term in high dimensions is curious. From my understanding of their previous work, this was not a problem as the only experiments were on toy and UCI data sets.
In the experimental results here, they apply the kernel on the hidden representation of an autoencoder, which seems key, similar to Li et al. (2015) where their kernel approach for MMD would not work as well otherwise. However, unlike Li et al. (2015) the autoencoder is part of the model itself and not fixed. This breaks much of the authors' proposed motivation and criticisms of prior work, if they must autoencode onto some low-dimensional space (putting most effort then on the autoencoder, which changes per iteration) before then applying their method.
Unlike previous literature which uses inference networks, their amortized SVGD approach seems in fact slower than the non-amortized approach. This is because they must make the actual update on xi before then regressing to perform the update on eta (in previous approaches, this would be like having to perform local inferences before then updating inference network parameters, or at least partially performing the local inference). This seems quite costly during training.
I recommend the paper be rejected, and that the authors provide more comprehensive experimental results, especially around the influence of the autoencoder, the incremental updates versus full updates, and the training time of amortized vs non-amortized approaches. The current results are promising, but it is unclear why, given the many knobs that the authors are playing with.
References
Li, Y., Swersky, K., & Zemel, R. (2015). Generative Moment Matching Networks. Presented at the International Conference on Machine Learning. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
S1Y0td9ee | ICLR.cc/2017/conference | 2017 | Shift Aggregate Extract Networks | ["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"] | The Shift Aggregate Extract Network SAEN is an architecture for learning representations on social network data.
SAEN decomposes input graphs into hierarchies made of multiple strata of objects.
Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts.
We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups.
Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art. | ["Supervised Learning"] | ABSTRACT

The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art.

1 INTRODUCTION

Many different problems in various fields of science require the classification of structured data, i.e. collections of objects bound together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the collection (as vertices) and the relationships between them (as edges). A number of approaches to the graph classification problem have been studied in the graph kernel and neural network literature.

Graph kernels decompose input graphs into substructures such as shortest paths (Borgwardt & Kriegel, 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave, 2010). The similarity between two graphs is then computed by comparing the respective sets of parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation through structure (Goller & Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as natural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003). An advantage of recursive neural networks over graph kernels is that the vector representations of the input graphs are learnt rather than handcrafted.

Learning on social network data can be considerably hard due to its peculiar structure: as opposed to chemical compounds and parse trees, the structure of social network graphs is highly irregular. Indeed in social networks it is common to have nodes in the same graph whose degrees differ by orders of magnitude. This poses a significant challenge for the substructure matching approach used by some graph kernels, as the variability in connectivity generates a large number of unique patterns, leading to diagonally dominant kernel matrices.

We propose Shift Aggregate Extract Networks (SAEN), a neural network architecture for learning representations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiple strata of objects.
Objects in each stratum are connected by "part-of" relations to the objects in the stratum above.

In case we wish to classify graphs we can use an H-hierarchical decomposition in which the top stratum contains the graph G that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G, and so on, until we reach the bottom stratum, which contains the vertices v of G.

Unlike R-convolution relations in kernel methods (which decompose objects into the set of their parts), H-hierarchical decompositions are deep, as they can represent the parts of the parts of an object.

Recursive neural networks associate vector representations to the vertices of the input graphs, imposing that they have identical dimensions. Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights, learning on social network data with recursive neural networks might be nontrivial.

SAEN compensates for the limitations of recursive neural networks by adding the following degrees of flexibility:
1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph,
2. SAEN imposes weight sharing and a fixed size of the learnt vector representations on a per-stratum basis instead of globally.

Indeed SAEN allows the use of vector representations of different sizes for different strata of objects (e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices, etc.). The SAEN schema computes the vector representation of each object by applying shift, aggregate and extract operations on the vector representations of its parts.

Another contribution of this paper is the introduction of a domain compression algorithm, which we use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular, objects made of the same sets of parts are indistinguishable. In order to obtain a lossless compression of an H-hierarchical decomposition we store counts on symmetries, adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the work of Sperduti & Starita (1997), in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost.

2 SHIFT-AGGREGATE-EXTRACT NEURAL NETWORKS

We propose a neural network architecture that takes as input an undirected attributed graph G = (V, E, X) where V is the vertex set, E ⊆ V × V is the edge set, and X = {x_v ∈ R^p}_{v∈V} is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example this happens in some of the social network datasets of §4.1), we can set x_v to some vertex invariant such as node centrality or betweenness.

2.1 H-HIERARCHICAL DECOMPOSITIONS

Most graph kernels decompose graphs into parts by using an R-convolution relation (Haussler, 1999). We extend this approach by decomposing graphs into a hierarchy of π-parametrized "part-of" relations. Formally, an H-hierarchical decomposition is a pair ({S_l}_{l=0}^L, {R_{l,π}}_{l=1}^L) where:

• {S_l}_{l=0}^L are disjoint sets of objects S_l called strata, or levels of the hierarchy. The bottom stratum S_0 contains non-decomposable objects (e.g.
individual vertices), while the other strata S_l, l = 1, …, L contain composite objects o_i ∈ S_l whose parts o_j ∈ S_{l−1} belong to the preceding stratum, S_{l−1}.

• {R_{l,π}}_{l=1}^L is a set of (l, π)-parametrized R_{l,π}-convolution relations. A pair (o_i, o_j) ∈ S_l × S_{l−1} belongs to R_{l,π} iff "o_j is part of o_i with membership type π". For notational convenience, the parts of o_i are denoted as R⁻¹_{l,π}(o_i) = {o_j | (o_j, o_i) ∈ R_{l,π}}.

The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs, in which π is the radius of the neighborhoods (see Figure 1 on the left). (Footnote 1: The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r.) Another possible use of the π membership type is to distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on the right).

Figure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in §4.2). On the left we decompose a graph into rooted ego graphs of radius 0 and 1, while on the right we decompose an ego graph into the set of its vertices. The directed arrows represent "part of" relations labeled with their membership type π. The membership type π represents the radius π = 0, 1 of the ego graphs (decomposition on the left) and the role (i.e. π = ROOT, ELEM) of a vertex in the ego graph (decomposition on the right), respectively.

An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to an R-convolution relation for L = 1.

2.2 SHIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONS

We propose the Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of all the strata {S_l}_{l=0}^L in an H-hierarchical decomposition. SAEN unfolds a neural network architecture over an H-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema.

According to the SAE schema, the vector representation of each object in the H-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in the bottom stratum) or defined in terms of the vector representations of its parts (for the other objects).

More formally, the SAE schema associates a d_l-dimensional representation h_i ∈ R^{d_l} to each object o_i ∈ S_l of the H-hierarchical decomposition according to the following formula:

    h_i = f_0(x_{v_i}; Θ_0)    if o_i ∈ S_0
    h_i = f_l( Σ_{π∈Π_l} Σ_{o_j ∈ R⁻¹_{l,π}(o_i)} (z_π ⊗ h_j) ; Θ_l )    otherwise    (1)

where the Kronecker products z_π ⊗ h_j implement the shift step, the double sum implements the aggregate step, and the maps f_l(·; Θ_l), l = 0, …, L, which implement the extract step, are multilayer neural networks with parameters Θ_l.
With respect to the base case (first branch of Eq. 1) we have that each object o_i in the bottom stratum S_0 is in one-to-one correspondence with the vertices v_i ∈ V of the graph that we are decomposing. Indeed the vector representations h_i are computed by evaluating f_0(·; Θ_0) on the vertex attributes x_{v_i} ∈ X.

The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema:

• Shift: each part representation h_j ∈ R^{d_{l−1}} is remapped into a space R^{|Π_l| d_{l−1}} made of |Π_l| slots, where each slot has dimension d_{l−1}. This transformation shifts part representations h_j by using the Kronecker product ⊗ between an indicator vector z_π ∈ R^{|Π_l|} and the vector representation h_j of part o_j ∈ S_{l−1}. The indicator vector z_π ∈ R^{|Π_l|} is defined as z_i = 1 if i = π and z_i = 0 otherwise, and it is used to make sure that the vector representations h_j of object parts will fall in the same slot if and only if they have the same membership type π.

• Aggregate: the shifted representations (z_π ⊗ h_j) of the parts o_j are then aggregated with a sum.

• Extract: the aggregated representation is compressed to a d_l-dimensional space by a Θ_l-parametrized nonlinear map f_l(·; Θ_l): R^{|Π_l| d_{l−1}} → R^{d_l} implemented with a multilayer neural network.

The shift and aggregate steps that we have seen so far are identical to those used in kernel design when computing the explicit features of a kernel k(x, z) derived from a sum Σ_{π∈Π} k_π(x, z) of base kernels k_π(x, z), π ∈ Π. In principle, it would indeed be possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Π_l| for each level l of the H-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. As a result, SAEN can easily cope with H-hierarchical decompositions consisting of multiple strata.
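To make the SAE schema concrete, here is a minimal NumPy sketch that computes the second branch of Eq. 1 for a single object; the extract network f_l is passed in as an arbitrary callable, and all names are illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def sae_layer(h_parts, pi_types, n_pi, extract):
    """One SAE step (Eq. 1) for a single object, as a sketch.

    h_parts:  (n_parts, d) array with the representations h_j of the parts
    pi_types: length-n_parts list of membership types pi in {0, ..., n_pi - 1}
    extract:  the f_l network, any callable mapping R^{n_pi * d} -> R^{d_l}
    """
    d = h_parts.shape[1]
    agg = np.zeros(n_pi * d)
    for h_j, pi in zip(h_parts, pi_types):
        z = np.zeros(n_pi)
        z[pi] = 1.0                   # indicator vector z_pi
        agg += np.kron(z, h_j)        # shift: h_j lands in slot pi; sum aggregates
    return extract(agg)               # extract: compress to d_l dimensions

# Toy usage with a trivial elementwise "extract" map:
h = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
out = sae_layer(h, pi_types=[0, 1, 1], n_pi=2, extract=np.tanh)
print(out)   # slot 0 holds [1, 2]; slot 1 holds the sum [8, 10]
```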
2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION

In this section we propose a technique, called domain compression, which allows us to save memory and speed up the SAEN computation. Domain compression exploits symmetries in H-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects, the higher the compression ratio.

Two objects a, b in a stratum S_l are collapsable, a ∼ b, if they share the same representation (i.e. h_a = h_b) for all the possible values of Θ_l. A compressed stratum S_l^comp is the quotient set S_l/∼ of stratum S_l w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S_0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability. (Footnote 2: Vectors of real-valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future work.) While objects in the bottom stratum S_0 are collapsable when their attributes are identical, for all the other strata S_l, l = 1, …, L, objects are collapsable if they are made of the same sets of parts for all the membership types π.

In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in §4.2): on the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see §4.1), together with its compressed version on the right.

Figure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see §4.1) together with its compressed version.

2.3.1 DOMAIN COMPRESSION ALGORITHM

In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ R^{n×p} has m ≤ n distinct rows it can be decomposed as the product D M^comp, where M^comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix D encodes the collapsibility relation among the rows of M, so that D_ij = 1 iff the i-th row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of D^⊤ by their sums (where D^⊤ is the transpose of D).

Example 1. If we look at matrix M in Eq. 2 we notice that rows 1 and 4 share the encoding [0, 0, 0] and rows 3 and 5 share the encoding [1, 1, 0], while the encoding [1, 0, 1] appears only once, at row 2. Matrix M^comp is the compressed version of M (rows are separated by semicolons):

    M = [0 0 0; 1 0 1; 1 1 0; 0 0 0; 1 1 0]   M^comp = [0 0 0; 1 0 1; 1 1 0]
    D = [1 0 0; 0 1 0; 0 0 1; 1 0 0; 0 0 1]   C = [1/2 0 0 1/2 0; 0 1 0 0 0; 0 0 1/2 0 1/2]    (2)

Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed version of M (i.e. M = D M^comp), while the matrix multiplication between the compression matrix C and M leads to the compressed matrix M^comp (i.e. M^comp = C M).

To apply domain compression we rewrite Eq. 1 in matrix form as follows:

    H_l = f_0(X; Θ_0)    if l = 0
    H_l = f_l( [R_{l,1}, …, R_{l,π}, …, R_{l,|Π_l|}] · diag(H_{l−1}, …, H_{l−1}) ; Θ_l )    otherwise    (3)

where [R_{l,1}, …, R_{l,|Π_l|}] is of size |S_l| × |Π_l||S_{l−1}|, diag(H_{l−1}, …, H_{l−1}) is the block-diagonal matrix of size |Π_l||S_{l−1}| × |Π_l|d_{l−1} with |Π_l| copies of H_{l−1} on its diagonal, H_0 is of size |S_0| × d_0, H_l is of size |S_l| × d_l, and where:

• H_l ∈ R^{|S_l|×d_l} is the matrix that represents the d_l-dimensional encodings of the objects in S_l. The rows of H_l are the vector representations h_i in Eq. 1, while the rows of H_{l−1} are the vector representations h_j in Eq. 1;
• X ∈ R^{|S_0|×p} is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. the rows of X are the x_{v_i} of Eq. 1);
• f_l(·; Θ_l) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;
• R_{l,π} ∈ R^{|S_l|×|S_{l−1}|}, ∀π ∈ Π_l, are the matrix representations of the R_{l,π}-convolution relations of Eq. 1, whose elements are (R_{l,π})_ij = 1 if (o_j, o_i) ∈ R_{l,π} and 0 otherwise.

Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3), which takes as input the attribute matrix X and the part-of matrices R_{l,π} and returns their compressed versions X^comp and R^comp_{l,π} respectively. The algorithm starts by invoking (line 1) the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C_0 and D_0 respectively. The compression matrix C_0 is used to compress X (line 2); then we iterate over the levels l = 0, …, L of the H-hierarchical decomposition (line 4) and compress the R_{l,π} matrices. The compression of the R_{l,π} matrices is done by right-multiplying them by the decompression matrix D_{l−1} of the previous level l − 1 (line 5).
In this way we collapse the parts of relation R_{l,π} (i.e. the columns of R_{l,π}), as these were identified in stratum S_{l−1} as identical objects (i.e. those objects corresponding to the rows of X or R_{l−1,π} collapsed during the previous step). The result is a list R_colcomp = [R_{l,π} D_{l−1}, ∀π = 1, …, |Π_l|] of column-compressed R_{l,π} matrices. We proceed by collapsing equivalent objects in stratum S_l, i.e. those made of identical sets of parts: we find symmetries in R_colcomp by invoking COMPUTE-CD (line 6) and obtain a new pair C_l, D_l of compression and decompression matrices respectively. Finally the compression matrix C_l is applied to the column-compressed matrices in R_colcomp in order to obtain the Π_l compressed matrices of stratum S_l (line 8).

DOMAIN-COMPRESSION(X, R)
 1  C_0, D_0 = COMPUTE-CD(X)
 2  X^comp = C_0 X                                         // Compress the X matrix.
 3  R^comp = {}                                            // Initialize an empty container for compressed matrices.
 4  for l = 1 to L
 5      R_colcomp = [R_{l,π} D_{l−1}, ∀π = 1, …, |Π_l|]    // column compression
 6      C_l, D_l = COMPUTE-CD(R_colcomp)
 7      for π = 1 to |Π_l|
 8          R^comp_{l,π} = C_l R_colcomp_π                 // row compression
 9  return X^comp, R^comp

Figure 3: DOMAIN-COMPRESSION.

Algorithm 3 allows us to compute the domain-compressed version of Eq. 3, which can be obtained by replacing: X with X^comp = C_0 X, R_{l,π} with R^comp_{l,π} = C_l R_{l,π} D_{l−1} and H_l with H^comp_l. If we wish to recover the original encodings H_l we just need to apply the decompression matrix D_l to the compressed encodings H^comp_l; indeed H_l = D_l H^comp_l.

As we can see by substituting S_l with S^comp_l, the more the symmetries (i.e. when |S^comp_l| ≪ |S_l|), the greater the domain compression will be.
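A minimal NumPy sketch of COMPUTE-CD is given below; it reconstructs C and D from the distinct rows of a matrix and reproduces Example 1. Note that np.unique orders the equivalence classes by sorted row value, which is immaterial here since M = D · M^comp holds under any class ordering.

```python
import numpy as np

def compute_cd(m):
    """Compression/decompression matrices from the distinct rows of m,
    as in Eq. 2: m = d @ (c @ m), with c a pseudo-inverse of d."""
    uniq, inv = np.unique(m, axis=0, return_inverse=True)
    n, r = m.shape[0], uniq.shape[0]
    d = np.zeros((n, r))
    d[np.arange(n), inv] = 1.0            # D_ij = 1 iff row i is in class j
    c = d.T / d.sum(axis=0)[:, None]      # rows of D^T divided by their sums
    return c, d

# Reproduce Example 1:
m = np.array([[0, 0, 0], [1, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 0]], float)
c, d = compute_cd(m)
m_comp = c @ m                            # the three distinct rows of m
assert np.allclose(d @ m_comp, m)         # M = D * M_comp, as in the text
```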
3 RELATED WORKS

When learning with graph inputs, two fundamental design aspects must be taken into account: the choice of the pattern generator and the choice of the matching operator. The former decomposes the graph input into substructures while the latter allows comparing the substructures.

Among the patterns considered in the graph kernel literature we have paths, shortest paths, walks (Kashima et al., 2003), subtrees (Ramon & Gärtner, 2003; Shervashidze et al., 2011) and neighborhood subgraphs (Costa & De Grave, 2010). The similarity between two graphs G and G′ is computed by counting the number of matches between their common substructures (i.e. a kernel on the sets of the substructures). The match between two substructures can be defined using graph isomorphism or some other, weaker graph invariant.

When the number of substructures to enumerate is infinite or exponential in the size of the graph (as is perhaps the case for random walks and shortest paths respectively), the kernel between the two graphs is computed without generating an explicit feature map. Learning with an implicit feature map is not scalable, as it has a space complexity quadratic in the number of training examples (because we need to store the Gram matrix in memory).

Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel (WLST) (Shervashidze et al., 2011) and the Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) (Costa & De Grave, 2010) deliberately choose a pattern generator that scales polynomially and produces an explicit feature map. However, the vector representations produced by WLST and NSPDK are handcrafted and not learned.

A recent work by Yanardag & Vishwanathan (2015) proposes to use pattern generators such as graphlets, shortest paths and WLST subtrees to transform input graphs into documents. The generated substructures are then treated as words and embedded in the Euclidean space with a CBOW or a Skip-gram model. The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self-similarity.

Another recent work, by Niepert et al. (2016), upgrades convolutional neural networks (CNNs) for images to graphs. While the receptive field of a CNN is usually a square window, Niepert et al. (2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, Niepert et al. (2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields.

4 EXPERIMENTAL EVALUATION

We answer the following experimental questions:
Q1 How does SAEN compare to the state of the art?
Q2 Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime?

4.1 DATASETS

In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015).

• COLLAB is a dataset where each graph represents the ego-network of a researcher, and the task is to determine the field of study of the researcher among High Energy Physics, Condensed Matter Physics and Astro Physics.
• IMDB-BINARY and IMDB-MULTI are datasets derived from IMDB where in each graph the vertices represent actors/actresses and the edges connect people who have performed in the same movie. Collaboration graphs are generated from movies belonging to the genres Action and Romance for IMDB-BINARY and Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress in those genres an ego-graph is extracted. The task is to identify the genre from which the ego-graph was generated.
• REDDIT-BINARY, REDDIT-MULTI5K and REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread on Reddit. In those datasets each vertex represents a distinct user and two users are connected by an edge if one of them has responded to a post of the other in that discussion. The task in REDDIT-BINARY is to discriminate between threads originating from a discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answers-based subreddit (IAmA, AskReddit). The task in REDDIT-MULTI5K and REDDIT-MULTI12K is a multi-class classification problem where each graph is labeled with the subreddit where it originated (worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTI5K and AskReddit, AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned, worldnews, TrollXChromosomes for REDDIT-MULTI12K).

4.2 EXPERIMENTS

In our experiments we chose an H-hierarchical decomposition called the Ego Graph Neural Network (EGNN), which mimics the graph kernel NSPDK with the distance parameter set to 0.

Before applying EGNN we turn unattributed graphs (V, E) into attributed graphs (V, E, X) by annotating their vertices v ∈ V with attributes x_v ∈ X. We label vertices v of G with their degree and encode this information into the attributes x_v by employing the 1-hot encoding.

EGNN decomposes attributed graphs G = (V, E, X) into a 3-level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN):

• stratum S_0 contains objects o_v that are in one-to-one correspondence with the vertices v ∈ V.
• stratum S_1 contains v_root-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (v_root, V_e, E_e) of radius r = 0, 1, …, R and has part-of alphabet Π_1 = {ROOT, ELEM}. Objects o_v ∈ S_0 are
Objectsov∈S0are“ELEM -part-of” ego graph eifv∈Ve\{vroot}, while the are “ ROOT -part-of” ego graph eifv=vroot.•stratumS2contains the graph Gthat we want to classify and has part-of alphabet Π2={0,1}which correspond to the radius of the ego graphs e∈S1of whichGis made of.E1We experimented with SAEN applying the EGNNH-decomposition on all the datasets. For eachdataset, we manually chose the parameters of SAEN , i.e. the number of hidden layers for eachstratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.)activation function on all the units. We report the chosen parameters in Table A1 of the appendix.In all our experiments we trained the neural networks by using the Adam algorithm to minimize across entropy loss.The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We man-ually chose the number of layers and units for each level of the part-of decomposition; the numberof epochs was chosen manually for each dataset and we kept the same value for all the 100runs ofthe10-times 10-fold cross-validation.7Under review as a conference paper at ICLR 2017Figure 4: Comparison of accuracy results.DATASET DGK PSCN SAEN(Yanardag et al. 2015) (Niepert et al., 2016) (our method)COLLAB 73.09±0.25 72.60±2.16 75.63±0.31IMDB -BINARY 66.96±0.56 71.00±2.29 71.26±0.74IMDB -MULTI 44.55±0.52 45.23±2.84 49.11±0.64REDDIT -BINARY 78.04±0.39 86.30±1.58 86.08±0.53REDDIT -MULTI 5K 41.27±0.18 49.10±0.70 52.24±0.38REDDIT -MULTI 12K 32.22±0.10 41.32±0.42 46.72±0.23Figure 5: Comparison of accuracy on bio-informatics datasets.DATASET PSCN (k= 10E) SAEN(Niepert et al., 2016) (our method)MUTAG 92.63±4.21 84.99±1.82PTC 60.00±4.82 57.04±1.30NCI1 78.59±1.89 77.80±0.42PROTEINS 75.89±2.76 75.31±0.70D&D 77.12±2.41 77.69±0.96The mean accuracies and their standard deviations obtained by our method are reported in Ta-ble 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015)and by Niepert et al. (2016).Although our method was conceived for social network data, it can also handle other types of graphs.For the sake of completeness in Table 5 we report the mean accuracies obtained with SAEN on themolecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).Table 1: Comparison of sizes and runtimes of the datasets before and after the compression.DATASETSIZE (MB) RUNTIMEORIGINAL COMP . RATIO ORIGINAL COMP . SPEEDUPCOLLAB 1190 448 0.38 43’ 18” 8’ 20” 5.2IMDB -BINARY 68 34 0.50 3’ 9” 0’ 30” 6.3IMDB -MULTI 74 40 0.54 7’ 41” 1’ 54” 4.0REDDIT -BINARY 326 56 0.17 TO 2’ 35”≥100.0REDDIT -MULTI 5K 952 162 0.17 OOM 9’ 51” –REDDIT -MULTI 12K 1788 347 0.19 OOM 29’ 55” –E2In Table 1 we show the file sizes of the preprocessed datasets before and after the compressiontogether with the data compression ratio.3We also estimate the benefit of the relational compressionfrom a computational time point of view and report the measurement of the runtime for 1run withand without compression together with the speedup factor.For the purpose of this experiment, all tests were run on a computer with two 8-cores Intel XeonE5-2665 processors and 94 GB RAM . Uncompressed datasets which exhausted our server’s memoryduring the test are marked as “ OOM ” (out of memory) in the table, while those who exceeded thetime limit of 100times the time needed for the uncompressed version are marked as “ TO” (timeout).4.3 D ISCUSSIONA1As shown in Table 4, EGNN performs consistently better than the other two methods on all thesocial network datasets. 
This confirms that the chosen H-hierarchical decomposition is effective onthis kind of problems. Also the results for molecule and protein datasets (see Table 5) are in linewith the current state of the art.A2The compression algorithm has proven to be effective in improving the computational cost of ourmethod. Most of the datasets improved their runtimes by a factor of at least 4while maintaining the3The size of the uncompressed files are shown for the sole purpose of computing the data compression ratio.Indeed the last version of our code compresses the files on the fly.8Under review as a conference paper at ICLR 2017same expressive power. Moreover, experiments on REDDIT -MULTI 5Kand REDDIT -MULTI 12Khaveonly been possible thanks to the size reduction operated by the algorithm as the script exhausted thememory while executing the training step on the uncompressed files.5 C ONCLUSIONSWe proposed SAEN , a novel architecture for learning vector representations of H-decompositionsof input graphs. We applied SAEN for graph classification on 6real world social network datasets,outperforming the current state of the art on 4of them and obtaining state-of-the-art classificationaccuracy on the others. Another important contribution of this paper is the domain compressionalgorithm which greatly reduces memory usage and allowed us to speedup the training time of afactor of at least 4. | B17yL74He | Poor performance on bioinformatics dataset? | 5: Marginally below acceptance threshold | the paper proposed a method mainly for graph classification. The proposal is to decompose graphs objects into hierarchies of small graphs followed by generating vector embeddings and aggregation using deep networks.
The approach is reasonable and intuitive; however, the experiments do not show the superiority of their approach.
The proposed method outperforms Yanardag et al. 2015 and Niepert et al., 2016 on social network graphs but is quite inferior to Niepert et al., 2016 on bio-informatics datasets. The authors did not report the accuracy of Yanardag et al. 2015, which on similar bio-datasets, for example NCI1, is 80%, significantly better than that achieved by the proposed method. The authors' claim that their method is tailored more for social network graphs is not supported by good arguments: for what models of graphs is this method more suitable? | 3: The reviewer is fairly confident that the evaluation is correct |
S1Y0td9ee | ICLR.cc/2017/conference | 2017 | Shift Aggregate Extract Networks | ["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"] | The Shift Aggregate Extract Network SAEN is an architecture for learning representations on social network data.
SAEN decomposes input graphs into hierarchies made of multiple strata of objects.
Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts.
We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups.
Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art. | ["Supervised Learning"] | ABSTRACTThe Shift Aggregate Extract Network ( SAEN ) is an architecture for learning repre-sentations on social network data. SAEN decomposes input graphs into hierarchiesmade of multiple strata of objects. Vector representations of each object are learntby applying shift,aggregate andextract operations on the vector representationsof its parts. We propose an algorithm for domain compression which takes ad-vantage of symmetries in hierarchical decompositions to reduce the memory us-age and obtain significant speedups. Our method is empirically evaluated on realworld social network datasets, outperforming the current state of the art.1 I NTRODUCTIONMany different problems in various fields of science require the classification of structured data ,i.e. collections of objects bond together by some kind of relation. A natural way to represent suchstructures is through graphs, which are able to encode both the individual objects composing thecollection (as vertices) and the relationships between them (as edges). A number of approaches tothe graph classification problem has been studied in graph kernel and neural network literature.Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel,2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave,2010). The similarity between two graphs is then computed by comparing the respective sets ofparts. Methods based on recursive neural networks unfold a neural network over input graphs andlearn vector representations of their nodes employing backpropagation though structure (Goller &Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as nat-ural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003).An advantage of recursive neural networks over graph kernels, is that the vector representations ofthe input graphs are learnt rather than handcrafted.Learning on social network data can be considerably hard due to their peculiar structure: as opposedto chemical compounds and parse trees, the structure of social network graphs is highly irregular.Indeed in social networks it is common to have nodes in the same graph whose degree differs byorders of magnitude. This poses a significant challenge for the substructure matching approach usedby some graph kernels as the variability in connectivity generates a large number of unique patternsleading to diagonally dominant kernel matrices.We propose Shift Aggregate Extract Networks ( SAEN ), a neural network architecture for learningrepresentations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiplestrata of objects. 
Objects in each stratum are connected by “part-of” relations to the objects to thestratum above.In case we wish to classify graphs we can use an H-hierarchical decomposition in which the topstratum contains the graph Gthat we want to classify, while the intermediate strata contain subgraphsofG, subgraphs of subgraphs of Gand so on, until we reach the bottom stratum which contains theverticesvofG.1Under review as a conference paper at ICLR 2017UnlikeR-convolution relations in kernel methods (which decompose objects into the set of theirparts),H-hierarchical decompositions are deep as they can represent the parts of the parts of anobject.Recursive neural networks associate to the vertices of the input graphs vector representations impos-ing that they have identical dimensions. Moreover, the propagation follows the edge connectivityand weights are shared over the whole input graph. If we consider that vector representations ofnodes (whose number of parents can differ by orders of magnitude) must share the same weights,learning on social network data with recursive neural networks might be nontrivial.SAEN compensates the limitations of recursive neural networks by adding the following degrees offlexibility:1. the SAEN computation schema unfolds a neural network over H-decompositions instead of theinput graph,2.SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratumbasis instead of globally.Indeed SAEN allows to use vector representations of different sizes for different strata of objects(e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computesthe vector representation of each object by applying shift,aggregate andextract operations on thevector representations of its parts.Another contribution of this paper is the introduction of a domain compression algorithm, that weuse in our experiments to reduce memory usage and runtime. Domain compression collapses objectsin the same stratum of an H-hierarchical decomposition into a compressed one whenever theseobjects are indistinguishable for the SAEN computation schema. In particular objects made of thesame sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchicaldecomposition we store counts on symmetries adopting some mathematical results from lifted linearprogramming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of thework of Sperduti & Starita (1997) in which common substructures of recursive neural networks arecollapsed in order to reduce the computational cost.2 S HIFT -AGGREGATE -EXTRACT NEURAL NETWORKSWe propose a neural network architecture that takes as input an undirected attributed graph G=(V,E,X )whereVis the vertex set, E⊆V×Vis the edge set, and X={xv∈Rp}v∈Vis aset ofp-dimensional vertex attributes. When vertices do not have associated attributes (for examplethis happens in some of the social network datasets of §4.1), we can set xvto some vertex invariantsuch as node centrality or betweenness.2.1H-HIERARCHICAL DECOMPOSITIONSMost graph kernels decompose graphs into parts by using an R-convolution relation (Haussler,1999). We extend this approach by decomposing graphs into a hierarchy ofπ-parametrized “partof” relations. Formally, an H-hierarchical decomposition is a pair ({Sl}Ll=0,{Rl,π}Ll=1)where:•{Sl}Ll=0are disjoint sets of objects Slcalled strata, or levels of the hierarchy. The bottom stratumS0contains non-decomposable objects (e.g. 
individual vertices), while the other strata S_l, l = 1,...,L contain composite objects o_i ∈ S_l whose parts o_j ∈ S_{l−1} belong to the preceding stratum, S_{l−1}.
• {R_{l,π}}_{l=1}^{L} is a set of (l,π)-parametrized R_{l,π}-convolution relations. A pair (o_i, o_j) ∈ S_l × S_{l−1} belongs to R_{l,π} iff "o_j is part of o_i with membership type π". For notational convenience, the parts of o_i are denoted as R^{-1}_{l,π}(o_i) = {o_j | (o_j, o_i) ∈ R_{l,π}}.
The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs¹ in which π is the radius of the neighborhoods (see Figure 1 on the left). Another possible use of the π membership type is to distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on the right).
¹ The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r.
Figure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in §4.2). On the left we decompose a graph into rooted ego graphs of radius 0 and 1, while on the right we decompose an ego graph into the set of its vertices. The directed arrows represent "part of" relations labeled with their membership type π. The membership type π represents the radius π = 0, 1 of the ego graphs (decomposition on the left) and the role (i.e. π = ROOT, ELEM) of a vertex in the ego graph (decomposition on the right), respectively.
An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to an R-convolution relation for L = 1.
2.2 SHIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONS
We propose the Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of all the strata {S_l}_{l=0}^{L} in an H-hierarchical decomposition. SAEN unfolds a neural network architecture over an H-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema.
According to the SAE schema, the vector representation of each object in the H-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in the bottom stratum) or defined in terms of the vector representations of its parts (for the other objects). More formally, the SAE schema associates a d_l-dimensional representation h_i ∈ R^{d_l} to each object o_i ∈ S_l of the H-hierarchical decomposition according to the following formula:

h_i = \begin{cases} f_0(x_{v_i};\ \Theta_0) & \text{if } o_i \in S_0 \\[4pt] f_l\Big( \sum_{\pi \in \Pi_l} \; \sum_{o_j \in R^{-1}_{l,\pi}(o_i)} (z_\pi \otimes h_j);\ \Theta_l \Big) & \text{otherwise} \end{cases} \qquad (1)

where f_l(·; Θ_l), l = 0,...,L are multilayer neural networks with parameters Θ_l; in the second branch, the Kronecker product z_π ⊗ h_j is the Shift step, the double summation is the Aggregate step, and the outer application of f_l is the Extract step.
With respect to the base case (first branch of Eq. 1) we have that each object o_i in the bottom stratum S_0 is in one-to-one correspondence with the vertices v_i ∈ V of the graph that we are decomposing. Indeed, the vector representations h_i are computed by evaluating f_0(·; Θ_0) on the vertex attributes x_{v_i} ∈ X.
The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema:
• Shift: each part representation h_j ∈ R^{d_{l−1}} is remapped into a space R^{|Π_l| d_{l−1}} made of |Π_l| slots, where each slot has dimension d_{l−1}. This transformation shifts part representations h_j by using the Kronecker product ⊗ between an indicator vector z_π ∈ R^{|Π_l|} and the vector representation h_j of part o_j ∈ S_{l−1}. The indicator vector z_π ∈ R^{|Π_l|} is defined as (z_π)_i = 1 if i = π and 0 otherwise, and it is used to make sure that the vector representations h_j of object parts fall in the same slot if and only if they have the same membership type π.
• Aggregate: the shifted representations (z_π ⊗ h_j) of the parts o_j are then aggregated with a sum.
• Extract: the aggregated representation is compressed to a d_l-dimensional space by a Θ_l-parametrized nonlinear map f_l(·, Θ_l) : R^{|Π_l| d_{l−1}} → R^{d_l} implemented with a multilayer neural network.
The shift and aggregate steps that we have seen so far are identical to those used in kernel design when computing the explicit features of a kernel k(x, z) derived from a sum Σ_{π∈Π} k_π(x, z) of base kernels k_π(x, z), π ∈ Π. In principle, it would indeed be possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Π_l| for each level l of the H-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. As a result, SAEN can easily cope with H-hierarchical decompositions consisting of multiple strata.
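As an illustration, here is a minimal NumPy sketch (ours, not the authors' implementation) of the recursion step of Eq. 1 for a single object o_i; mlp_extract is a hypothetical stand-in for the trained network f_l(·; Θ_l).

# Minimal sketch of one Shift-Aggregate-Extract step (Eq. 1), assuming
# `parts` is a list of (membership_type_index, h_j) pairs with h_j of
# size d_prev, and `mlp_extract` is a stand-in for f_l(.; Theta_l).
import numpy as np

def sae_step(parts, num_types, d_prev, mlp_extract):
    aggregated = np.zeros(num_types * d_prev)
    for pi, h_j in parts:
        z_pi = np.zeros(num_types)
        z_pi[pi] = 1.0                      # indicator vector z_pi
        aggregated += np.kron(z_pi, h_j)    # Shift into slot pi, then Aggregate
    return mlp_extract(aggregated)          # Extract: R^{|Pi_l| d_{l-1}} -> R^{d_l}

Note that shifting with z_π ⊗ h_j and summing is equivalent to summing the parts within each membership type and concatenating the |Π_l| per-type sums.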
2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION
In this section we propose a technique, called domain compression, which allows us to save memory and speed up the SAEN computation. Domain compression exploits symmetries in H-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects, the higher the compression ratio.
Two objects a, b in a stratum S_l are collapsible, a ∼ b, if they share the same representation (i.e. h_a = h_b) for all the possible values of Θ_l. A compressed stratum S^comp_l is the quotient set S_l/∼ of stratum S_l w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S_0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability.² While objects in the bottom stratum S_0 are collapsible when their attributes are identical, for all the other strata S_l, l = 1,...,L, objects are collapsible if they are made of the same sets of parts for all the membership types π.
² Vectors of real-valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future work.
In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in §4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see §4.1), together with its compressed version on the right.
Figure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see §4.1) together with its compressed version.
2.3.1 DOMAIN COMPRESSION ALGORITHM
In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ R^{n×p} has m ≤ n distinct rows, it can be decomposed as the product D M^comp, where M^comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix, D, encodes the collapsibility relation among the rows of M, so that D_{ij} = 1 iff the i-th row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of Dᵀ by their sums (where Dᵀ is the transpose of D).
Example 1. If we look at matrix M in Eq. 2 we notice that rows 1 and 4 share the encoding [0, 0, 0], rows 3 and 5 share the encoding [1, 1, 0], while the encoding [1, 0, 1] appears only once, at row 2. Matrix M^comp is the compressed version of M.

M = \begin{bmatrix} 0&0&0 \\ 1&0&1 \\ 1&1&0 \\ 0&0&0 \\ 1&1&0 \end{bmatrix} \quad M^{comp} = \begin{bmatrix} 0&0&0 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} \quad D = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \quad C = \begin{bmatrix} 1/2&0&0&1/2&0 \\ 0&1&0&0&0 \\ 0&0&1/2&0&1/2 \end{bmatrix} \qquad (2)

Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed version M^comp (i.e. M = D M^comp), while the matrix multiplication between the compression matrix C and M leads to the compressed matrix M^comp (i.e. M^comp = C M).
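The matrices C and D of Example 1 can be computed mechanically from the unique rows of M. Below is a small NumPy sketch (ours) of this COMPUTE-CD step, checked against Example 1; note that np.unique returns the distinct rows in sorted order, which here happens to coincide with the row order of M^comp.

# Sketch of COMPUTE-CD: find the unique rows of M, build the decompression
# matrix D (one-hot class membership) and its pseudo-inverse C.
import numpy as np

def compute_cd(M):
    M = np.asarray(M, dtype=float)
    M_comp, inverse = np.unique(M, axis=0, return_inverse=True)
    D = np.zeros((M.shape[0], M_comp.shape[0]))
    D[np.arange(M.shape[0]), inverse] = 1.0
    C = D.T / D.sum(axis=0, keepdims=True).T   # rows of D^T divided by their sums
    return C, D, M_comp

M = [[0, 0, 0], [1, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 0]]
C, D, M_comp = compute_cd(M)
assert np.allclose(D @ M_comp, M)    # decompression recovers M
assert np.allclose(C @ np.asarray(M, dtype=float), M_comp)   # compression yields M_comp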
To apply domain compression we rewrite Eq. 1 in matrix form as follows:

H_l = \begin{cases} f_0(X;\ \Theta_0) & \text{if } l = 0 \\[4pt] f_l\!\left(\left[R_{l,1},\ldots,R_{l,\pi},\ldots,R_{l,|\Pi_l|}\right] \begin{bmatrix} H_{l-1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & H_{l-1} \end{bmatrix};\ \Theta_l\right) & \text{otherwise} \end{cases} \qquad (3)

where the block matrix [R_{l,1}, ..., R_{l,|Π_l|}] has size |S_l| × |Π_l||S_{l−1}|, the block-diagonal matrix has size |Π_l||S_{l−1}| × |Π_l| d_{l−1}, and the resulting H_l has size |S_l| × d_l (|S_0| × d_0 in the base case), and where:
• H_l ∈ R^{|S_l|×d_l} is the matrix that represents the d_l-dimensional encodings of the objects in S_l. The rows of H_l are the vector representations h_i in Eq. 1, while the rows of H_{l−1} are the vector representations h_j in Eq. 1;
• X ∈ R^{|S_0|×p} is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. the rows of X are the x_{v_i} of Eq. 1);
• f_l(·; Θ_l) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;
• R_{l,π} ∈ R^{|S_l|×|S_{l−1}|} ∀π ∈ Π_l are the matrix representations of the R_{l,π}-convolution relations of Eq. 1, whose elements are (R_{l,π})_{ij} = 1 if (o_j, o_i) ∈ R_{l,π} and 0 otherwise.
Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Figure 3), which takes as input the attribute matrix X and the part-of matrices R_{l,π} and returns their compressed versions X^comp and R^comp_{l,π}, respectively. The algorithm starts by invoking (line 1) the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C_0 and D_0, respectively. The compression matrix C_0 is used to compress X (line 2); then we start iterating over the levels l = 1,...,L of the H-hierarchical decomposition (line 4) and compress the R_{l,π} matrices. The compression of the R_{l,π} matrices is done by right-multiplying them by the decompression matrix D_{l−1} of the previous level l−1 (line 5). In this way we collapse the parts of relation R_{l,π} (i.e. the columns of R_{l,π}), as these were identified in stratum S_{l−1} as identical objects (i.e. those objects corresponding to the rows of X or R_{l−1,π} collapsed during the previous step). The result is a list R^col_comp = [R_{l,π} D_{l−1}, ∀π = 1,...,|Π_l|] of column-compressed R_{l,π} matrices. We proceed by collapsing equivalent objects in stratum S_l, i.e. those made of identical sets of parts: we find symmetries in R^col_comp by invoking COMPUTE-CD (line 6) and obtain a new pair C_l, D_l of compression and decompression matrices, respectively. Finally, the compression matrix C_l is applied to the column-compressed matrices in R^col_comp in order to obtain the |Π_l| compressed matrices of stratum S_l (line 8).

DOMAIN-COMPRESSION(X, R)
1   C_0, D_0 = COMPUTE-CD(X)
2   X^comp = C_0 X                                         // Compress the X matrix.
3   R^comp = {}                                            // Initialize an empty container for compressed matrices.
4   for l = 1 to L
5       R^col_comp = [R_{l,π} D_{l−1}, ∀π = 1,...,|Π_l|]   // column compression
6       C_l, D_l = COMPUTE-CD(R^col_comp)
7       for π = 1 to |Π_l|
8           R^comp_{l,π} = C_l R^col_comp_π                // row compression
9   return X^comp, R^comp
Figure 3: The DOMAIN-COMPRESSION procedure.

The DOMAIN-COMPRESSION procedure allows us to compute the domain-compressed version of Eq. 3, which can be obtained by replacing X with X^comp = C_0 X, R_{l,π} with R^comp_{l,π} = C_l R_{l,π} D_{l−1}, and H_l with H^comp_l. To recover the original encodings H_l we just need to apply the decompression matrix D_l to the compressed encodings H^comp_l; indeed, H_l = D_l H^comp_l. As we can see by substituting S_l with S^comp_l, the more symmetries there are (i.e. when |S^comp_l| ≪ |S_l|), the greater the domain compression will be.
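For illustration, here is a compact NumPy sketch (ours, not the authors' implementation) of the DOMAIN-COMPRESSION loop of Figure 3, reusing the compute_cd sketch from §2.3.1 and assuming each R_{l,π} is stored as a dense array.

# Sketch of DOMAIN-COMPRESSION (Figure 3): level-wise compression of the
# attribute matrix X and of the part-of matrices. R is assumed to be a
# list (over levels l = 1..L) of lists (over membership types pi) of
# dense 0/1 matrices of shape |S_l| x |S_{l-1}|.
import numpy as np

def domain_compression(X, R, compute_cd):
    C, D, X_comp = compute_cd(X)                 # lines 1-2: compress X
    R_comp = []                                  # line 3
    for R_l in R:                                # line 4: levels l = 1..L
        col_comp = [R_pi @ D for R_pi in R_l]    # line 5: column compression
        stacked = np.hstack(col_comp)            # one row per object of stratum S_l
        C, D, _ = compute_cd(stacked)            # line 6: symmetries among objects
        R_comp.append([C @ R_pi for R_pi in col_comp])   # lines 7-8: row compression
    return X_comp, R_comp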
The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self-similarity.
Another recent work by Niepert et al. (2016) upgrades convolutional neural networks (CNNs) for images to graphs. While the receptive field of a CNN is usually a square window, Niepert et al. (2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, Niepert et al. (2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields.
4 EXPERIMENTAL EVALUATION
We answer the following experimental questions:
Q1 How does SAEN compare to the state of the art?
Q2 Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime?
4.1 DATASETS
In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015).
• COLLAB is a dataset where each graph represents the ego-network of a researcher, and the task is to determine the field of study of the researcher among High Energy Physics, Condensed Matter Physics and Astro Physics.
• IMDB-BINARY, IMDB-MULTI are datasets derived from IMDB where in each graph the vertices represent actors/actresses and the edges connect people who have performed in the same movie. Collaboration graphs are generated from movies belonging to the genres Action and Romance for IMDB-BINARY and Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress in those genres an ego-graph is extracted. The task is to identify the genre from which the ego-graph has been generated.
• REDDIT-BINARY, REDDIT-MULTI5K, REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread from Reddit. In those datasets each vertex represents a distinct user and two users are connected by an edge if one of them has responded to a post of the other in that discussion. The task in REDDIT-BINARY is to discriminate between threads originating from a discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answers-based subreddit (IAmA, AskReddit). The task in REDDIT-MULTI5K and REDDIT-MULTI12K is a multiclass classification problem where each graph is labeled with the subreddit where it has originated (worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTI5K and AskReddit, AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned, worldnews, TrollXChromosomes for REDDIT-MULTI12K).
4.2 EXPERIMENTS
In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network (EGNN), which mimics the graph kernel NSPDK with the distance parameter set to 0.
Before applying EGNN we turn unattributed graphs (V, E) into attributed graphs (V, E, X) by annotating their vertices v ∈ V with attributes x_v ∈ X. We label vertices v of G with their degree and encode this information into the attributes x_v by employing the 1-hot encoding.
EGNN decomposes attributed graphs G = (V, E, X) into a 3-level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN; a code sketch of this construction follows the list below):
• stratum S_0 contains objects o_v that are in one-to-one correspondence with the vertices v ∈ V.
• stratum S_1 contains v_root-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (v_root, V_e, E_e) of radius r = 0, 1,...,R and has part-of alphabet Π_1 = {ROOT, ELEM}. Objects o_v ∈ S_0 are "ELEM-part-of" ego graph e if v ∈ V_e \ {v_root}, while they are "ROOT-part-of" ego graph e if v = v_root.
• stratum S_2 contains the graph G that we want to classify and has part-of alphabet Π_2 = {0, 1}, which corresponds to the radius of the ego graphs e ∈ S_1 of which G is made.
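As announced above the list, here is a small NetworkX sketch (ours, not the authors' released code) of the EGNN construction: 1-hot degree attributes for stratum S_0 and rooted ego graphs of radius 0,...,R with ROOT/ELEM membership types for stratum S_1; the degree cap max_degree is our own simplifying assumption.

# Sketch of the EGNN decomposition: 1-hot degree attributes (stratum S0)
# and rooted ego graphs of radius 0..R (stratum S1) with ROOT/ELEM roles.
import networkx as nx
import numpy as np

def egnn_decomposition(G, R=1, max_degree=100):
    X = {}                                   # stratum S0: 1-hot degree encoding
    for v in G.nodes():
        x = np.zeros(max_degree + 1)
        x[min(G.degree(v), max_degree)] = 1.0
        X[v] = x
    ego_graphs = []                          # stratum S1: (radius, root, parts)
    for r in range(R + 1):
        for v in G.nodes():
            e = nx.ego_graph(G, v, radius=r)
            parts = [("ROOT" if u == v else "ELEM", u) for u in e.nodes()]
            ego_graphs.append((r, v, parts))
    return X, ego_graphs                     # stratum S2 is G itself, with radii as types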
E1 We experimented with SAEN, applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross-entropy loss.
The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation.

Table 4: Comparison of accuracy results.
DATASET            DGK (Yanardag et al., 2015)   PSCN (Niepert et al., 2016)   SAEN (our method)
COLLAB             73.09±0.25                    72.60±2.16                    75.63±0.31
IMDB-BINARY        66.96±0.56                    71.00±2.29                    71.26±0.74
IMDB-MULTI         44.55±0.52                    45.23±2.84                    49.11±0.64
REDDIT-BINARY      78.04±0.39                    86.30±1.58                    86.08±0.53
REDDIT-MULTI5K     41.27±0.18                    49.10±0.70                    52.24±0.38
REDDIT-MULTI12K    32.22±0.10                    41.32±0.42                    46.72±0.23

Table 5: Comparison of accuracy on bioinformatics datasets.
DATASET    PSCN (k = 10E) (Niepert et al., 2016)   SAEN (our method)
MUTAG      92.63±4.21                              84.99±1.82
PTC        60.00±4.82                              57.04±1.30
NCI1       78.59±1.89                              77.80±0.42
PROTEINS   75.89±2.76                              75.31±0.70
D&D        77.12±2.41                              77.69±0.96

The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016).
Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness, in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).

Table 1: Comparison of sizes and runtimes of the datasets before and after the compression.
DATASET            SIZE (MB): ORIGINAL / COMP. / RATIO     RUNTIME: ORIGINAL / COMP. / SPEEDUP
COLLAB             1190 / 448 / 0.38                       43'18" / 8'20" / 5.2
IMDB-BINARY        68 / 34 / 0.50                          3'9" / 0'30" / 6.3
IMDB-MULTI         74 / 40 / 0.54                          7'41" / 1'54" / 4.0
REDDIT-BINARY      326 / 56 / 0.17                         TO / 2'35" / ≥100.0
REDDIT-MULTI5K     952 / 162 / 0.17                        OOM / 9'51" / –
REDDIT-MULTI12K    1788 / 347 / 0.19                       OOM / 29'55" / –

E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression, together with the data compression ratio.³ We also estimate the benefit of the relational compression from a computational point of view: we report the runtime for 1 run with and without compression, together with the speedup factor.
For the purpose of this experiment, all tests were run on a computer with two 8-core Intel Xeon E5-2665 processors and 94 GB RAM. Uncompressed datasets which exhausted our server's memory during the test are marked as "OOM" (out of memory) in the table, while those which exceeded a time limit of 100 times the time needed for the compressed version are marked as "TO" (timeout).
³ The sizes of the uncompressed files are shown for the sole purpose of computing the data compression ratio; indeed, the latest version of our code compresses the files on the fly.
4.3 DISCUSSION
A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problem. The results for the molecule and protein datasets (see Table 5) are also in line with the current state of the art.
A2 The compression algorithm has proven to be effective in improving the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power. Moreover, the experiments on REDDIT-MULTI5K and REDDIT-MULTI12K have only been possible thanks to the size reduction operated by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files.
5 CONCLUSIONS
We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN for graph classification on 6 real-world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up the training time by a factor of at least 4. | r1xXahBNl | Interesting approach, confusing presentation. | 5: Marginally below acceptance threshold | The paper contributes to recent work investigating how neural networks can be used on graph-structured data. As far as I can tell, the proposed approach is the following:
1. Construct a hierarchical set of "objects" within the graph. Each object consists of multiple "parts" from the set of objects in the level below. There are potentially different ways a part can be part of an object (the different \pi labels), which I would maybe call "membership types". In the experiments, the objects at the bottom level are vertices. At the next level they are radius 0 (just a vertex?) and radius 1 neighborhoods around each vertex, and the membership types here are either "root", or "element" (depending on whether a vertex is the center of the neighborhood or a neighbor). At the top level there is one object consisting of all of these neighborhoods, with membership types of "radius 0 neighborhood" (isn't this still just a vertex?) or "radius 1 neighborhood".
2. Every object has a representation. Each vertex's representation is a one-hot encoding of its degree. To construct an object's representation at the next level, the following scheme is employed:
a. For each object, sum the representation of all of its parts having the same membership type.
b. Concatenate the sums obtained from different membership types.
c. Pass this vector through a multi-layer neural net.
I've provided this summary mainly because the description in the paper itself is somewhat hard to follow, and relevant details are scattered throughout the text, so I'd like to verify that my understanding is correct.
Some additional questions I have that weren't clear from the text: how many layers and hidden units were used? What are the dimensionalities of the representations used at each layer? How is final classification performed? What is the motivation for the chosen "ego-graph" representation?
The proposed approach is interesting and novel, the compression technique appears effective, and the results seem compelling. However, the clarity and structure of the writing are quite poor. It took me a while to figure out what was going on: the initial description is provided without any illustrative examples, and it required jumping around the paper to figure out, for example, how the \pi labels are actually used. Important details around network architecture aren't provided, and very little in the way of motivation is given for many of the choices made. Were other choices of decomposition/object-part structures investigated, given the generality of the shift-aggregate-extract formulation? What motivated the choice of "ego-graphs"? Why one-hot degrees for the initial attributes?
Overall, I think the paper contains a useful contribution on a technical level, but the presentation needs to be significantly cleaned up before I can recommend acceptance. | 3: The reviewer is fairly confident that the evaluation is correct |
S1Y0td9ee | ICLR.cc/2017/conference | 2017 | Shift Aggregate Extract Networks | ["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"] | The Shift Aggregate Extract Network SAEN is an architecture for learning representations on social network data.
SAEN decomposes input graphs into hierarchies made of multiple strata of objects.
Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts.
We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups.
Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art. | ["Supervised Learning"] | ABSTRACTThe Shift Aggregate Extract Network ( SAEN ) is an architecture for learning repre-sentations on social network data. SAEN decomposes input graphs into hierarchiesmade of multiple strata of objects. Vector representations of each object are learntby applying shift,aggregate andextract operations on the vector representationsof its parts. We propose an algorithm for domain compression which takes ad-vantage of symmetries in hierarchical decompositions to reduce the memory us-age and obtain significant speedups. Our method is empirically evaluated on realworld social network datasets, outperforming the current state of the art.1 I NTRODUCTIONMany different problems in various fields of science require the classification of structured data ,i.e. collections of objects bond together by some kind of relation. A natural way to represent suchstructures is through graphs, which are able to encode both the individual objects composing thecollection (as vertices) and the relationships between them (as edges). A number of approaches tothe graph classification problem has been studied in graph kernel and neural network literature.Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel,2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave,2010). The similarity between two graphs is then computed by comparing the respective sets ofparts. Methods based on recursive neural networks unfold a neural network over input graphs andlearn vector representations of their nodes employing backpropagation though structure (Goller &Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as nat-ural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003).An advantage of recursive neural networks over graph kernels, is that the vector representations ofthe input graphs are learnt rather than handcrafted.Learning on social network data can be considerably hard due to their peculiar structure: as opposedto chemical compounds and parse trees, the structure of social network graphs is highly irregular.Indeed in social networks it is common to have nodes in the same graph whose degree differs byorders of magnitude. This poses a significant challenge for the substructure matching approach usedby some graph kernels as the variability in connectivity generates a large number of unique patternsleading to diagonally dominant kernel matrices.We propose Shift Aggregate Extract Networks ( SAEN ), a neural network architecture for learningrepresentations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiplestrata of objects. 
Objects in each stratum are connected by “part-of” relations to the objects to thestratum above.In case we wish to classify graphs we can use an H-hierarchical decomposition in which the topstratum contains the graph Gthat we want to classify, while the intermediate strata contain subgraphsofG, subgraphs of subgraphs of Gand so on, until we reach the bottom stratum which contains theverticesvofG.1Under review as a conference paper at ICLR 2017UnlikeR-convolution relations in kernel methods (which decompose objects into the set of theirparts),H-hierarchical decompositions are deep as they can represent the parts of the parts of anobject.Recursive neural networks associate to the vertices of the input graphs vector representations impos-ing that they have identical dimensions. Moreover, the propagation follows the edge connectivityand weights are shared over the whole input graph. If we consider that vector representations ofnodes (whose number of parents can differ by orders of magnitude) must share the same weights,learning on social network data with recursive neural networks might be nontrivial.SAEN compensates the limitations of recursive neural networks by adding the following degrees offlexibility:1. the SAEN computation schema unfolds a neural network over H-decompositions instead of theinput graph,2.SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratumbasis instead of globally.Indeed SAEN allows to use vector representations of different sizes for different strata of objects(e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computesthe vector representation of each object by applying shift,aggregate andextract operations on thevector representations of its parts.Another contribution of this paper is the introduction of a domain compression algorithm, that weuse in our experiments to reduce memory usage and runtime. Domain compression collapses objectsin the same stratum of an H-hierarchical decomposition into a compressed one whenever theseobjects are indistinguishable for the SAEN computation schema. In particular objects made of thesame sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchicaldecomposition we store counts on symmetries adopting some mathematical results from lifted linearprogramming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of thework of Sperduti & Starita (1997) in which common substructures of recursive neural networks arecollapsed in order to reduce the computational cost.2 S HIFT -AGGREGATE -EXTRACT NEURAL NETWORKSWe propose a neural network architecture that takes as input an undirected attributed graph G=(V,E,X )whereVis the vertex set, E⊆V×Vis the edge set, and X={xv∈Rp}v∈Vis aset ofp-dimensional vertex attributes. When vertices do not have associated attributes (for examplethis happens in some of the social network datasets of §4.1), we can set xvto some vertex invariantsuch as node centrality or betweenness.2.1H-HIERARCHICAL DECOMPOSITIONSMost graph kernels decompose graphs into parts by using an R-convolution relation (Haussler,1999). We extend this approach by decomposing graphs into a hierarchy ofπ-parametrized “partof” relations. Formally, an H-hierarchical decomposition is a pair ({Sl}Ll=0,{Rl,π}Ll=1)where:•{Sl}Ll=0are disjoint sets of objects Slcalled strata, or levels of the hierarchy. The bottom stratumS0contains non-decomposable objects (e.g. 
individual vertices), while the other strata Sl, l=1,...,L contain composite objects, oi∈Sl, whose parts oj∈Sl−1belong to the preceding stratum,Sl−1.•{Rl,π}Ll=1is a set ofl,π-parametrizedRl,π-convolution relations. A pair (oi,oj)∈Sl×Sl−1belongs toRl,πiff “ojis part ofoiwith membership type π”. For notational convenience, the partsofoiare denoted asR−1l,π(oi) ={oj|(oj,oi)∈Rl,π}.The membership type πis used to represent the roles of the parts of an object. For example, wecould decompose a graph as a multiset of π-neighborhood subgraphs1in whichπis the radius ofthe neighborhoods (see Figure 1 on the left). Another possible use of the πmembership type is to1Ther-neighborhood subgraph (or ego graph) of a vertex vin a graph Gis the induced subgraph of Gconsisting of all vertices whose shortest-path distance from vis at most r.2Under review as a conference paper at ICLR 2017Ego Graph⇡:ROOT⇡:ELEM⇡:ELEMGraph⇡:0⇡:0⇡:0⇡:0⇡:1⇡:1⇡:1⇡:1⇡:1Ego graph (stratumS1) decomposed intovertices (stratumS2).⇡:0Root of the ego graph.Other vertices of the ego graph.Ego graphs of radius 0.Ego graphs of radius 1.Graph (stratumS2) decomposed into ego graphs ofradius 0 and 1 (stratumS1).Figure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in§4.2).On the left we decompose a graph into rooted ego graphs of radius 0and1, while on the right wedecompose an ego graph into the set of its vertices. The directed arrows represent “part of” relationslabeled with their membership type π. The membership type πrepresents the radius π= 0,1of theego graphs (decomposition on the left) and the role (i.e. π=ROOT,ELEM ) of a vertex in the egograph (decomposition on the right) respectively.distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on theright).AnH-hierarchical decomposition is a multilevel generalization of R-convolution relations, and itreduces to anR-convolution relation for L= 1.2.2 S HIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONSWe propose Shift Aggregate Extract Network ( SAEN ) to learn vector representations for all theobjects of all the strata {Sl}Ll=0in anH-hierarchical decomposition. SAEN unfolds a neural net-work architecture over an H-hierarchical decomposition by using the Shift Aggregate Extract ( SAE)schema.According to the SAE schema the vector representation of each object in the H-hierarchical decom-position is either computed by applying a neural network on the vertex attributes (for the objects inbottom stratum) or defined in terms of the vector representations of its parts (for the other objects).More formally, the SAE schema associates a dl-dimensional representation hi∈Rdlto each objectoi∈Slof theH-hierarchical decomposition according to the following formula:hi=f0(xvi; Θ0) ifoi∈S0fl/parenleftBigg/summationdisplayπ∈Πl/summationdisplayoj∈R−1l,π(oi)(zπ⊗hj)/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipuprightShift/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipuprightAggregate; Θl/parenrightBigg/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipuprightExtractotherwise(1)wherefl(·; Θl), l= 0,...,L are multilayer neural networks with parameters Θl.With respect to the base case (first branch of Eq. 
1) we have that each object oiin the bottom stratumS0is in one-to-one correspondence with the vertices vi∈Vof the graph that we are decomposing.Indeed the vector representations hiare computed by evaluating f0(·; Θ0)in correspondence of thevertex attributes xvi∈X.The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract ( SAE) schema:•Shift : each part representation hj∈Rdl−1is remapped into a space R|Πldl−1|made of|Πl|slots,where each slot has dimension dl−1. This transformation shifts part representations hjby usingthe Kronecker product ⊗between an indicator vector zπ∈R|Πl|and the vector representation hjof partoj∈Sl−1. The indicator vector zπ∈R|Πl|defined aszi=/braceleftBig1ifi=π0otherwise.and it is used to3Under review as a conference paper at ICLR 2017whole graphego graphpatternsvertices....domaincompressionoriginal graphcompressed graph.....compressedH-decomposition.....originalH-decompositionFigure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from theIMDB -BINARY dataset (see§4.1) together with its compressed version.make sure that vector representations hjof object parts will fall in the same slot if and only if theyhave the same membership type π.•Aggregate : the shifted representations (zπ⊗hj)of the partsojare then aggregated with a sum.•Extract : the aggregated representation is compressed to a dl-dimensional space by a Θl-parametrized nonlinear map fl(·,Θl) :R|Πldl−1|→Rdlimplemented with a multilayer neuralnetwork.The shift and aggregate steps, that we have seen so far, are identical to those used in kernel designwhen computing the explicit feature of a kernel k(x,z)derived from a sum/summationtextπ∈Πkπ(x,z)of basekernelskπ(x,z), π∈Π. In principle, it would be indeed possible to turn SAEN into a kernel methodby removing the extraction step Efrom the SAEschema. However, such an approach would increasethe dimensionality of the feature space by a multiplicative factor |Πl|for each level lof theH-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN ,the feature space growth is prevented by exploiting a distributed representation (via a multilayeredneural network) during the Estep of the SAE schema. As a result, SAEN can easily cope with H-hierarchical decompositions consisting of multiple strata.2.3 E XPLOITING SYMMETRIES FOR DOMAIN COMPRESSIONIn this section we propose a technique, called domain compression , which allows to save memoryand speedup the SAEN computation. Domain compression exploits symmetries in H-hierarchical de-compositions by collapsing equivalent objects in each stratum. The greater the number of collapsedobjects the highest the compression ratio.Two objects a,bin a stratum Slare collapsable a∼bif they share the same representation (i.e.ha=hb) for all the possible values of Θl. A compressed stratum Scomplis the quotient set Sl/∼ofstratumSlw.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in thebottom stratum S0are categorical, so that the same vector representation can be shared by multipleelements with non-zero probability.2While objects in the bottom stratum S0are collapsable whentheir attributes are identical, for all the other strata Sl, l= 1,...,L , objects are collapsable if theyare made by the same sets of parts for all the membership types π.In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchicaldecomposition ( EGNN , described in§4.2). 
On the left we show the H-hierarchical decompositionof a graph taken from the IMDB -BINARY dataset (see§4.1) together with its compressed version onthe right.2.3.1 D OMAIN COMPRESSION ALGORITHMIn order to compress H-hierarchical decompositions we adapt the lifted linear programming tech-nique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M∈Rn×phas2Vectors of real valued attributes could be discretized using clustering techniques. However, we leavediscretization in SAEN to future works.4Under review as a conference paper at ICLR 2017m≤ndistinct rows it can be decomposed as the product DMcompwhereMcompis a compressedversion ofMin which the distinct rows of Mappear exactly once. The Boolean decompressionmatrix,D, encodes the collapsibility relation among the rows of Mso thatDij= 1iff theithrowofMfalls in the equivalence class jof∼. A pseudo-inverse CofDcan be computed by dividingthe rows ofD/latticetopby their sum (where D/latticetopis the transpose of D).Example 1 If we look at matrix Min Eq. 2 we notice that row 1and4share the encoding [0,0,0],rows 3and5share the encoding [1,1,0]while the encoding [1,0,1]appears only once at row 2.MatrixMcompis the compressed version of M.M=0 0 01 0 11 1 00 0 01 1 0Mcomp=/bracketleftBigg0 0 01 0 11 1 0/bracketrightBiggD=1 0 00 1 00 0 11 0 00 0 1C=/bracketleftBigg1/20 0 1/200 1 0 0 00 0 1/20 1/2/bracketrightBigg(2)MatrixMcan be expressed as the matrix product between the decompression matrix Dand thecompressed version of Mcomp(i.e.M=DMcomp), while the matrix multiplication between thecompression matrix Cand theMleads to the compressed matrix Mcomp(i.e.Mcomp=CM).To apply domain compression we rewrite Eq. 1 in matrix form as follows:Hl=f0(X; Θ0)/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright|S0|×d0ifl= 0fl/bracketleftbigRl,1,...,Rl,π,...,Rl,|Πl|/bracketrightbig/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright|Sl|×|Πl||Sl−1|Hl−1... 0.........0... Hl−1/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright|Πl||Sl−1|×|Πl|dl−1; Θl/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright|Sl|×dlotherwise(3)where:•Hl∈R|Sl|×dlis the matrix that represents the dl-dimensional encodings of the objects in Sl.The rows of Hlare the vector representations hiin Eq. 1, while the rows of Hl−1are the vectorrepresentations hjin Eq. 1;•X∈R|S0|×pis the matrix that represents the p-dimensional encodings of the vertex attributes inV(i.e. the rows of Xare the xviof Eq. 1);•fl(·; Θl)is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;•Rl,π∈R|Sl|×|Sl−1|∀π∈Πlare the matrix representations of the Rl,π-convolution relations ofEq. 1 whose elements are (Rl,π)ij= 1if(oj,oi)∈Rl,πand0otherwise.Domain compression on Eq. 3 is performed by the DOMAIN -COMPRESSION procedure (see Algo-rithm 3) that takes as input the attribute matrix Xand the part-of matrices Rl,πand returns theircompressed versions Xcompand theRcompl,πrespectively. The algorithm starts by invoking (line 1)the procedure COMPUTE -CDonXto obtain the compression and decompression matrices C0andD0respectively. The compression matrix C0is used to compress X(line 2) then we start iteratingover the levels l= 0,...,L of theH-hierarchical decomposition (line 4) and compress the Rl,πmatrices. The compression of the Rl,πmatrices is done by right-multiplying them by the decom-pression matrix Dl−1of the previous level l−1(line 5). In this way we collapse the parts of relationRl,π(i.e. 
the columns of Rl,π) as these were identified in stratum Sl−1as identical objects (i.e.those objects corresponding to the rows of XorRl−1,πcollapsed during the previous step). Theresult is a list Rcolcomp= [Rl,πDl−1,∀π= 1,...,|Πl|]of column compressed Rl,π−matrices.We proceed collapsing equivalent objects in stratum Sl, i.e. those made of identical sets of parts:we find symmetries in Rcolcompby invoking COMPUTE -CD(line 6) and obtain a new pair Cl,Dlof compression, and decompression matrices respectively. Finally the compression matrix Clis ap-plied to the column-compressed matrices in Rcolcompin order to obtain the Πlcompressed matrices5Under review as a conference paper at ICLR 2017DOMAIN -COMPRESSION (X,R)1C0,D0=COMPUTE -CD(X)2Xcomp=C0X/ /Compress the Xmatrix.3Rcomp={}/ /Initialize an empty container for compressed matrices.4forl= 1toL5Rcolcomp= [Rl,πDl−1,∀π= 1,...,|Πl|]/ /column compression6Cl,Dl=COMPUTE -CD(Rcolcomp)7 forπ= 1to|Πl|8 Rcompl,π=ClRcolcompπ / /row compression9returnXcomp,RcompFigure 3: DOMAIN -COMPRESSIONof stratumSl(line 8). Algorithm 3 allows us to compute the domain compressed version of Eq. 3which can be obtained by replacing: XwithXcomp=C0X,Rl,πwithRcompl,π=ClRl,πDl−1andHlwithHcompl. Willing to recover the original encodings Hlwe just need to employ the decom-pression matrix Dlon the compressed encodings Hcompl, indeedHl=DlHcompl.As we can see by substituting SlwithScompl, the more are the symmetries (i.e. when |Scompl|/lessmuch|Sl|) the greater the domain compression will be.3 R ELATED WORKSWhen learning with graph inputs two fundamental design aspects that must be taken into account are:the choice of the pattern generator and the choice of the matching operator. The former decomposesthe graph input in substructures while the latter allows to compare the substructures.Among the patterns considered from the graph kernel literature we have paths, shortest paths,walks (Kashima et al., 2003), subtrees (Ramon & G ̈artner, 2003; Shervashidze et al., 2011) andneighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs GandG/primeiscomputed by counting the number of matches between their common the substructures (i.e. a kernelon the sets of the substructures). The match between two substructures can be defined by usinggraph isomorphism or some other weaker graph invariant.When the number of substructures to enumerate is infinite or exponential with the size of the graph(perhaps this is the case for random walks and shortest paths respectively) the kernel between thetwo graphs is computed without generating an explicit feature map. Learning with an implicit fea-ture map is not scalable as it has a space complexity quadratic in the number of training examples(because we need to store in memory the gram matrix).Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel ( WLST ) (Shervashidze et al.,2011) and the Neighborhood Subgraph Pairwise Distance Kernel ( NSPDK ) (Costa & De Grave,2010) deliberately choose a pattern generator that scales polynomially and produces an explicitfeature map. However the vector representations produced by WLST and NSPDK are handcraftedand not learned.A recent work by Yanardag & Vishwanathan (2015) proposes to uses pattern generators such asgraphlets, shortest paths and WLST subtrees to transform input graphs into documents. The gener-ated substructures are then treated as words and embedded in the Euclidean space with a CBOWor a Skip-gram model. 
The deep upgrade of existing graph kernels is performed by reweighing thecounts of the substructures by the square root of their word-vector self similarity.Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks CNNs forimages to graphs. While the receptive field of a CNN is usually a square window (Niepert et al.,2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specifictemporal or spatial order, (Niepert et al., 2016) employ vertex invariants to impose an order on thenodes of the subgraphs/receptive fields.6Under review as a conference paper at ICLR 20174 E XPERIMENTAL EVALUATIONWe answer to the following experimental questions:Q1How does SAEN compare to the state of the art?Q2Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime?4.1 D ATASETSIn order to answer the experimental questions we tested our method on six publicly available datasetsfirst proposed by Yanardag & Vishwanathan (2015).•COLLAB is a dataset where each graph represent the ego-network of a researcher, and the task isto determine the field of study of the researcher between High Energy Physics ,Condensed MatterPhysics andAstro Physics .•IMDB -BINARY ,IMDB -MULTI are datasets derived from IMDB where in each graph the ver-tices represent actors/actresses and the edges connect people which have performed in the samemovie. Collaboration graphs are generated from movies belonging to genres Action andRomanceforIMDB -BINARY andComedy ,Romance andSci-Fi forIMDB -MULTI , and for each actor/actress inthose genres an ego-graph is extracted. The task is to identify the genre from which the ego-graphhas been generated.•REDDIT -BINARY ,REDDIT -MULTI 5K,REDDIT -MULTI 12Kare datasets where each graph is de-rived from a discussion thread from Reddit. In those datasets each vertex represent a distinct userand two users are connected by an edge if one of them has responded to a post of the other inthat discussion. The task in REDDIT -BINARY is to discriminate between threads originating froma discussion-based subreddit ( TrollXChromosomes ,atheism ) or from a question/answers-basedsubreddit ( IAmA ,AskReddit ). The task in REDDIT -MULTI 5Kand REDDIT -MULTI 12Kis a multi-class classification problem where each graph is labeled with the subreddit where it has originated(worldnews, videos, AdviceAnimals, aww, mildlyinteresting forREDDIT -MULTI 5KandAskReddit,AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned,worldnews, TrollXChromosomes forREDDIT -MULTI 12K).4.2 E XPERIMENTSIn our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network(EGNN ), that mimics the graph kernel NSPDK with the distance parameter set to 0.Before applying EGNN we turn unattributed graphs (V,E)into attributed graphs (V,E,X )by an-notating their vertices v∈Vwith attributes xv∈X. We label vertices vofGwith their degree andencode this information into the attributes xvby employing the 1-hot encoding.EGNN decomposes attributed graphs G= (V,E,X )into a 3levelH-hierarchical decompositionwith the following strata (see Figure 1 for a pictorial representation of EGNN ):•stratumS0contains objects ovthat are in one-to-one correspondence with the vertices v∈V.•stratumS1containsvroot-rootedr-neighborhood subgraphs (i.e. ego graphs) e= (vroot,Ve,Ee)of radiusr= 0,1,...,R and has part-of alphabet Π1={ROOT,ELEM}. 
E1 We experimented with SAEN, applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks using the Adam algorithm to minimize a cross-entropy loss.

The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset, and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation. (A sketch of this evaluation protocol is given below.)
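For concreteness, here is a minimal scikit-learn sketch of 10-times 10-fold cross-validation; make_model and its scikit-learn-style fit/score interface are our assumptions (the number of epochs, fixed per dataset, would be set inside make_model):

    import numpy as np
    from sklearn.model_selection import RepeatedStratifiedKFold

    def ten_times_ten_fold(make_model, X, y):
        # 10 repeats of stratified 10-fold CV = 100 train/test runs in total.
        rskf = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
        scores = []
        for train_idx, test_idx in rskf.split(X, y):
            model = make_model()                      # fresh network per run
            model.fit(X[train_idx], y[train_idx])
            scores.append(model.score(X[test_idx], y[test_idx]))
        return np.mean(scores), np.std(scores)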
Figure 4: Comparison of accuracy results.

DATASET            DGK                        PSCN                       SAEN
                   (Yanardag et al., 2015)    (Niepert et al., 2016)     (our method)
COLLAB             73.09 ± 0.25               72.60 ± 2.16               75.63 ± 0.31
IMDB-BINARY        66.96 ± 0.56               71.00 ± 2.29               71.26 ± 0.74
IMDB-MULTI         44.55 ± 0.52               45.23 ± 2.84               49.11 ± 0.64
REDDIT-BINARY      78.04 ± 0.39               86.30 ± 1.58               86.08 ± 0.53
REDDIT-MULTI5K     41.27 ± 0.18               49.10 ± 0.70               52.24 ± 0.38
REDDIT-MULTI12K    32.22 ± 0.10               41.32 ± 0.42               46.72 ± 0.23

Figure 5: Comparison of accuracy on bioinformatics datasets.

DATASET    PSCN (k = 10E)             SAEN
           (Niepert et al., 2016)     (our method)
MUTAG      92.63 ± 4.21               84.99 ± 1.82
PTC        60.00 ± 4.82               57.04 ± 1.30
NCI1       78.59 ± 1.89               77.80 ± 0.42
PROTEINS   75.89 ± 2.76               75.31 ± 0.70
D&D        77.12 ± 2.41               77.69 ± 0.96

The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016). Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness, in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).

Table 1: Comparison of sizes and runtimes of the datasets before and after the compression.

DATASET            SIZE (MB)                      RUNTIME
                   ORIGINAL   COMP.   RATIO       ORIGINAL   COMP.     SPEEDUP
COLLAB             1190       448     0.38        43' 18"    8' 20"    5.2
IMDB-BINARY        68         34      0.50        3' 9"      0' 30"    6.3
IMDB-MULTI         74         40      0.54        7' 41"     1' 54"    4.0
REDDIT-BINARY      326        56      0.17        TO         2' 35"    ≥ 100.0
REDDIT-MULTI5K     952        162     0.17        OOM        9' 51"    –
REDDIT-MULTI12K    1788       347     0.19        OOM        29' 55"   –

E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression, together with the data compression ratio.³ We also estimate the benefit of the relational compression from a computational time point of view, reporting the runtime of one run with and without compression, together with the speedup factor. For the purpose of this experiment, all tests were run on a computer with two 8-core Intel Xeon E5-2665 processors and 94 GB RAM. Uncompressed datasets which exhausted our server's memory during the test are marked as "OOM" (out of memory) in the table, while those that exceeded a time limit of 100 times the time needed for the compressed version are marked as "TO" (timeout).

³ The sizes of the uncompressed files are shown for the sole purpose of computing the data compression ratio. Indeed, the latest version of our code compresses the files on the fly.

4.3 DISCUSSION

A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problem. The results for the molecule and protein datasets (see Table 5) are also in line with the current state of the art.

A2 The compression algorithm has proven effective in improving the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power. Moreover, the experiments on REDDIT-MULTI5K and REDDIT-MULTI12K were only possible thanks to the size reduction operated by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files.

5 CONCLUSIONS

We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN to graph classification on 6 real-world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up the training time by a factor of at least 4. | SJP14kfEx | Might be something good here, but key details are missing. | 3: Clear rejection | Some of the key details in this paper are very poorly explained or not even explained at all. The model sounds interesting and there may be something good here, but it should not be published in its current form.
Specific comments:
The description of the R_l,pi convolutions in Section 2.1 was unclear. Specifically, I wasn't confident that I understood what the labels pi represented.
The description of the SAEN structure in section 2.2 was worded poorly. My understanding, based on Equation 1, is that the 'shift' operation is simply a summation of the representations of the member objects, and that the 'aggregate' operation simply concatenates the representations from multiple relations. In the 'shift' step, it seems more appropriate to average over the object's members' representations h_j, rather than sum over them.
The compression technique presented in Section 2.3 requires that multiple objects at a level have the same representation. Why would this ever occur, given that the representations are real valued and high-dimensional? The text is unintelligible: "two objects are equivalent if they are made by same sets of parts for all the pi-parameterizations of the R_l,pi decomposition relation."
The 'ego graph patterns' in Figure 1 and 'Ego Graph Neural Network' used in the experiments are never explained in the text, and no references are given. Because of this, I cannot comment on the quality of the experiments. | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
HJKkY35le | ICLR.cc/2017/conference | 2017 | Mode Regularized Generative Adversarial Networks | ["Tong Che", "Yanran Li", "Athul Jacob", "Yoshua Bengio", "Wenjie Li"] | Although Generative Adversarial Networks achieve state-of-the-art results on a
variety of generative tasks, they are regarded as highly unstable and prone to miss
modes. We argue that these bad behaviors of GANs are due to the very particular
functional shape of the trained discriminators in high dimensional spaces, which
can easily make training stuck or push probability mass in the wrong direction,
towards that of higher concentration than that of the data generating distribution.
We introduce several ways of regularizing the objective, which can dramatically
stabilize the training of GAN models. We also show that our regularizers can help
the fair distribution of probability mass across the modes of the data generating
distribution during the early phases of training, thus providing a unified solution
to the missing modes problem. | ["Deep learning", "Unsupervised Learning"] | ABSTRACT

Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.

1 INTRODUCTION

Generative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potential on various tasks, such as image generation, image super-resolution, 3D object generation, and video prediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wu et al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks.

Despite their success, GANs are generally considered very hard to train, due to training instability and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the missing modes problem, which is widely conceived as a major problem in training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.

This issue has been the subject of several recent papers proposing tricks and new architectures to stabilize GAN training and encourage sample diversity. However, we argue that a general cause behind these problems is the lack of control over the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of the real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds.
In fact, the shape of the discriminator function in the data space can be very non-linear, with bad plateaus and wrong maxima, and this can therefore hurt the training of GANs (Figure 1).

Figure 1: Samples with very high discrimination values (D=1.0) in a DCGAN model trained on the CelebA dataset.

To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the fairness of the distribution of probability mass. These metrics are shown to be more robust in judging complex generative models, including both well-trained and collapsed ones.

Regularizers usually bring a trade-off between model variance and bias. Our results show that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing modes problem all at once, with positive or at least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm which can even improve sample quality as compared to the DCGAN baseline.

2 RELATED WORK

The GAN approach was initially proposed by Goodfellow et al. (2014), where both the generator and the discriminator are defined by deep neural networks. In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector that allows the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map (Wang & Gupta, 2016), image synthesis from text (Reed et al., 2016) and from edge maps (Isola et al., 2016), real-time image manipulation (Zhu et al., 2016), temporal image generation (Zhou & Berg, 2016; Saito & Matsumoto, 2016; Vondrick et al., 2016), and texture synthesis, style transfer, and video stylization (Li & Wand, 2016).

Researchers also aim at stretching GAN's limits to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework to GANs to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements in unsupervised image representation learning. Another line of work aimed at improving GANs is through feature learning, including features from the latent space and the image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses in training objectives for generative models.
Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the features learned by the discriminator of the GAN for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for sample visual fidelity. Recent literature has also shown impressive results on image super-resolution, inferring photo-realistic natural images for 4x upscaling factors (Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016).

Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature matching technique to stabilize GAN training, in which the generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016). In addition to feature distances, Dosovitskiy & Brox (2016) found that a counterpart loss in image space further improves GAN training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces, produced either by the application of the encoder to the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered orthogonal to our work.

Our work is related to VAEGAN (Larsen et al., 2015) in that it trains an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as a regularizer to penalize missing modes, thus improving GAN training stability and sample quality. We demonstrate detailed differences from various aspects in Appendix D.

3 MODE REGULARIZERS FOR GANS

The GAN training procedure can be viewed as a non-cooperative two-player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. The generator G then has to take advantage of the local gradient ∇ log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold.

We now take a closer look at the root cause of the instabilities in training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014); Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), training the discriminator is equivalent to training a characteristic function that is very close to 1 on the data manifold and 0 on the generation manifold.
In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example, Denton et al. (2015) noted a common failure pattern in training GANs, the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples D is nearly zero. In such cases, the generator receives no gradient to improve itself.¹

Another important problem in training GANs is mode missing. In theory, if the generated data and the real data come from the same low-dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator, and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. However, in practice, since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of the discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes.

3.1 GEOMETRIC METRICS REGULARIZER

Compared with the objective of the GAN generator, the optimization targets of supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target of the GAN generator is a learned discriminator, while in supervised models the optimization targets are distance functions with nice geometric properties. The latter usually provide much easier training gradients than the former, especially at the early stages of training.

¹ This problem exists even when we use −log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and in our experiments.

Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z): Z → X generates samples by sampling first from a fixed prior distribution in the space Z, followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x): X → Z. Assuming d is some similarity metric in the data space, we add E_{x∼p_d}[d(x, G∘E(x))] as a regularizer, where p_d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error.

In practice, there are many options for the distance measure d: for instance, the pixel-wise L2 distance, or the distance between features learned by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016).

The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds using other geometric distances, say, an L_s metric.
The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.

3.2 MODE REGULARIZER

In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum Σ_i ∇ log D(G(z_i)). The missing modes problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect, so the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes.

Figure 2: Illustration of the missing modes problem.

As an example, consider the situation in Figure 2. For most z, the gradient of the generator ∇ log D(G(z)) pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gradients to push itself towards the minor mode M2. However, it is possible that such z has low or zero probability under the prior distribution p_0.

Given this observation, consider a regularized GAN model with the metric regularizer. Assume M_0 is a minor mode of the data generating distribution. For x ∈ M_0, we know that if G∘E is a good autoencoder, G(E(x)) will be located very close to mode M_0. Since there are sufficient training examples of mode M_0 in the training data, we add the mode regularizer −E_{x∼p_d}[log D(G∘E(x))] to the optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve a fair distribution of probability mass across different modes.

In short, our regularized optimization targets for the generator and the encoder become:

T_G = −E_z[log D(G(z))] + E_{x∼p_d}[λ1 d(x, G∘E(x)) − λ2 log D(G∘E(x))]    (1)

T_E = E_{x∼p_d}[λ1 d(x, G∘E(x)) − λ2 log D(G∘E(x))]    (2)

(A code sketch of these objectives is given below.)
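As a concrete reading of Eqs. (1)-(2), here is a minimal PyTorch sketch of the two targets as losses to be minimized; D, G and E are assumed to be torch modules with D outputting probabilities, the pixel-wise L2 distance is used for d, and the default weights are the loss weights later used in our grid search:

    import torch
    import torch.nn.functional as F

    def regularized_targets(D, G, E, x_real, z, lam1=0.2, lam2=0.4):
        eps = 1e-8                                    # numerical stability inside log
        x_rec = G(E(x_real))                          # G o E(x)
        reg = lam1 * F.mse_loss(x_rec, x_real) \
              - lam2 * torch.log(D(x_rec) + eps).mean()
        t_g = -torch.log(D(G(z)) + eps).mean() + reg  # Eq. (1), minimized over G
        t_e = reg                                     # Eq. (2), minimized over E
        return t_g, t_e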
3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS

On some large-scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of the samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs which is very stable and much easier to tune for producing good samples.

The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.

An example of manifold-diffusion training of a GAN (MDGAN for short) is as follows. In the manifold step we train a discriminator D1 which separates the samples x and G∘E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[−log D1(G∘E(x)) + d(x, G∘E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D2 between the distributions G(z) and G∘E(x), and we train G to maximize log D2(G(z)). Since these two distributions are now nearly on the same low-dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of the generated samples.

3.4 EVALUATION METRICS FOR MODE MISSING

In order to estimate both the missing modes and the sample quality in our experiments, we used several different metrics in different experiments instead of human annotators. The Inception score (Salimans et al., 2016) was considered a good assessment of sample quality on a labelled dataset:

exp(E_x[KL(p(y|x) ‖ p(y))])    (3)

where x denotes one sample, p(y|x) is the softmax output of a trained classifier over the labels, and p(y) is the overall label distribution of the generated samples. The intuition behind this score is that a strong classifier usually has high confidence for good samples. However, the Inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image: although the model is very bad, it can have a perfect Inception score, because p(y|x) can have a high entropy and p(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment of both the visual quality and the variety of samples, the MODE score:

exp(E_x[KL(p(y|x) ‖ p*(y))] − KL(p*(y) ‖ p(y)))    (4)

where p*(y) is the distribution of labels in the training data. In our human evaluation experience, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in a single metric.

However, on datasets without labels (LSUN), or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third-party discriminator between the real data and the data generated by the model. It is similar to the GAN discriminator, but it is not used to train the generator. We can view the output of the discriminator as an estimator of the quantity (see Goodfellow et al. (2014) for a proof):

D(s) ≈ p_d(s) / (p_d(s) + p_g(s))    (5)

where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D from learning a perfect 0-1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D. After training, we test D on the test set T of the real dataset. If for a test sample t ∈ T the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes. (A sketch of the MODE score computation is given below.)
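The following is a minimal NumPy sketch of Eq. (4), with Eq. (3) recoverable as the special case p*(y) = p(y); probs holds the classifier's softmax outputs on generated samples, and the clipping constant is our own choice:

    import numpy as np

    def kl(p, q, eps=1e-12):
        p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
        return float(np.sum(p * np.log(p / q)))

    def mode_score(probs, p_star):
        # probs: (N, K) array of p(y|x) for N generated samples, K classes.
        # p_star: (K,) label distribution of the training data.
        p_y = probs.mean(axis=0)                       # p(y) over generated samples
        e_kl = np.mean([kl(p, p_star) for p in probs]) # E_x KL(p(y|x) || p*(y))
        return float(np.exp(e_kl - kl(p_star, p_y)))   # Eq. (4)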
4 EXPERIMENTS

4.1 MNIST

We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold.

Table 1: Grid search for hyperparameters.

nLayerG    [2, 3, 4]
nLayerD    [2, 3, 4]
sizeG      [400, 800, 1600, 3200]
sizeD      [256, 512, 1024]
dropoutD   [True, False]
optimG     [SGD, Adam]
optimD     [SGD, Adam]
lr         [1e-2, 1e-3, 1e-4]

4.1.1 GRID SEARCH FOR MNIST GAN MODELS

In order to systematically explore the effect of our proposed regularizers on GAN models, in terms of improving stability and sample quality, we ran a large-scale grid search over GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to the one proposed in Zhao et al. (2016); please refer to it for detailed explanations of these hyper-parameters.

For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits for stabilizing GANs and improving sample quality.

Figure 3: The distributions of MODE scores for GAN and Regularized GAN.

To illustrate the effect of the regularizers with different coefficients, we randomly pick an architecture and train it with different λ1 = λ2. The results are shown in Figure 4.

Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN.

4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES

In order to quantitatively study the effect of our regularizers on missing modes, we concatenate three MNIST digits into a number in [0, 999] in a single 64x64 image, and then train DCGAN as a baseline model on this 1000-mode dataset. The digits in the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST, instead of a human, to evaluate the models.

Table 2: Results for compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) substantially reduces the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (as in the Inception score).

             Set 1          Set 2          Set 3          Set 4
             #Miss   KL     #Miss   KL     #Miss   KL     #Miss   KL
DCGAN        204.7   77.9   204.3   60.2   103.4   75.9   89.3    77.8
Reg-DCGAN    32.1    62.3   71.5    58.9   42.7    68.4   31.6    67.8

Performance on the compositional experiment is measured by two metrics. #Miss is the classifier-reported number of missing modes, i.e. the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically on all four sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer in preventing the missing modes problem.

4.2 CELEBA

To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters, together with the DCGAN baseline, on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.

4.2.1 MISSING MODES ESTIMATION ON CELEBA

We also employ a third-party discriminator trained with injected noise as a metric for missing-mode estimation. To implement this, we add noise to the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters, on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode-estimator outputs can be viewed as lying on missing modes. (A sketch of this estimation step is given below.)
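Here is a minimal PyTorch sketch of this estimation step, assuming D_noisy was trained to discriminate real from generated data with Gaussian noise of standard deviation sigma added to its inputs; the decision threshold is our own illustrative choice:

    import torch

    def count_missing_mode_images(D_noisy, test_loader, sigma, thresh=0.95):
        # Count test images t with D(t) close to 1, i.e. images whose modes
        # the generator is estimated to have missed.
        n_missing = 0
        with torch.no_grad():
            for x in test_loader:
                d = D_noisy(x + sigma * torch.randn_like(x))
                n_missing += int((d > thresh).sum().item())
        return n_missing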
Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in brackets indicate the dimension of the prior z. σ denotes the standard deviation of the Gaussian noise added at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods.

σ     DCGAN (100)   DCGAN (200)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   5463          17089         754             3644            74
4.0   590           15832         42              391             13

The comparison results are shown in Table 3. Both our proposed Regularized GAN and MDGAN outperform the baseline DCGAN models in all settings. In particular, MDGAN surpasses the other models, showing its superiority at mode preservation. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than the one with 100-dimensional noise as input. By contrast, our Regularized GAN performs more consistently.

To get a better understanding of the models' performance, we want to figure out when and where these models miss modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which can thus be viewed as small modes in this setting. These three images should be considered the hardest test data for GANs to learn. Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are missed only by DCGAN. The side faces, pale faces, black skin, and berets are special attributes among these images, but our proposed MDGAN performs well on all of them.

Figure 5: Test set images that are on missing modes. Left: missed by both MDGAN and DCGAN. Right: missed only by DCGAN.

4.2.2 QUALITATIVE EVALUATION OF GENERATED SAMPLES

After the quantitative evaluation, we manually examine the samples generated by our Regularized GAN to see whether the proposed regularizer has side effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.²

Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally have sharper textures.

Both MDGAN and Regularized GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.

As to sample quality, it is worth noting that the samples from MDGAN suffer fewer distortions. With all four other models, the majority of generated samples exhibit some sort of distortion; for the samples generated by MDGAN, the level of distortion is lower than in the other four compared models. We attribute this to the help of the autoencoder, used as a regularizer, in altering the generation manifold. In this way, the generator is able to learn fine-grained details such as face edges.
As a result, MDGAN is able to reduce distortions.

² For a fair comparison, we also recommend that readers refer to the original papers (Dumoulin et al., 2016; Larsen et al., 2015; Radford et al., 2015) for the reported samples of the compared models. The ALI samples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https://github.com/Newmu/dcgan_code/

Figure 7: Side-face samples generated by Regularized GAN and MDGAN.

In terms of the missing modes problem, we instructed five individuals to conduct a human evaluation of the generated samples. They reached a consensus that MDGAN wins in terms of mode diversity. Two people pointed out that MDGAN generates a larger number of samples with side faces than the other models. We show several of these side-face samples in Figure 7. Clearly, our samples maintain acceptable visual fidelity while covering diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring benefits for both training stability and mode variety without loss of sample quality.

5 CONCLUSIONS

Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while missing modes from the data distribution or even collapsing large amounts of probability mass on some modes. Successful GAN training usually requires large amounts of human and computing effort to fine-tune the hyper-parameters in order to stabilize training and avoid collapsing. Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.

We provide systematic ways to measure and avoid the missing modes problem and to stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and, when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold.

ACKNOWLEDGEMENTS

We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, and Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of the grid search experiments on the EBGAN model, and Anders Boesen Lindbo Larsen for kindly helping us run the VAEGAN experiments. We appreciate the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, the National Natural Science Foundation of China (61672445 and 61272291), the Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6). | Skkn6YbNx | Clearly identifies and attacks a key problem in GANs | 7: Good paper, accept | This paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect.
There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative. It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem. If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a "competing" method can get a better result in some cases).
The experiments on digits and faces are good for validating the proposed regularizers. However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
HJKkY35le | ICLR.cc/2017/conference | 2017 | Mode Regularized Generative Adversarial Networks | ["Tong Che", "Yanran Li", "Athul Jacob", "Yoshua Bengio", "Wenjie Li"] | Although Generative Adversarial Networks achieve state-of-the-art results on a
variety of generative tasks, they are regarded as highly unstable and prone to miss
modes. We argue that these bad behaviors of GANs are due to the very particular
functional shape of the trained discriminators in high dimensional spaces, which
can easily make training stuck or push probability mass in the wrong direction,
towards that of higher concentration than that of the data generating distribution.
We introduce several ways of regularizing the objective, which can dramatically
stabilize the training of GAN models. We also show that our regularizers can help
the fair distribution of probability mass across the modes of the data generating
distribution during the early phases of training, thus providing a unified solution
to the missing modes problem. | ["Deep learning", "Unsupervised Learning"] | ABSTRACTAlthough Generative Adversarial Networks achieve state-of-the-art results on avariety of generative tasks, they are regarded as highly unstable and prone to missmodes. We argue that these bad behaviors of GANs are due to the very particularfunctional shape of the trained discriminators in high dimensional spaces, whichcan easily make training stuck or push probability mass in the wrong direction,towards that of higher concentration than that of the data generating distribution.We introduce several ways of regularizing the objective, which can dramaticallystabilize the training of GAN models. We also show that our regularizers canhelp the fair distribution of probability mass across the modes of the data gener-ating distribution, during the early phases of training and thus providing a unifiedsolution to the missing modes problem.1 I NTRODUCTIONGenerative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potentialon various tasks, such as image generation, image super-resolution, 3D object generation, and videoprediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wuet al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator)which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to thatof the data generating distribution. The basic scheme of the GAN training procedure is to traina discriminator which assigns higher probabilities to real data samples and lower probabilities togenerated data samples, while simultaneously trying to move the generated samples towards the realdata manifold using the gradient information provided by the discriminator. In a typical setting, thegenerator and the discriminator are represented by deep neural networks.Despite their success, GANs are generally considered as very hard to train due to training instabilityand sensitivity to hyper-parameters. On the other hand, a common failure pattern observed whiletraining GANs is the collapsing of large volumes of probability mass onto a few modes. Namely,although the generators produce meaningful samples, these samples are often from just a few modes(small regions of high probability under the data distribution). Behind this phenomenon is the miss-ing modes problem, which is widely conceived as a major problem for training GANs: many modesof the data generating distribution are not at all represented in the generated samples, yielding amuch lower entropy distribution, with less variety than the data generating distribution.This issue has been the subject of several recent papers proposing several tricks and new archi-tectures to stabilize GAN’s training and encourage its samples’ diversity. However, we argue that ageneral cause behind these problems is the lack of control on the discriminator during GAN training.We would like to encourage the manifold of the samples produced by the generator to move towardsthat of real data, using the discriminator as a metric. However, even if we train the discriminatorto distinguish between these two manifolds, we have no control over the shape of the discriminatorfunction in between these manifolds. 
In fact, the shape of the discriminator function in the dataAuthors contributed equally.1Published as a conference paper at ICLR 2017space can be very non-linear with bad plateaus and wrong maxima and this can therefore hurt thetraining of GANs (Figure 1).Figure 1: Samples with very high discrim-ination values (D=1.0) in DCGAN modeltrained on CelebA dataset.To remedy this problem, we propose a novel regu-larizer for the GAN training target. The basic ideais simple yet powerful: in addition to the gradientinformation provided by the discriminator, we wantthe generator to take advantage of other similaritymetrics with much more predictable behavior, suchas theL2norm. Differentiating these similarity met-rics will provide us with more stable gradients totrain our generator. Combining this idea with an ap-proach meant to penalize the missing modes, we pro-pose a family of additional regularizers for the GAN objective. We then design a set of metrics toevaluate the generated samples in terms of both the diversity of modes and the distribution fairnessof the probability mass. These metrics are shown to be more robust in judging complex generativemodels, including those which are well-trained and collapsed ones.Regularizers usually bring a trade-off between model variance and bias. Our results have shownthat, when correctly applied, our regularizers can dramatically reduce model variance, stabilize thetraining, and fix the missing mode problem all at once, with positive or at the least no negative effectson the generated samples. We also discuss a variant of the regularized GAN algorithm, which caneven improve sample quality as compared to the DCGAN baseline.2 R ELATED WORKThe GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator andthe discriminator are defined by deep neural networks.In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globallyincoherent images on various datasets. Mirza & Osindero (2014) enlarges GAN’s representationcapacity by introducing an extra vector to allow the generator to produce samples conditioned onother beneficial information. Motivated from this, several conditional variants of GAN has beenapplied to a wide range of tasks, including image prediction from a normal map Wang & Gupta(2016), image synthesis from text Reed et al. (2016) and edge map Isola et al. (2016), real-timeimage manipulation Zhu et al. (2016), temporal image generation Zhou & Berg (2016); Saito &Matsumoto (2016); V ondrick et al. (2016), texture synthesis, style transfer, and video stylization Li& Wand (2016).Researchers also aim at stretching GAN’s limit to generate higher-resolution, photo-realistic images.Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images ofhigh resolution. At each level of their LAPGAN, both the generator and the discriminator are convo-lutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully designs a classof deep convolutional generative adversarial networks which has led to significant improvements onunsupervised image representation learning. Another line of work aimed at improving GANs arethrough feature learning, including features from the latent space and image space. The motivation isthat features from different spaces are complementary for generating perceptual and natural-lookingimages. With this perspective, some researchers use distances between learned features as losses fortraining objectives for generative models. 
Larsen et al. (2015) combine a variational autoencoderobjective with a GAN and utilize the learned features from the discriminator in the GANs for betterimage similarity metrics. It is shown that the learned distance from the discriminator is of greathelp for the sample visual fidelity. Recent literature have also shown impressive results on imagesuper-resolution to infer photo-realistic natural images for 4x upscaling factors Ledig et al. (2016);Sønderby et al. (2016); Nguyen et al. (2016).Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015)provide a class of empirical architectural choices that are critical to stabilize GAN’s training, itwould be even better to train GANs more robustly and systematically. Salimans et al. (2016) pro-pose feature matching technique to stabilize GAN’s training. The generator is required to match thestatistics of intermediate features of the discriminator. Similar idea is adopted by Zhao et al. (2016).2Published as a conference paper at ICLR 2017In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in imagespace further improves GAN’s training stability. Furthermore, some researchers make use of infor-mation in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016).In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminatoris trained to distinguish between two joint distributions over image and latent spaces produced eitherby the application of the encoder on the training data or by the application of the generator (decoder)to the latent prior. This is in contrast with the regular GAN training, in which the discriminator onlyattempts to separate the distributions in the image space. Parallelly, Metz et al. (2016) stabilizeGANs by unrolling the optimization of discriminator, which can be considered as an orthogonalwork with ours.Our work is related to V AEGAN (Larsen et al., 2015) in terms of training an autoencoder or V AEjointly with the GAN model. However, the variational autoencoder (V AE) in V AEGAN is used togenerate samples whereas our autoencoder based losses serves as a regularizer to penalize missingmodes and thus improving GAN’s training stability and sample qualities. We demonstrate detaileddifferences from various aspects in Appendix D.3 M ODE REGULARIZERS FOR GAN SThe GAN training procedure can be viewed as a non-cooperative two player game, in which thediscriminator Dtries to distinguish real and generated examples, while the generator Gtries to foolthe discriminator by pushing the generated samples towards the direction of higher discriminationvalues. Training the discriminator Dcan be viewed as training an evaluation metric on the samplespace. Then the generator Ghas to take advantage of the local gradient rlogD(G)provided by thediscriminator to improve itself, namely to move towards the data manifold.We now take a closer look at the root cause of the instabilities while training GANs. The discrim-inator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014);Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold aredisjoint (which is true in almost all practical situations), it is equivalent to training a characteristicfunction to be very close to 1 on the data manifold, and 0 on the generation manifold. 
In order topass good gradient information to the generator, it is important that the trained discriminator pro-duces stable and smooth gradients. However, since the discriminator objective does not directlydepend on the behavior of the discriminator in other parts of the space, training can easily fail if theshape of the discriminator function is not as expected. As an example,Denton et al. (2015) noteda common failure pattern for training GANs which is the vanishing gradient problem, in which thediscriminator Dperfectly classifies real and fake examples, such that around the fake examples, Dis nearly zero. In such cases, the generator will receive no gradient to improve itself.1Another important problem while training GANs is mode missing. In theory, if the generated dataand the real data come from the same low dimensional manifold, the discriminator can help thegenerator distribute its probability mass, because the missing modes will not have near-0 probabilityunder the generator and so the samples in these areas can be appropriately concentrated towardsregions where Dis closer to 1. However, in practice since the two manifolds are disjoint, Dtendsto be near 1 on all the real data samples, so large modes usually have a much higher chance ofattracting the gradient of discriminator. For a typical GAN model, since all modes have similar Dvalues, there is no reason why the generator cannot collapse to just a few major modes. In otherwords, since the discriminator’s output is nearly 0 and 1 on fake and real data respectively, thegenerator is not penalized for missing modes.3.1 G EOMETRIC METRICS REGULARIZERCompared with the objective for the GAN generator, the optimization targets for supervised learningare more stable from an optimization point of view. The difference is clear: the optimization targetfor the GAN generator is a learned discriminator. While in supervised models, the optimizationtargets are distance functions with nice geometric properties. The latter usually provides mucheasier training gradients than the former, especially at the early stages of training.1This problem exists even when we use logD(G(z))as target for the generator, as noted by Denton et al.(2015) and our experiments.3Published as a conference paper at ICLR 2017Inspired by this observation, we propose to incorporate a supervised training signal as a regularizeron top of the discriminator target. Assume the generator G(z) :Z!Xgenerates samples by sam-pling first from a fixed prior distribution in space Zfollowed by a deterministic trainable transforma-tionGinto the sample space X. Together with G, we also jointly train an encoder E(x) :X!Z.Assumedis some similarity metric in the data space, we add Expd[d(x;GE(x))]as a regularizer,wherepdis the data generating distribution. The encoder itself is trained by minimizing the samereconstruction error.In practice, there are many options for the distance measure d. For instance, the pixel-wise L2distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by othernetworks, such as a VGG classifier. (Ledig et al., 2016)The geometric intuition for this regularizer is straight-forward. We are trying to move the generatedmanifold to the real data manifold using gradient descent. In addition to the gradient provided bythe discriminator, we can also try to match the two manifolds by other geometric distances, say,Lsmetric. 
The idea of adding an encoder is equivalent to first training a point to point mappingG(E(x))between the two manifolds and then trying to minimize the expected distance between thepoints on these two manifolds.3.2 M ODE REGULARIZERIn addition to the metric regularizer, we propose a mode regularizer to further penalize miss-ing modes. In traditional GANs, the optimization target for the generator is the empirical sumPirlogD(G(zi)). The missing mode problem is caused by the conjunction of two facts: (1)the areas near missing modes are rarely visited by the generator, by definition, thus providing veryfew examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect sothat the discriminator can take strong decisions locally and obtain a high value of Deven nearnon-missing modes.Figure 2: Illustration of missing modes problem.As an example, consider the situation in Fig-ure 2. For most z, the gradient of the generatorrlogD(G(z))pushes the generator towardsthe major mode M1. Only when G(z)is veryclose to the mode M2can the generator get gra-dients to push itself towards the minor modeM2. However, it is possible that such zis oflow or zero probability in the prior distributionp0.Given this observation, consider a regularizedGAN model with the metric regularizer. As-sumeM0is a minor mode of the data generat-ing distribution. For x2M0, we know thatifGEis a good autoencoder, G(E(x))willbe located very close to mode M0. Since thereare sufficient training examples of mode M0inthe training data, we add the mode regularizerExpd[logD(GE(x))]to our optimizationtarget for the generator, to encourage G(E(x))to move towards a nearby mode of the data generating distribution. In this way, we can achieve fairprobability mass distribution across different modes.In short, our regularized optimization target for the generator and the encoder becomes:TG=Ez[logD(G(z))] +Expd[1d(x;GE(x)) +2logD(GE(x))] (1)TE=Expd[1d(x;GE(x)) +2logD(GE(x))] (2)4Published as a conference paper at ICLR 20173.3 M ANIFOLD -DIFFUSION TRAINING FOR REGULARIZED GAN SOn some large scale datasets, CelebA for example, the regularizers we have discussed do improvethe diversity of generated samples, but the quality of samples may not be as good without care-fully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularizedGANs, which is very stable and much easier to tune for producing good samples.The proposed algorithm divides the training procedure of GANs into two steps: a manifold stepand a diffusion step. In the manifold step, we try to match the generation manifold and the realdata manifold with the help of an encoder and the geometric metric loss. In the diffusion step, wetry to distribute the probability mass on the generation manifold fairly according to the real datadistribution.An example of manifold-diffusion training of GAN (MDGAN for short) is as follows: we train adiscriminator D1which separates between the samples xandGE(x), forxfrom the data, and weoptimizeGwith respect to the regularized GAN loss E[logD1(GE(x))+d(x;GE(x))]in orderto match the two manifolds. In the diffusion step we train a discriminator D2between distributionsG(z)andGE(x), and we train Gto maximize logD2(G(z)). Since these two distributions arenow nearly on the same low dimensional manifold, the discriminator D2provides much smootherand more stable gradients. The detailed training procedure is given in Appendix A. 
3.4 EVALUATION METRICS FOR MODE MISSING

In order to estimate both the missing modes and the sample qualities in our experiments, we used several different metrics for different experiments instead of human annotators.

The inception score (Salimans et al., 2016) was considered a good assessment for sample quality from a labelled dataset:

    exp(E_x[KL(p(y|x) || p(y))])    (3)

where x denotes one sample, p(y|x) is the softmax output of a trained classifier of the labels, and p(y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classifier usually has high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image: although the model is very bad, it can have a perfect inception score, because p(y|x) can have a low entropy and p(y) can have a high entropy. So instead, for labelled datasets, we propose another assessment for both the visual quality and the variety of samples, the MODE score:

    exp(E_x[KL(p(y|x) || p*(y))] − KL(p*(y) || p(y)))    (4)

where p*(y) is the distribution of labels in the training data. According to our human evaluation experience, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.

However, on datasets without labels (LSUN), or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third-party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator of the quantity (see (Goodfellow et al., 2014) for a proof):

    D*(s) ≈ p_g(s) / (p_g(s) + p_d(s))    (5)

where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D from learning a perfect 0-1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D. After training, we test D on the test set T of the real dataset. If for any test sample t ∈ T the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.

4 EXPERIMENTS

4.1 MNIST

We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold.

4.1.1 GRID SEARCH FOR MNIST GAN MODELS

In order to systematically explore the effect of our proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large-scale grid search over different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016); please refer to it for detailed explanations regarding these hyper-parameters.

Table 1: Grid search for hyperparameters.

    nLayerG    [2, 3, 4]
    nLayerD    [2, 3, 4]
    sizeG      [400, 800, 1600, 3200]
    sizeD      [256, 512, 1024]
    dropoutD   [True, False]
    optimG     [SGD, Adam]
    optimD     [SGD, Adam]
    lr         [1e-2, 1e-3, 1e-4]
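For reference before the evaluation results, here is a small NumPy sketch of the MODE score of Eq. (4), with the inception score of Eq. (3) as a special case. The input convention — a matrix of classifier softmax outputs p(y|x) for generated samples, plus the training label distribution p*(y) — is our assumption about how the quantities are supplied.

```python
# Sketch: MODE and inception scores from classifier softmax outputs.
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def mode_score(probs_gen, train_label_dist):
    probs_gen = np.asarray(probs_gen, float)   # shape: (n_samples, n_classes)
    p_y = probs_gen.mean(axis=0)               # p(y): marginal of generated samples
    term1 = np.mean([kl(row, train_label_dist) for row in probs_gen])  # E_x KL(p(y|x) || p*(y))
    term2 = kl(train_label_dist, p_y)                                  # KL(p*(y) || p(y))
    return float(np.exp(term1 - term2))

def inception_score(probs_gen):
    probs_gen = np.asarray(probs_gen, float)
    p_y = probs_gen.mean(axis=0)
    return float(np.exp(np.mean([kl(row, p_y) for row in probs_gen])))
```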
For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits for stabilizing GANs and improving sample quality.

Figure 3: The distributions of MODE scores for GAN and regularized GAN.

To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different λ1 and λ2 values. The results are shown in Figure 4.

Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN.

4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES

In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits into a number in [0, 999] in a single 64x64 image, and then train DCGAN as a baseline model on this 1000-mode dataset. The digits in the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models.

The performance on the compositional experiment is measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically across all sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer at preventing the missing modes problem.

Table 2: Results for compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) substantially reduces the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (as in the Inception score).

                  Set 1          Set 2          Set 3          Set 4
                #Miss   KL     #Miss   KL     #Miss   KL     #Miss   KL
    DCGAN       204.7   77.9   204.3   60.2   103.4   75.9   89.3    77.8
    Reg-DCGAN    32.1   62.3    71.5   58.9    42.7   68.4   31.6    67.8

4.2 CELEBA

To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters, together with the DCGAN baseline, on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.

4.2.1 MISSING MODES ESTIMATION ON CELEBA

We also employ a third-party discriminator trained with injected noise as a metric for missing-mode estimation. To implement this, we add noise in the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode-estimator outputs can be viewed as lying on the missing modes.
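A minimal sketch of this mode estimator follows. The labeling convention (real images as the positive class, so that outputs near 1 flag test images the generator never covers) and the 0.95 counting threshold are our assumptions for illustration; the paper specifies only that the discriminator is trained with input noise and that high outputs on test images indicate missing modes.

```python
# Sketch: third-party noisy discriminator as a missing-mode estimator.
import torch

def train_mode_estimator(D, opt, real_loader, fake_loader, sigma=3.5, epochs=5):
    bce = torch.nn.BCELoss()
    for _ in range(epochs):
        for (x_real, _), x_fake in zip(real_loader, fake_loader):
            # Zero-mean Gaussian input noise prevents a perfect 0-1 separation.
            x_real = x_real + sigma * torch.randn_like(x_real)
            x_fake = x_fake + sigma * torch.randn_like(x_fake)
            loss = bce(D(x_real), torch.ones(len(x_real), 1)) \
                 + bce(D(x_fake), torch.zeros(len(x_fake), 1))
            opt.zero_grad(); loss.backward(); opt.step()

def count_missing_mode_images(D, test_loader, threshold=0.95):
    # Count real test images the estimator scores near 1 (modes p_g misses).
    n = 0
    with torch.no_grad():
        for x, _ in test_loader:
            n += int((D(x) > threshold).sum())
    return n
```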
Table 3: Number of images on the missing modes of CelebA, estimated by a third-party discriminator. The numbers in brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very large reduction in the number of missing modes, in comparison to the other methods.

    σ     DCGAN (100)   DCGAN (200)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
    3.5   5463          17089         754             3644            74
    4.0   590           15832         42              391             13

The comparison result is shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform the baseline DCGAN models in all settings. In particular, MDGAN surpasses the other models, showing its superiority at preserving modes. We also find that, although it shares the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than the one with 100-dimensional noise as input. By contrast, our regularized GAN performs more consistently.

To get a better understanding of the models' performance, we want to figure out when and where these models miss modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. The cap in the second image and the type of background in the third are rarely seen in the training data, and can thus be viewed as small modes in this situation. These three images should be considered the hardest test data for a GAN to learn. Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are missed only by DCGAN. The side faces, pale faces, black color, and berets are special attributes among these images, but our proposed MDGAN performs well on all of them.

Figure 5: Test set images that lie on missing modes. Left: missed by both MDGAN and DCGAN. Right: missed only by DCGAN.

4.2.2 QUALITATIVE EVALUATION OF GENERATED SAMPLES

After the quantitative evaluation, we manually examine the samples generated by our regularized GAN to see whether the proposed regularizer has side effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6. [2]

Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally sharper in texture.

Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.

As to sample quality, it is worth noting that the samples from MDGAN exhibit fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion; for the samples generated by MDGAN, the level of distortion is lower than for the other four compared models. We attribute this to the help of the autoencoder as a regularizer that alters the generation manifold. In this way, the generator is able to learn fine-grained details such as face edges.
As a result, MDGAN is able to reduce distortions.

[2] For a fair comparison, we also recommend readers refer to the original papers (Dumoulin et al., 2016; Larsen et al., 2015; Radford et al., 2015) for the reported samples of the compared models. The ALI samples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https://github.com/Newmu/dcgan_code/

In terms of the missing modes problem, we instructed five individuals to conduct a human evaluation of the generated samples. They reached a consensus that MDGAN wins in terms of mode diversity. Two people pointed out that MDGAN generates a larger number of samples with side faces than the other models. We show several of these side-face samples in Figure 7. Clearly, our samples maintain acceptable visual fidelity while exhibiting diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring benefits for both training stability and mode variety without loss of sample quality.

Figure 7: Side-face samples generated by Regularized-GAN and MDGAN.

5 CONCLUSIONS

Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difficult, and sensitive to hyper-parameters, all the while missing modes from the data distribution or even collapsing large amounts of probability mass onto some modes. Successful GAN training usually requires large amounts of human and computing effort to fine-tune the hyper-parameters in order to stabilize training and avoid collapsing. Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.

We provide systematic ways to measure and avoid the missing modes problem and to stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold.

ACKNOWLEDGEMENTS

We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, and Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of the grid-search experiments on the EBGAN model, and Anders Boesen Lindbo Larsen for kindly helping us run the VAEGAN experiments. We appreciate the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, the National Natural Science Foundation of China (61672445 and 61272291), the Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6). | SkMk-sH4g | Review | 4: Ok but not good enough - rejection | Summary:
This paper proposes several regularization objectives, such as a "geometric regularizer" and a "mode regularizer", to stabilize the training of GAN models. Specifically, these regularizers are proposed to alleviate the mode-missing behaviors of GANs.
Review:
I think this is an interesting paper that discusses the mode-missing behavior of GANs and proposes a new evaluation metric to evaluate this behavior. However, the core ideas of this paper are not very innovative to me. Specifically, there have been a lot of papers that combine GAN with an autoencoder, and the setting of this paper is very similar to that of other papers such as Larsen et al. As I pointed out in my pre-review comments, in Larsen et al. both the geometric regularizer and the mode regularizer have been proposed in the context of VAEs, and the way they are used is essentially the same as in this paper. I understand the argument of the authors that the VAEGAN is a VAE that is regularized by a GAN, while in this paper the main generative model is a GAN that is regularized by an autoencoder, but at the end of the day, both models combine the autoencoder and GAN in pretty much the same way, and to me the resulting model is not very different. I also understand the other argument of the authors that Larsen et al. use a VAE while this paper uses an autoencoder, but I am still not convinced how this paper outperforms the VAEGAN by just removing the KL term of the VAE. I do like that this paper looks at the autoencoder objective as a way to alleviate the missing mode problem of GANs, but I think that alone does not have enough originality to carry the paper.
As pointed out in the public comments by other people, I also suggest that the authors do an extensive comparison of this work and Larsen et al. in terms of missing modes, sample quality, and quantitative metrics such as the Inception score. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
HJKkY35le | ICLR.cc/2017/conference | 2017 | Mode Regularized Generative Adversarial Networks | ["Tong Che", "Yanran Li", "Athul Jacob", "Yoshua Bengio", "Wenjie Li"] | Although Generative Adversarial Networks achieve state-of-the-art results on a
variety of generative tasks, they are regarded as highly unstable and prone to miss
modes. We argue that these bad behaviors of GANs are due to the very particular
functional shape of the trained discriminators in high dimensional spaces, which
can easily make training stuck or push probability mass in the wrong direction,
towards that of higher concentration than that of the data generating distribution.
We introduce several ways of regularizing the objective, which can dramatically
stabilize the training of GAN models. We also show that our regularizers can help
the fair distribution of probability mass across the modes of the data generating
distribution during the early phases of training, thus providing a unified solution
to the missing modes problem. | ["Deep learning", "Unsupervised Learning"] | ABSTRACT

Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.

1 INTRODUCTION

Generative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potential on various tasks, such as image generation, image super-resolution, 3D object generation, and video prediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wu et al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks.

Despite their success, GANs are generally considered as very hard to train due to training instability and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the missing modes problem, which is widely conceived as a major problem for training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.

This issue has been the subject of several recent papers proposing several tricks and new architectures to stabilize GAN's training and encourage its samples' diversity. However, we argue that a general cause behind these problems is the lack of control on the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds.
In fact, the shape of the discriminator function in the data space can be very non-linear, with bad plateaus and wrong maxima, and this can therefore hurt the training of GANs (Figure 1).

Figure 1: Samples with very high discrimination values (D=1.0) in a DCGAN model trained on the CelebA dataset.

To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust for judging complex generative models, including both well-trained and collapsed ones.

Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing mode problem all at once, with positive or at least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm which can even improve sample quality as compared to the DCGAN baseline.

2 RELATED WORK

The GAN approach was initially proposed by Goodfellow et al. (2014), where both the generator and the discriminator are defined by deep neural networks.

In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map (Wang & Gupta, 2016), image synthesis from text (Reed et al., 2016) and from edge maps (Isola et al., 2016), real-time image manipulation (Zhu et al., 2016), temporal image generation (Zhou & Berg, 2016; Saito & Matsumoto, 2016; Vondrick et al., 2016), and texture synthesis, style transfer, and video stylization (Li & Wand, 2016).

Researchers also aim at stretching GAN's limits to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework to GANs to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements in unsupervised image representation learning. Another line of work aimed at improving GANs is through feature learning, including features from the latent space and the image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses in training objectives for generative models.
Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GAN for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for sample visual fidelity. Recent literature has also shown impressive results on image super-resolution, inferring photo-realistic natural images for 4x upscaling factors (Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016).

Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature-matching technique to stabilize GAN training: the generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016). In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GAN training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces, produced either by the application of the encoder to the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered orthogonal work to ours.

Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as a regularizer to penalize missing modes and thus improve GAN training stability and sample quality. We demonstrate the detailed differences from various aspects in Appendix D.

3 MODE REGULARIZERS FOR GANS

The GAN training procedure can be viewed as a non-cooperative two-player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. The generator G then has to take advantage of the local gradient ∇ log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold.

We now take a closer look at the root cause of the instabilities in training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014), Denton et al. (2015), and Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and 0 on the generation manifold.
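For reference, the two-player updates described above can be sketched as follows. This is a minimal PyTorch-style sketch; the module and optimizer names are placeholders, and D is assumed to output probabilities in (0, 1).

```python
# Minimal sketch of the standard GAN updates discussed in this section.
import torch

def gan_step(G, D, x_real, z, opt_G, opt_D):
    eps = 1e-8
    # Discriminator: push D(x) toward 1 on data and D(G(z)) toward 0.
    d_loss = -(torch.log(D(x_real) + eps)
               + torch.log(1 - D(G(z).detach()) + eps)).mean()
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator (the log D(G(z)) target): climb the local gradient that
    # the learned discriminator provides.
    g_loss = -torch.log(D(G(z)) + eps).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```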
In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example, Denton et al. (2015) noted a common failure pattern for training GANs, the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples D is nearly zero. In such cases, the generator will receive no gradient to improve itself. [1]

[1] This problem exists even when we use log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and in our experiments.

Another important problem in training GANs is mode missing. In theory, if the generated data and the real data come from the same low-dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator, and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. However, in practice, since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of the discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 on fake data and nearly 1 on real data, the generator is not penalized for missing modes.

3.1 GEOMETRIC METRICS REGULARIZER

Compared with the objective for the GAN generator, the optimization targets of supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator, while in supervised models the optimization targets are distance functions with nice geometric properties. The latter usually provide much easier training gradients than the former, especially at the early stages of training.

Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z): Z → X generates samples by sampling first from a fixed prior distribution in the space Z, followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x): X → Z. Assuming d is some similarity metric in the data space, we add E_{x∼p_d}[d(x, G∘E(x))] as a regularizer, where p_d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error.

In practice, there are many options for the distance measure d: for instance, the pixel-wise L2 distance, or the distance between features learned by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016).

The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, an L_s metric.
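To make the choice of d concrete, here is a minimal PyTorch-style sketch of two instantiations named above: the pixel-wise L2 distance and a learned-feature distance. The disc_features hook (returning an intermediate discriminator activation) and the layer choice behind it are illustrative assumptions.

```python
# Sketch: two choices for the similarity metric d(x, G(E(x))).
import torch
import torch.nn.functional as F

def pixel_l2(x, x_rec):
    # Pixel-wise L2 distance, averaged over the minibatch.
    return F.mse_loss(x_rec, x)

def make_feature_l2(disc_features):
    # Distance between learned features, e.g. an intermediate layer of the
    # discriminator or of a fixed classifier such as VGG.
    def d(x, x_rec):
        return F.mse_loss(disc_features(x_rec), disc_features(x).detach())
    return d

def metric_regularizer(G, E, x, d=pixel_l2):
    # Minibatch estimate of E_{x ~ p_d}[ d(x, G(E(x))) ]; the same quantity
    # also serves as the reconstruction loss for training the encoder E.
    return d(x, G(E(x)))
```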
| rkGUAyfNl | Review | 7: Good paper, accept | The authors identify two very valid problems of mode missing in Generative Adversarial Networks, explain their intuitions as to why these problems occur, and propose ways to remedy them. The first problem is the discriminator becoming too good (close to 0 on fake and 1 on real data) and providing 0 gradients to the generator. The second problem is that GANs are prone to missing modes of the data generating distribution entirely.
The authors propose two regularization techniques to address these problems: the Geometric Metrics Regularizer and the Mode Regularizer.
Overall, I felt that this is a good paper, providing a good analysis of the problems and proposing sensible solutions - if lacking solid from-first-principles motivation for the particular choices made. My other critique is the focus on manifolds, almost completely disregarding the probability density on the manifold - see my detailed comment below.
Detailed comments on the Geometric Metrics Regularizer: The motivation for this is to provide a way to measure and penalize the distance between two degenerate probability distributions concentrated on non-overlapping manifolds, those of the generator and of the real data. There are different ways one could go about measuring the difference between two manifolds, or between probability distributions concentrated on manifolds, for example:
- projection heuristic: measure the average distance between each point x on manifold A and the corresponding nearest point on manifold B (let’s call it the projection of x onto B).
- earth mover’s distance: establish a smooth mapping between the two manifolds that maps denser areas on manifold A to nearby denser areas of manifold B, and measure the average distance between corresponding pairs.
The two heuristics are similar, but while the earth mover's distance is a divergence measure for distributions, the projection heuristic only measures the divergence of the manifolds, disregarding the distributions in question.
The authors propose measuring the average distance between a point on the real data manifold and the point it gets mapped to by the composition of the encoder and the generator. While G∘E will map to the generative manifold, it is unclear to me whether it would map to a high-probability region on that manifold, so this probably doesn't implement anything like the Earth Mover's Distance. On this note, I have just remembered seeing this before: https://github.com/danielvarga/earth-moving-generative-net As the encoder is trained so that G∘E(x) is close to x on average, it feels like a variant of the projection heuristic above. Would the authors agree with this assessment? | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Hkg4TI9xl | ICLR.cc/2017/conference | 2017 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | ["Dan Hendrycks", "Kevin Gimpel"] | We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks. | ["Computer vision"] | ABSTRACT

We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.

1 INTRODUCTION

When machine learning classifiers are employed in real-world tasks, they tend to fail when the training and test distributions differ. Worse, these classifiers often fail silently by providing high-confidence predictions while being woefully incorrect (Goodfellow et al., 2015; Amodei et al., 2016). Classifiers failing to indicate when they are likely mistaken can limit their adoption or cause serious accidents. For example, a medical diagnosis model may consistently classify with high confidence, even while it should flag difficult examples for human intervention. The resulting unflagged, erroneous diagnoses could blockade future machine learning technologies in medicine. More generally and importantly, estimating when a model is in error is of great concern to AI Safety (Amodei et al., 2016).

These high-confidence predictions are frequently produced by softmaxes because softmax probabilities are computed with the fast-growing exponential function. Thus minor additions to the softmax inputs, i.e. the logits, can lead to substantial changes in the output distribution. Since the softmax function is a smooth approximation of an indicator function, it is uncommon to see a uniform distribution outputted for out-of-distribution examples. Indeed, random Gaussian noise fed into an MNIST image classifier gives a "prediction confidence," or predicted class probability, of 91%, as we show later. Throughout our experiments we establish that the prediction probability from a softmax distribution has a poor direct correspondence to confidence. This is consistent with a great deal of anecdotal evidence from researchers (Nguyen & O'Connor, 2015; Yu et al., 2010; Provost et al., 1998; Nguyen et al., 2015).

However, in this work we also show that the prediction probability of incorrect and out-of-distribution examples tends to be lower than the prediction probability for correct examples. Therefore, capturing prediction probability statistics about correct or in-sample examples is often sufficient for detecting whether an example is in error or abnormal, even though the prediction probability viewed in isolation can be misleading.
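As a small, self-contained illustration of the saturation effect just described, the snippet below applies a softmax to a handful of logits; the particular logit values are invented for illustration.

```python
# Sketch: modest logit gaps already produce high "confidence" under softmax.
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max to stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.5, 0.2, 0.1])
p = softmax(logits)
print(round(float(p.max()), 2))   # ~0.89 maximum class probability
```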
These prediction probabilities form our detection baseline, and we demonstrate its efficacy through various computer vision, natural language processing, and automatic speech recognition tasks. While these prediction probabilities create a consistently useful baseline, at times they are less effective, revealing room for improvement. To give ideas for future detection research, we contribute one method which outperforms the baseline on some (but not all) tasks. This new method evaluates the quality of a neural network's input reconstruction to determine if an example is abnormal.

[*] Work done while the author was at TTIC. Code is available at github.com/hendrycks/error-detection

In addition to the baseline methods, another contribution of this work is the designation of standard tasks and evaluation metrics for assessing the automatic detection of errors and out-of-distribution examples. We use a large number of well-studied tasks across three research areas, using standard neural network architectures that perform well on them. For out-of-distribution detection, we provide ways to supply the out-of-distribution examples at test time, like using images from different datasets and realistically distorting inputs. We hope that other researchers will pursue these tasks in future work and surpass the performance of our baselines.

In summary, while softmax classifier probabilities are not directly useful as confidence estimates, estimating model confidence is not as bleak as previously believed. Simple statistics derived from softmax distributions provide a surprisingly effective way to determine whether an example is misclassified or from a different distribution from the training data, as demonstrated by our experimental results spanning computer vision, natural language processing, and speech recognition tasks. This creates a strong baseline for detecting errors and out-of-distribution examples which we hope future research surpasses.

2 PROBLEM FORMULATION AND EVALUATION

In this paper, we are interested in two related problems. The first is error and success prediction: can we predict whether a trained classifier will make an error on a particular held-out test example; can we predict if it will correctly classify said example? The second is in- and out-of-distribution detection: can we predict whether a test example is from a different distribution from the training data; can we predict if it is from within the same distribution? [1] Below we present a simple baseline for solving these two problems. To evaluate our solution, we use two evaluation metrics.

[1] We consider adversarial example detection techniques in a separate work (Hendrycks & Gimpel, 2016a).

Before mentioning the two evaluation metrics, we first note that comparing detectors is not as straightforward as using accuracy. For detection we have two classes, and the detector outputs a score for both the positive and negative class. If the negative class is far more likely than the positive class, a model may always guess the negative class and obtain high accuracy, which can be misleading (Provost et al., 1998).
We must then specify a score threshold so that some positive examples are classified correctly, but this depends upon the trade-off between false negatives (fn) and false positives (fp).

Faced with this issue, we employ the Area Under the Receiver Operating Characteristic curve (AUROC) metric, which is a threshold-independent performance evaluation (Davis & Goadrich, 2006). The ROC curve is a graph showing the true positive rate (tpr = tp/(tp + fn)) and the false positive rate (fpr = fp/(fp + tn)) against each other. Moreover, the AUROC can be interpreted as the probability that a positive example has a greater detector score/value than a negative example (Fawcett, 2005). Consequently, a random positive example detector corresponds to a 50% AUROC, and a "perfect" classifier corresponds to 100%. [2]

[2] A debatable, imprecise interpretation of AUROC values may be as follows: 90%-100%: Excellent, 80%-90%: Good, 70%-80%: Fair, 60%-70%: Poor, 50%-60%: Fail.

The AUROC sidesteps the issue of threshold selection, as does the Area Under the Precision-Recall curve (AUPR), which is sometimes deemed more informative (Manning & Schütze, 1999). This is because the AUROC is not ideal when the positive class and negative class have greatly differing base rates, and the AUPR adjusts for these different positive and negative base rates. For this reason, the AUPR is our second evaluation metric. The PR curve plots the precision (tp/(tp + fp)) and recall (tp/(tp + fn)) against each other. The baseline detector has an AUPR approximately equal to the precision (Saito & Rehmsmeier, 2015), and a "perfect" classifier has an AUPR of 100%. Consequently, the base rate of the positive class greatly influences the AUPR, so for detection we must specify which class is positive. In view of this, we show the AUPRs when we treat success/normal classes as positive, and then we show the areas when we treat the error/abnormal classes as positive. We can treat the error/abnormal classes as positive by multiplying the scores by −1 and labeling them positive. Note that treating error/abnormal classes as positive does not change the AUROC, since if S is a score for a successfully classified value and E is the score for an erroneously classified value, AUROC = P(S > E) = P(−E > −S).

We begin our experiments in Section 3, where we describe a simple baseline which uses the maximum probability from the softmax label distribution in neural network classifiers. Then in Section 4 we describe a method that uses an additional, auxiliary model component trained to reconstruct the input.

3 SOFTMAX PREDICTION PROBABILITY AS A BASELINE

In what follows we retrieve the maximum/predicted class probability from a softmax distribution and thereby detect whether an example is erroneously classified or out-of-distribution. Specifically, we separate correctly and incorrectly classified test set examples and, for each example, compute the softmax probability of the predicted class, i.e., the maximum softmax probability. [3] From these two groups we obtain the area under the PR and ROC curves. These areas summarize the performance of a binary classifier discriminating with values/scores (in this case, maximum probabilities from the softmaxes) across different thresholds. This description treats correctly classified examples as the positive class, denoted "Success" or "Succ" in our tables. In "Error" or "Err" we treat the incorrectly classified examples as the positive class; to do this we label incorrectly classified examples as positive and take the negatives of the softmax probabilities of the predicted classes as the scores.

[3] We also tried using the KL divergence of the softmax distribution from the uniform distribution for detection. With divergence values, detector AUROCs and AUPRs were highly correlated with AUROCs and AUPRs from a detector using the maximum softmax probability. This divergence is similar to entropy (Steinhardt & Liang, 2016; Williams & Renals, 1997).
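Concretely, the Success/Error evaluation just described can be sketched as follows. The inputs (test-set logits and labels from any trained classifier) and the use of scikit-learn's average precision as the AUPR estimate are our choices for illustration, not necessarily the authors' exact tooling.

```python
# Sketch: maximum softmax probability as the detection score, summarized
# with threshold-independent AUROC and AUPR.
import numpy as np
from scipy.special import softmax
from sklearn.metrics import roc_auc_score, average_precision_score

def detection_areas(logits, labels):
    probs = softmax(logits, axis=1)
    scores = probs.max(axis=1)                        # max softmax probability
    correct = (probs.argmax(axis=1) == labels).astype(int)

    auroc = roc_auc_score(correct, scores)            # unchanged if classes flip
    aupr_succ = average_precision_score(correct, scores)       # "Succ"
    aupr_err = average_precision_score(1 - correct, -scores)   # "Err": negate
    return auroc, aupr_succ, aupr_err
```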
Table labels aside, we begin experimentation with datasets from vision, then consider tasks in natural language processing and automatic speech recognition. In all of the following experiments, the AUROCs differ from the random baselines with high statistical significance according to the Wilcoxon rank-sum test.

3.1 COMPUTER VISION

In the following computer vision tasks, we use three datasets: MNIST, CIFAR-10, and CIFAR-100 (Krizhevsky, 2009). MNIST is a dataset of handwritten digits, consisting of 60000 training and 10000 testing examples. Meanwhile, CIFAR-10 has colored images belonging to 10 different classes, with 50000 training and 10000 testing examples. CIFAR-100 is more difficult, as it has 100 different classes with 50000 training and 10000 testing examples.

In Table 1, we see that correctly classified and incorrectly classified examples are sufficiently distinct and thus allow reliable discrimination. Note that the areas under the curves degrade with image recognizer test error.

Dataset | AUROC/Base | AUPR Succ/Base | AUPR Err/Base | Pred. Prob Wrong (mean) | Test Set Error
MNIST | 97/50 | 100/98 | 48/1.7 | 86 | 1.69
CIFAR-10 | 93/50 | 100/95 | 43/5 | 80 | 4.96
CIFAR-100 | 87/50 | 96/79 | 62/21 | 66 | 20.7
Table 1: The softmax predicted class probability allows for discrimination between correctly and incorrectly classified test set examples. "Pred. Prob Wrong (mean)" is the mean softmax probability for wrongly classified examples, showcasing its shortcoming as a direct measure of confidence. Succ/Err Base values are the AUROCs or AUPRs achieved by random classifiers. All entries are percentages.
Next, let us consider using softmax distributions to determine whether an example is in- or out-of-distribution. We use all test set examples as the in-distribution (positive) examples. For out-of-distribution (negative) examples, we use realistic images and noise. For CIFAR-10 and CIFAR-100, we use realistic images from the Scene UNderstanding dataset (SUN), which consists of 397 different scenes (Xiao et al., 2010). For MNIST, we use grayscale realistic images from three sources. Omniglot (Lake et al., 2015) images are handwritten characters rather than the handwritten digits in MNIST. Next, notMNIST (Bulatov, 2011) consists of typeface characters. Last of the realistic images, CIFAR-10bw are black and white rescaled CIFAR-10 images. The synthetic "Gaussian" data is random normal noise, and "Uniform" data is random uniform noise. Images are resized when necessary.

The results are shown in Table 2. Notice that the mean predicted/maximum class probabilities (Pred. Prob (mean)) are above 75%, but if the prediction probability alone is translated to confidence, the softmax distribution should be more uniform for CIFAR-100. This again shows softmax probabilities should not be viewed as a direct representation of confidence. Fortunately, out-of-distribution examples sufficiently differ in the prediction probabilities from in-distribution examples, allowing for successful detection and generally high area under PR and ROC curves.

In-Distribution / Out-of-Distribution | AUROC/Base | AUPR In/Base | AUPR Out/Base | Pred. Prob (mean)
CIFAR-10/SUN | 95/50 | 89/33 | 97/67 | 72
CIFAR-10/Gaussian | 97/50 | 98/49 | 95/51 | 77
CIFAR-10/All | 96/50 | 88/24 | 98/76 | 74
CIFAR-100/SUN | 91/50 | 83/27 | 96/73 | 56
CIFAR-100/Gaussian | 88/50 | 92/43 | 80/57 | 77
CIFAR-100/All | 90/50 | 81/21 | 96/79 | 63
MNIST/Omniglot | 96/50 | 97/52 | 96/48 | 86
MNIST/notMNIST | 85/50 | 86/50 | 88/50 | 92
MNIST/CIFAR-10bw | 95/50 | 95/50 | 95/50 | 87
MNIST/Gaussian | 90/50 | 90/50 | 91/50 | 91
MNIST/Uniform | 99/50 | 99/50 | 98/50 | 83
MNIST/All | 91/50 | 76/20 | 98/80 | 89
Table 2: Distinguishing in- and out-of-distribution test set data for image classification. CIFAR-10/All is the same as CIFAR-10/(SUN, Gaussian). All values are percentages.

For reproducibility, let us specify the model architectures. The MNIST classifier is a three-layer, 256 neuron-wide, fully connected network trained for 30 epochs with Adam (Kingma & Ba, 2015). It uses a GELU nonlinearity (Hendrycks & Gimpel, 2016b), x·Φ(x), where Φ(x) is the CDF of the standard normal distribution. We initialize our weights according to (Hendrycks & Gimpel, 2016c), as it is suited for arbitrary nonlinearities. For CIFAR-10 and CIFAR-100, we train a 40-4 wide residual network (Zagoruyko & Komodakis, 2016) for 50 epochs with stochastic gradient descent using restarts (Loshchilov & Hutter, 2016), the GELU nonlinearity, and standard mirroring and cropping data augmentation.
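The GELU used above, x·Φ(x), is easy to state in code; below is a one-line rendering of our own using the error function, which is equivalent to the CDF form.

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    # GELU(x) = x * Phi(x), where Phi is the standard normal CDF.
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

print(gelu(np.array([-1.0, 0.0, 1.0])))   # elementwise on arrays
```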
3.2 NATURAL LANGUAGE PROCESSING

Let us turn to a variety of tasks and architectures used in natural language processing.

3.2.1 SENTIMENT CLASSIFICATION

The first NLP task is binary sentiment classification using the IMDB dataset (Maas et al., 2011), a dataset of polarized movie reviews with 25000 training and 25000 test reviews. This task allows us to determine if classifiers trained on a relatively small dataset still produce informative softmax distributions. For this task we use a linear classifier taking as input the average of trainable, randomly initialized word vectors with dimension 50 (Joulin et al., 2016; Iyyer et al., 2015). We train for 15 epochs with Adam and early stopping based upon 5000 held-out training reviews. Again, Table 3 shows that the softmax distributions differ between correctly and incorrectly classified examples, so prediction probabilities allow us to detect reliably which examples are right and wrong.

Dataset | AUROC/Base | AUPR Succ/Base | AUPR Err/Base | Pred. Prob Wrong (mean) | Test Set Error
IMDB | 82/50 | 97/88 | 36/12 | 74 | 11.9
Table 3: Detecting correct and incorrect classifications for binary sentiment classification.

Now we use the Customer Review (Hu & Liu, 2004) and Movie Review (Pang et al., 2002) datasets as out-of-distribution examples. The Customer Review dataset has reviews of products rather than only movies, and the Movie Review dataset has snippets from professional movie reviewers rather than full-length amateur reviews. We leave all test set examples from IMDB as in-distribution examples, and out-of-distribution examples are the 500 or 1000 test reviews from the Customer Review and Movie Review datasets, respectively. Table 4 displays the detection results, showing a similar story to Table 2.

In-Distribution / Out-of-Distribution | AUROC/Base | AUPR In/Base | AUPR Out/Base | Pred. Prob (mean)
IMDB/Customer Reviews | 95/50 | 99/89 | 60/11 | 62
IMDB/Movie Reviews | 94/50 | 98/72 | 80/28 | 63
IMDB/All | 94/50 | 97/66 | 84/34 | 63
Table 4: Distinguishing in- and out-of-distribution test set data for binary sentiment classification. IMDB/All is the same as IMDB/(Customer Reviews, Movie Reviews). All values are percentages.

3.2.2 TEXT CATEGORIZATION

We turn to text categorization tasks to determine whether softmax distributions are useful for detecting similar but out-of-distribution examples. In the following text categorization tasks, we train classifiers to predict the subject of the text they are processing. In the 20 Newsgroups dataset (Lang, 1995), there are 20 different newsgroup subjects with a total of 20000 documents for the whole dataset. The Reuters 8 (Lewis et al., 2004) dataset has eight different news subjects with nearly 8000 stories in total. The Reuters 52 dataset has 52 news subjects with slightly over 9000 news stories; this dataset can have as few as three stories for a single subject.

For the 20 Newsgroups dataset we train a linear classifier on 30-dimensional word vectors for 20 epochs. Meanwhile, Reuters 8 and Reuters 52 use one-layer neural networks with a bag-of-words input and a GELU nonlinearity, all optimized with Adam for 5 epochs. We train on a subset of subjects, leaving out 5 newsgroup subjects from 20 Newsgroups, 2 news subjects from Reuters 8, and 12 news subjects from Reuters 52, leaving the rest as out-of-distribution examples. Table 5 shows that with these datasets and architectures, we can detect errors dependably, and Table 6 informs us that the softmax prediction probabilities allow for detecting out-of-distribution subjects.

Dataset | AUROC/Base | AUPR Succ/Base | AUPR Err/Base | Pred. Prob Wrong (mean) | Test Set Error
15 Newsgroups | 89/50 | 99/93 | 42/7.3 | 53 | 7.31
Reuters 6 | 89/50 | 100/98 | 35/2.5 | 77 | 2.53
Reuters 40 | 91/50 | 99/92 | 45/7.6 | 62 | 7.55
Table 5: Detecting correct and incorrect classifications for text categorization.

In-Distribution / Out-of-Distribution | AUROC/Base | AUPR In/Base | AUPR Out/Base | Pred. Prob (mean)
15/5 Newsgroups | 75/50 | 92/84 | 45/16 | 65
Reuters6/Reuters2 | 92/50 | 100/95 | 56/4.5 | 72
Reuters40/Reuters12 | 95/50 | 100/93 | 60/7.2 | 47
Table 6: Distinguishing in- and out-of-distribution test set data for text categorization.
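For illustration, here is a minimal PyTorch sketch of the kind of averaged-word-vector linear classifier used in the sentiment and categorization experiments above; the vocabulary size and the use of EmbeddingBag are our assumptions, not details from the paper.

```python
import torch.nn as nn

class BagOfVectors(nn.Module):
    """Linear classifier over averaged word vectors (hypothetical sizes)."""
    def __init__(self, vocab_size=30000, dim=50, n_classes=2):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")  # averages word vectors
        self.out = nn.Linear(dim, n_classes)                        # linear classifier

    def forward(self, token_ids, offsets):
        # token_ids: concatenated word indices; offsets: start index of each document.
        return self.out(self.embed(token_ids, offsets))             # class logits
```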
3.2.3 PART-OF-SPEECH TAGGING

Part-of-speech (POS) tagging of newswire and social media text is our next challenge. We use the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), which contains 45 distinct POS tags. For social media, we use POS-annotated tweets (Gimpel et al., 2011; Owoputi et al., 2013), which contain 25 tags. For the WSJ tagger, we train a bidirectional long short-term memory recurrent neural network (Hochreiter & Schmidhuber, 1997) with three layers, 128 neurons per layer, with randomly initialized word vectors, and this is trained on 90% of the corpus for 10 epochs with stochastic gradient descent with a batch size of 32. The tweet tagger is simpler, as it is a two-layer neural network with a GELU nonlinearity, a weight initialization according to (Hendrycks & Gimpel, 2016c), pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al., 2013), and a hidden layer size of 256, all while training on 1000 tweets for 30 epochs with Adam and early stopping with 327 validation tweets. Error detection results are in Table 7.

Dataset | AUROC/Base | AUPR Succ/Base | AUPR Err/Base | Pred. Prob Wrong (mean) | Test Set Error
WSJ | 96/50 | 100/96 | 51/3.7 | 71 | 3.68
Twitter | 89/50 | 98/87 | 53/13 | 69 | 12.59
Table 7: Detecting correct and incorrect classifications for part-of-speech tagging.

For out-of-distribution detection, we use the WSJ tagger on the tweets as well as weblog data from the English Web Treebank (Bies et al., 2012). The results are shown in Table 8. Since the weblog data is closer in style to newswire than are the tweets, it is harder to detect whether a weblog sentence is out-of-distribution than a tweet. Indeed, since POS tagging is done at the word level, we are detecting whether each word is out-of-distribution given the word and contextual features. With this in mind, we see that it is easier to detect words as out-of-distribution if they are from tweets than from blogs.

In-Distribution / Out-of-Distribution | AUROC/Base | AUPR In/Base | AUPR Out/Base | Pred. Prob (mean)
WSJ/Twitter | 80/50 | 98/92 | 41/7.7 | 81
WSJ/Weblog* | 61/50 | 88/86 | 30/14 | 93
Table 8: Detecting out-of-distribution tweets and blog articles for part-of-speech tagging. All values are percentages. *These examples are atypically close to the training distribution.

3.3 AUTOMATIC SPEECH RECOGNITION

Now we consider a task which uses softmax values to construct entire sequences rather than determine an input's class. Our sequence prediction system uses a bidirectional LSTM with two layers and a clipped GELU nonlinearity, optimized for 60 epochs with RMSProp and trained on 80% of the TIMIT corpus (Garofolo et al., 1993). The LSTM is trained with connectionist temporal classification (CTC) (Graves et al., 2006) for predicting sequences of phones given MFCCs, energy, and first and second deltas of a 25ms frame. When trained with CTC, the LSTM learns to have its phone label probabilities spike momentarily while mostly predicting blank symbols otherwise. In this way, the softmax is used differently from typical classification problems, providing a unique test for our detection methods.

We do not show how the system performs on correctness/incorrectness detection because errors are not binary and instead lie along a range of edit distances. However, we can perform out-of-distribution detection. Mixing the TIMIT audio with realistic noises from the Aurora-2 dataset (Hirsch & Pearce, 2000), we keep the TIMIT audio volume at 100% and noise volume at 30%, giving a mean SNR of approximately 5. Speakers are still clearly audible to the human ear but confuse the phone recognizer because the prediction edit distance more than doubles. For more out-of-distribution examples, we use the test examples from the THCHS-30 dataset (Wang & Zhang, 2015), a Chinese speech corpus. Table 9 shows the results.

In-Distribution / Out-of-Distribution | AUROC/Base | AUPR In/Base | AUPR Out/Base | Pred. Prob (mean)
TIMIT/TIMIT+Airport | 99/50 | 99/50 | 99/50 | 59
TIMIT/TIMIT+Babble | 100/50 | 100/50 | 100/50 | 55
TIMIT/TIMIT+Car | 98/50 | 98/50 | 98/50 | 59
TIMIT/TIMIT+Exhibition | 100/50 | 100/50 | 100/50 | 57
TIMIT/TIMIT+Restaurant | 98/50 | 98/50 | 98/50 | 60
TIMIT/TIMIT+Street | 100/50 | 100/50 | 100/50 | 52
TIMIT/TIMIT+Subway | 100/50 | 100/50 | 100/50 | 56
TIMIT/TIMIT+Train | 100/50 | 100/50 | 100/50 | 58
TIMIT/Chinese | 85/50 | 80/34 | 90/66 | 64
TIMIT/All | 97/50 | 79/10 | 100/90 | 58
Table 9: Detecting out-of-distribution distorted speech. All values are percentages.

Crucially, when performing detection, we compute the softmax probabilities while ignoring the blank symbol's logit. With the blank symbol's presence, the softmax distributions at most time steps predict a blank symbol with high confidence, but without the blank symbol we can better differentiate between normal and abnormal distributions. With this modification, the softmax prediction probabilities allow us to detect whether an example is out-of-distribution.
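One plausible reading of the blank-masking step in code follows; the blank index and the pooling over frames are our assumptions, since the paper does not spell these out.

```python
import numpy as np

def max_prob_without_blank(logits, blank_index):
    """Per-frame maximum softmax probability with the blank logit removed.
    logits: array of shape (time, num_symbols) for one utterance."""
    kept = np.delete(logits, blank_index, axis=-1)       # drop the blank symbol
    kept = kept - kept.max(axis=-1, keepdims=True)       # numerical stability
    probs = np.exp(kept) / np.exp(kept).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

# One possible utterance-level detection score: average over frames.
# score = max_prob_without_blank(logits, blank_index=0).mean()
```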
4 ABNORMALITY DETECTION WITH AUXILIARY DECODERS

Having seen that softmax prediction probabilities enable abnormality detection, we now show there is other information sometimes more useful for detection. To demonstrate this, we exploit the learned internal representations of neural networks. We start by training a normal classifier and append an auxiliary decoder which reconstructs the input, shown in Figure 1. Auxiliary decoders are sometimes known to increase classification performance (Zhang et al., 2016). The decoder and scorer are trained jointly on in-distribution examples. Thereafter, the blue layers in Figure 1 are frozen. Then we train the red layers on clean and noised training examples, and the sigmoid output of the red layers scores how normal the input is. Consequently, noised examples are in the abnormal class, clean examples are in the normal class, and the sigmoid is trained to output the class to which an input belongs. After training we consequently have a normal classifier, an auxiliary decoder, and what we call an abnormality module. The gains from the abnormality module demonstrate there are possible research avenues for outperforming the baseline.

4.1 TIMIT

We test the abnormality module by revisiting the TIMIT task with a different architecture and show how these auxiliary components can greatly improve detection. The system is a three-layer, 1024-neuron wide classifier with an auxiliary decoder and abnormality module. This network takes as input 11 frames and must predict the phone of the center frame, with 26 features per frame. Weights are initialized according to (Hendrycks & Gimpel, 2016c). This network trains for 20 epochs, and the abnormality module trains for two. The abnormality module sees clean examples and, as negative examples, TIMIT examples distorted with either white noise, brown noise (noise with its spectral density proportional to 1/f^2), or pink noise (noise with its spectral density proportional to 1/f) at various volumes.

We note that the abnormality module is not trained on the same type of noise added to the test examples. Nonetheless, Table 10 shows that simple noised examples translate to effective detection of realistically distorted audio. We detect abnormal examples by comparing the typical abnormality module outputs for clean examples with the outputs for the distorted examples. The noises are from Aurora-2 and are added to TIMIT examples with 30% volume. We also use the THCHS-30 dataset for Chinese speech. Unlike before, we use the THCHS-30 training examples rather than test set examples because fully connected networks can evaluate the whole training set sufficiently quickly.

In-Distribution / Out-of-Distribution | AUROC/Base Softmax | AUROC/Base AbMod | AUPR In/Base Softmax | AUPR In/Base AbMod | AUPR Out/Base Softmax | AUPR Out/Base AbMod
TIMIT/+Airport | 75/50 | 100/50 | 77/41 | 100/41 | 73/59 | 100/59
TIMIT/+Babble | 94/50 | 100/50 | 95/41 | 100/41 | 91/59 | 100/59
TIMIT/+Car | 70/50 | 98/50 | 69/41 | 98/41 | 70/59 | 98/59
TIMIT/+Exhib. | 91/50 | 98/50 | 92/41 | 98/41 | 91/59 | 98/59
TIMIT/+Rest. | 68/50 | 95/50 | 70/41 | 96/41 | 67/59 | 95/59
TIMIT/+Subway | 76/50 | 96/50 | 77/41 | 96/41 | 74/59 | 96/59
TIMIT/+Street | 89/50 | 98/50 | 91/41 | 99/41 | 85/59 | 98/59
TIMIT/+Train | 80/50 | 100/50 | 82/41 | 100/41 | 77/59 | 100/59
TIMIT/Chinese | 79/50 | 90/50 | 41/12 | 66/12 | 96/88 | 98/88
Average | 80 | 97 | 77 | 95 | 80 | 98
Table 10: Abnormality modules can generalize to novel distortions and detect out-of-distribution examples even when they do not severely degrade accuracy. All values are percentages.
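As a rough sketch of the three components, consider the following; the exact wiring of Figure 1 is not reproduced here, so feeding the hidden features together with the squared reconstruction error to the scorer, as well as the layer sizes, are our assumptions.

```python
import torch
import torch.nn as nn

class AbnormalityDetector(nn.Module):
    """Classifier + auxiliary decoder + abnormality module (hypothetical wiring)."""
    def __init__(self, in_dim=11 * 26, hidden=1024, n_classes=39):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)  # trained first, then frozen
        self.decoder = nn.Linear(hidden, in_dim)        # reconstructs the input
        self.ab_module = nn.Sequential(                 # trained on clean vs. noised examples
            nn.Linear(hidden + in_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        recon = self.decoder(h)
        err = (x - recon) ** 2                          # per-dimension reconstruction error
        normality = self.ab_module(torch.cat([h, err], dim=-1))
        return self.classifier(h), recon, normality
```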
It is worth mentioning that fully connected deep neural networks are noise robust (Seltzer et al., 2013), yet the abnormality module can still detect whether an example is out-of-distribution. To see why this is remarkable, note that the network's frame classification error is 29.69% on the entire test (not core) dataset, and the average classification error for distorted examples is 30.43%; this is unlike the bidirectional LSTM, which had a more pronounced performance decline. Because the classification degradation was only slight, the softmax statistics alone did not provide useful out-of-distribution detection. In contrast, the abnormality module provided scores which allowed the detection of different-but-similar examples. In practice, it may be important to determine whether an example is out-of-distribution even if it does not greatly confuse the network, and the abnormality module facilitates this.

4.2 MNIST

Finally, much like in a previous experiment, we train an MNIST classifier with three layers of width 256. This time, we also use an auxiliary decoder and abnormality module rather than relying on only softmax statistics. For abnormal examples we blur, rotate, or add Gaussian noise to training images. Gains from the abnormality module are shown in Table 11, and there is a consistent out-of-sample detection improvement compared to softmax prediction probabilities. Even for highly dissimilar examples the abnormality module can further improve detection.

In-Distribution / Out-of-Distribution | AUROC/Base Softmax | AUROC/Base AbMod | AUPR In/Base Softmax | AUPR In/Base AbMod | AUPR Out/Base Softmax | AUPR Out/Base AbMod
MNIST/Omniglot | 95/50 | 100/50 | 95/52 | 100/52 | 95/48 | 100/48
MNIST/notMNIST | 87/50 | 100/50 | 88/50 | 100/50 | 90/50 | 100/50
MNIST/CIFAR-10bw | 98/50 | 100/50 | 98/50 | 100/50 | 98/50 | 100/50
MNIST/Gaussian | 88/50 | 100/50 | 88/50 | 100/50 | 90/50 | 100/50
MNIST/Uniform | 99/50 | 100/50 | 99/50 | 100/50 | 99/50 | 100/50
Average | 93 | 100 | 94 | 100 | 94 | 100
Table 11: Improved detection using the abnormality module. All values are percentages.
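A sketch of one way to produce the abnormal MNIST training examples described above; the distortion magnitudes are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def make_abnormal(img, rng):
    """Distort a clean MNIST image (values in [0, 1]) into an 'abnormal' example."""
    choice = rng.integers(3)
    if choice == 0:
        return gaussian_filter(img, sigma=rng.uniform(1.0, 3.0))           # blur
    if choice == 1:
        return rotate(img, angle=rng.uniform(-60.0, 60.0), reshape=False)  # rotate
    noisy = img + rng.normal(0.0, 0.3, size=img.shape)                     # Gaussian noise
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
# abnormal = make_abnormal(clean_image, rng)   # clean images form the normal class
```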
5 DISCUSSION AND FUTURE WORK

The abnormality module demonstrates that in some cases the baseline can be beaten by exploiting the representations of a network, suggesting myriad research directions. Some promising future avenues may utilize the intra-class variance: if the distance from an example to another of the same predicted class is abnormally high, it may be out-of-distribution (Giryes et al., 2015). Another path is to feed a vector summarizing a layer's activations into an RNN, one vector for each layer. The RNN may determine that the activation patterns are abnormal for out-of-distribution examples. Others could make the detections fine-grained: is the out-of-distribution example a known-unknown or an unknown-unknown? A different avenue is not just to detect correct classifications but to output the probability of a correct detection. These are but a few ideas for improving error and out-of-distribution detection.

We hope that any new detection methods are tested on a variety of tasks and architectures of the researcher's choice. A basic demonstration could include the following datasets: MNIST, CIFAR, IMDB, and tweets, because vision-only demonstrations may not transfer well to other architectures and datasets. Reporting the AUPR and AUROC values is important, and so is the underlying classifier's accuracy, since an always-wrong classifier gets a maximum AUPR for error detection if error is the positive class. Also, future research need not use the exact values from this paper for comparisons. Machine learning systems evolve, so tethering the evaluations to the exact architectures and datasets in this paper is needless. Instead, one could simply choose a variety of datasets and architectures possibly like those above and compare their detection method with a detector based on the softmax prediction probabilities from their classifiers. These are our basic recommendations for others who try to surpass the baseline on this underexplored challenge.

6 CONCLUSION

We demonstrated a softmax prediction probability baseline for error and out-of-distribution detection across several architectures and numerous datasets. We then presented the abnormality module, which provided superior scores for discriminating between normal and abnormal examples on tested cases. The abnormality module demonstrates that the baseline can be beaten in some cases, and this implies there is room for future research. Our hope is that other researchers investigate architectures which make predictions in view of abnormality estimates, and that others pursue more reliable methods for detecting errors and out-of-distribution inputs, because knowing when a machine learning system fails strikes us as highly important.

ACKNOWLEDGMENTS

We would like to thank John Wieting, Hao Tang, Karen Livescu, Greg Shakhnarovich, and our reviewers for their suggestions. We would also like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research. | ByBs9JfEx | Paper provides a simple baseline for out-of-domain/misclassification detection. Statistics on maximum softmax probabilities for in/out domain examples appear to be sufficient to classify examples as out-of-domain. | 6: Marginally above acceptance threshold | The authors present results on a number of different tasks where the goal is to determine whether a given test example is out-of-domain or likely to be misclassified. This is accomplished by examining statistics for the softmax probability for the most likely class; although the score by itself is not a particularly good measure of confidence, the statistics for out-of-domain examples are different enough from in-domain examples to allow these to be identified with some certainty.
My comments appear below:
1. As the authors point out, the AUROC/AUPR criterion is threshold independent. As a result, it is not obvious whether the thresholds that would correspond to a certain operating point (say a true positive rate of 10%) would be similar across different data sets. In other words, it would be interesting to know how sensitive the thresholds are to different test sets (or different splits of the test set). This is important if we want to use the thresholds determined on a given held-out set during evaluation on unseen data (where we would need to select a threshold).
2. Performance is reported in terms of AUROC/AUPR and models are compared against a random baseline. I think it’s a little hard to look at the differences in AUC/AUPR to get a sense for how much better the proposed classifier is than the random baseline. It would be useful, for example, if the authors could also report how strongly statistically significant some of these differences are (although admittedly they look to be pretty large in most cases).
3. In the experiments on speech recognition presented in Section 3.3, I was not entirely clear on how the model was evaluated. In Table 9, for example, is an “example” the entire utterance or just a single (stacked?) speech frame. Assuming that each “example” is an utterance, are the softmax probabilities the probability of the entire phone sequence (obtained by multiplying the local probability estimates from a Viterbi decoding?)
4. I’m curious about the decision to ignore the blank symbol’s logit in Section 3.3. Why is this required?
5. As I mentioned in the pre-review question, at least in the speech recognition case, it would have been interesting to compare performance obtained using a simple generative baseline (e.g., GMM-HMM). I think that would serve as a good indication of the ability of the proposed model to detect out-of-domain examples over the baseline. | 3: The reviewer is fairly confident that the evaluation is correct |
Hkg4TI9xl | ICLR.cc/2017/conference | 2017 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | ["Dan Hendrycks", "Kevin Gimpel"] | We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks. | ["Computer vision"] | ABSTRACT

We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.

1 INTRODUCTION

When machine learning classifiers are employed in real-world tasks, they tend to fail when the training and test distributions differ. Worse, these classifiers often fail silently by providing high-confidence predictions while being woefully incorrect (Goodfellow et al., 2015; Amodei et al., 2016). Classifiers failing to indicate when they are likely mistaken can limit their adoption or cause serious accidents. For example, a medical diagnosis model may consistently classify with high confidence, even while it should flag difficult examples for human intervention. The resulting unflagged, erroneous diagnoses could blockade future machine learning technologies in medicine. More generally and importantly, estimating when a model is in error is of great concern to AI Safety (Amodei et al., 2016).

These high-confidence predictions are frequently produced by softmaxes because softmax probabilities are computed with the fast-growing exponential function. Thus minor additions to the softmax inputs, i.e. the logits, can lead to substantial changes in the output distribution. Since the softmax function is a smooth approximation of an indicator function, it is uncommon to see a uniform distribution outputted for out-of-distribution examples. Indeed, random Gaussian noise fed into an MNIST image classifier gives a "prediction confidence" or predicted class probability of 91%, as we show later. Throughout our experiments we establish that the prediction probability from a softmax distribution has a poor direct correspondence to confidence. This is consistent with a great deal of anecdotal evidence from researchers (Nguyen & O'Connor, 2015; Yu et al., 2010; Provost et al., 1998; Nguyen et al., 2015).

However, in this work we also show the prediction probability of incorrect and out-of-distribution examples tends to be lower than the prediction probability for correct examples.
Another path is to feed a vector summarizing a layer's activations into an RNN, one vector for each layer. The RNN may determine that the activation patterns are abnormal for out-of-distribution examples. Others could make the detections fine-grained: is the out-of-distribution example a known-unknown or an unknown-unknown? A different avenue is not just to detect correct classifications but to output the probability of a correct detection. These are but a few ideas for improving error and out-of-distribution detection.

We hope that any new detection methods are tested on a variety of tasks and architectures of the researcher's choice. A basic demonstration could include the following datasets: MNIST, CIFAR, IMDB, and tweets, because vision-only demonstrations may not transfer well to other architectures and datasets. Reporting the AUPR and AUROC values is important, and so is the underlying classifier's accuracy, since an always-wrong classifier gets a maximum AUPR for error detection if error is the positive class. Also, future research need not use the exact values from this paper for comparisons. Machine learning systems evolve, so tethering the evaluations to the exact architectures and datasets in this paper is needless. Instead, one could simply choose a variety of datasets and architectures possibly like those above and compare their detection method with a detector based on the softmax prediction probabilities from their classifiers. These are our basic recommendations for others who try to surpass the baseline on this underexplored challenge.

6 CONCLUSION

We demonstrated a softmax prediction probability baseline for error and out-of-distribution detection across several architectures and numerous datasets. We then presented the abnormality module, which provided superior scores for discriminating between normal and abnormal examples on tested cases. The abnormality module demonstrates that the baseline can be beaten in some cases, and this implies there is room for future research. Our hope is that other researchers investigate architectures which make predictions in view of abnormality estimates, and that others pursue more reliable methods for detecting errors and out-of-distribution inputs, because knowing when a machine learning system fails strikes us as highly important.

ACKNOWLEDGMENTS

We would like to thank John Wieting, Hao Tang, Karen Livescu, Greg Shakhnarovich, and our reviewers for their suggestions. We would also like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research. | ryGs_6r4g | Paper explores the problem of classifier accuracy estimation and out-of-domain probability estimation. | 6: Marginally above acceptance threshold | The authors propose the use of statistics of softmax outputs to estimate the probability of error and the probability of a test sample being out-of-domain. They contrast the performance of the proposed method with directly using the softmax output probabilities, and not their statistics, as a measure of confidence.
It would be great if the authors elaborated on the idea of ignoring the logit of the blank symbol.
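One possible reading of that idea, as a hedged sketch (treating index 0 as the blank symbol is an assumption; CTC implementations place the blank at different indices):

object NonBlankSoftmax {
  // Max softmax probability of a CTC frame, computed after dropping the blank logit.
  def maxNonBlankProb(logits: Array[Double], blank: Int = 0): Double = {
    val kept = logits.indices.filter(_ != blank).map(logits(_))
    val m = kept.max                          // subtract the max to stabilize exponentials
    val exps = kept.map(l => math.exp(l - m))
    exps.max / exps.sum                       // confidence among non-blank symbols only
  }

  def main(args: Array[String]): Unit = {
    val frame = Array(6.0, 1.0, 0.5, 0.2)     // the blank logit dominates this frame
    println(maxNonBlankProb(frame))           // distribution over real phone symbols is flatter
  }
}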
It would be interesting to see the performance of the proposed methods in more confusable settings, i.e., in cases where the out-of-domain examples are more similar to the in-domain examples. In the case of speech recognition, this might correspond to using a different language's speech with an ASR system trained on a particular language. Here the acoustic characteristics of the speech signals from two different languages might be more similar to each other than noisy and clean speech signals from the same language are.
In section 4, the description of the auxiliary decoder setup might benefit from more detail.
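For readers who want that setup made concrete, here is a toy wiring of the paper's Figure 1 (illustrative only: the layer sizes, random untrained weights, and the use of reconstruction error as an input to the scorer are all assumptions, and no training is shown):

import scala.util.Random

object AbnormalityModuleSketch {
  val rng = new Random(0)
  def randMat(r: Int, c: Int): Array[Array[Double]] =
    Array.fill(r, c)(rng.nextGaussian() * 0.1)
  def matVec(w: Array[Array[Double]], x: Array[Double]): Array[Double] =
    w.map(row => row.zip(x).map { case (a, b) => a * b }.sum)
  def relu(v: Array[Double]) = v.map(math.max(0.0, _))
  def sigmoid(z: Double) = 1.0 / (1.0 + math.exp(-z))

  val enc = randMat(8, 4)    // classifier trunk ("blue" layers, frozen after joint training)
  val dec = randMat(4, 8)    // auxiliary decoder reconstructing the input (also frozen)
  val scorer = Array.fill(9)(rng.nextGaussian() * 0.1) // trainable "red" layer weights

  // Sigmoid score of how normal the input looks, built from hidden features
  // plus the decoder's reconstruction error (an assumed choice of scorer input).
  def normalityScore(x: Array[Double]): Double = {
    val h = relu(matVec(enc, x))
    val xHat = matVec(dec, h)
    val reconErr = x.zip(xHat).map { case (a, b) => (a - b) * (a - b) }.sum
    val feats = h :+ reconErr
    sigmoid(feats.zip(scorer).map { case (a, b) => a * b }.sum)
  }

  def main(args: Array[String]): Unit =
    println(normalityScore(Array(0.5, -0.2, 0.1, 0.9)))
}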
There has been recent work on performance monitoring and accuracy prediction in the area of speech recognition; some of this work is listed below.
1. Ogawa, Tetsuji, et al. "Delta-M measure for accuracy prediction and its application to multi-stream based unsupervised adaptation." Proceedings of ICASSP. 2015.
2. Hermansky, Hynek, et al. "Towards machines that know when they do not know." Proceedings of ICASSP, 2015.
3. Variani, Ehsan, et al. "Multi-stream recognition of noisy speech with performance monitoring." INTERSPEECH. 2013. | 3: The reviewer is fairly confident that the evaluation is correct
Hkg4TI9xl | ICLR.cc/2017/conference | 2017 | A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | ["Dan Hendrycks", "Kevin Gimpel"] | We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks. | ["Computer vision"] | ABSTRACT

We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.

1 INTRODUCTION

When machine learning classifiers are employed in real-world tasks, they tend to fail when the training and test distributions differ. Worse, these classifiers often fail silently by providing high-confidence predictions while being woefully incorrect (Goodfellow et al., 2015; Amodei et al., 2016). Classifiers failing to indicate when they are likely mistaken can limit their adoption or cause serious accidents. For example, a medical diagnosis model may consistently classify with high confidence, even while it should flag difficult examples for human intervention. The resulting unflagged, erroneous diagnoses could blockade future machine learning technologies in medicine. More generally and importantly, estimating when a model is in error is of great concern to AI Safety (Amodei et al., 2016).

These high-confidence predictions are frequently produced by softmaxes because softmax probabilities are computed with the fast-growing exponential function. Thus minor additions to the softmax inputs, i.e. the logits, can lead to substantial changes in the output distribution. Since the softmax function is a smooth approximation of an indicator function, it is uncommon to see a uniform distribution outputted for out-of-distribution examples. Indeed, random Gaussian noise fed into an MNIST image classifier gives a "prediction confidence" or predicted class probability of 91%, as we show later. Throughout our experiments we establish that the prediction probability from a softmax distribution has a poor direct correspondence to confidence. This is consistent with a great deal of anecdotal evidence from researchers (Nguyen & O'Connor, 2015; Yu et al., 2010; Provost et al., 1998; Nguyen et al., 2015).

However, in this work we also show the prediction probability of incorrect and out-of-distribution examples tends to be lower than the prediction probability for correct examples.
Therefore, capturing prediction probability statistics about correct or in-sample examples is often sufficient for detecting whether an example is in error or abnormal, even though the prediction probability viewed in isolation can be misleading.

These prediction probabilities form our detection baseline, and we demonstrate its efficacy through various computer vision, natural language processing, and automatic speech recognition tasks. While these prediction probabilities create a consistently useful baseline, at times they are less effective, revealing room for improvement. To give ideas for future detection research, we contribute one method which outperforms the baseline on some (but not all) tasks. This new method evaluates the quality of a neural network's input reconstruction to determine if an example is abnormal.

* Work done while the author was at TTIC. Code is available at github.com/hendrycks/error-detection

In addition to the baseline methods, another contribution of this work is the designation of standard tasks and evaluation metrics for assessing the automatic detection of errors and out-of-distribution examples. We use a large number of well-studied tasks across three research areas, using standard neural network architectures that perform well on them. For out-of-distribution detection, we provide ways to supply the out-of-distribution examples at test time, like using images from different datasets and realistically distorting inputs. We hope that other researchers will pursue these tasks in future work and surpass the performance of our baselines.

In summary, while softmax classifier probabilities are not directly useful as confidence estimates, estimating model confidence is not as bleak as previously believed. Simple statistics derived from softmax distributions provide a surprisingly effective way to determine whether an example is misclassified or from a different distribution from the training data, as demonstrated by our experimental results spanning computer vision, natural language processing, and speech recognition tasks. This creates a strong baseline for detecting errors and out-of-distribution examples which we hope future research surpasses.

2 PROBLEM FORMULATION AND EVALUATION

In this paper, we are interested in two related problems. The first is error and success prediction: can we predict whether a trained classifier will make an error on a particular held-out test example; can we predict if it will correctly classify said example? The second is in- and out-of-distribution detection: can we predict whether a test example is from a different distribution from the training data; can we predict if it is from within the same distribution?1 Below we present a simple baseline for solving these two problems. To evaluate our solution, we use two evaluation metrics.

Before mentioning the two evaluation metrics, we first note that comparing detectors is not as straightforward as using accuracy. For detection we have two classes, and the detector outputs a score for both the positive and negative class. If the negative class is far more likely than the positive class, a model may always guess the negative class and obtain high accuracy, which can be misleading (Provost et al., 1998).
We must then specify a score threshold so that some positive examples are classified correctly, but this depends upon the trade-off between false negatives (fn) and false positives (fp).

Faced with this issue, we employ the Area Under the Receiver Operating Characteristic curve (AUROC) metric, which is a threshold-independent performance evaluation (Davis & Goadrich, 2006). The ROC curve is a graph showing the true positive rate (tpr = tp/(tp + fn)) and the false positive rate (fpr = fp/(fp + tn)) against each other. Moreover, the AUROC can be interpreted as the probability that a positive example has a greater detector score/value than a negative example (Fawcett, 2005). Consequently, a random positive example detector corresponds to a 50% AUROC, and a "perfect" classifier corresponds to 100%.2

The AUROC sidesteps the issue of threshold selection, as does the Area Under the Precision-Recall curve (AUPR), which is sometimes deemed more informative (Manning & Schütze, 1999). This is because the AUROC is not ideal when the positive class and negative class have greatly differing base rates, and the AUPR adjusts for these different positive and negative base rates. For this reason, the AUPR is our second evaluation metric. The PR curve plots the precision (tp/(tp + fp)) and recall (tp/(tp + fn)) against each other. The baseline detector has an AUPR approximately equal to the precision (Saito & Rehmsmeier, 2015), and a "perfect" classifier has an AUPR of 100%. Consequently, the base rate of the positive class greatly influences the AUPR, so for detection we must specify which class is positive. In view of this, we show the AUPRs when we treat success/normal classes as positive, and then we show the areas when we treat the error/abnormal classes as positive. We can treat the error/abnormal classes as positive by multiplying the scores by -1 and labeling them positive. Note that treating error/abnormal classes as positive classes does not change the AUROC since if S is a score for a successfully classified value, and E is the score for an erroneously classified value, AUROC = P(S > E) = P(-E > -S).

1 We consider adversarial example detection techniques in a separate work (Hendrycks & Gimpel, 2016a).
2 A debatable, imprecise interpretation of AUROC values may be as follows: 90%-100%: Excellent, 80%-90%: Good, 70%-80%: Fair, 60%-70%: Poor, 50%-60%: Fail.

We begin our experiments in Section 3 where we describe a simple baseline which uses the maximum probability from the softmax label distribution in neural network classifiers. Then in Section 4 we describe a method that uses an additional, auxiliary model component trained to reconstruct the input.

3 SOFTMAX PREDICTION PROBABILITY AS A BASELINE

In what follows we retrieve the maximum/predicted class probability from a softmax distribution and thereby detect whether an example is erroneously classified or out-of-distribution. Specifically, we separate correctly and incorrectly classified test set examples and, for each example, compute the softmax probability of the predicted class, i.e., the maximum softmax probability.3 From these two groups we obtain the area under PR and ROC curves. These areas summarize the performance of a binary classifier discriminating with values/scores (in this case, maximum probabilities from the softmaxes) across different thresholds. This description treats correctly classified examples as the positive class, denoted "Success" or "Succ" in our tables.
In “Error” or “Err” we treat thethe incorrectly classified examples as the positive class; to do this we label incorrectly classifiedexamples as positive and take the negatives of the softmax probabilities of the predicted classes asthe scores.For “In,” we treat the in-distribution, correctly classified test set examples as positive and use thesoftmax probability for the predicted class as a score, while for “Out” we treat the out-of-distributionexamples as positive and use the negative of the aforementioned probability. Since the AUPRs forSuccess, Error, In, Out classifiers depend on the rate of positive examples, we list what area a randomdetector would achieve with “Base” values. Also in the upcoming results we list the mean predictedclass probability of wrongly classified examples (Pred Prob Wrong (mean)) to demonstrate that thesoftmax prediction probability is a misleading confidence proxy when viewed in isolation. The“Pred. Prob (mean)” columns show this same shortcoming but for out-of-distribution examples.Table labels aside, we begin experimentation with datasets from vision then consider tasks in naturallanguage processing and automatic speech recognition. In all of the following experiments, the AU-ROCs differ from the random baselines with high statistical significance according to the Wilcoxonrank-sum test.3.1 C OMPUTER VISIONIn the following computer vision tasks, we use three datasets: MNIST, CIFAR-10, and CIFAR-100 (Krizhevsky, 2009). MNIST is a dataset of handwritten digits, consisting of 60000 trainingand 10000 testing examples. Meanwhile, CIFAR-10 has colored images belonging to 10 differentclasses, with 50000 training and 10000 testing examples. CIFAR-100 is more difficult, as it has 100different classes with 50000 training and 10000 testing examples.In Table 1, we see that correctly classified and incorrectly classified examples are sufficiently distinctand thus allow reliable discrimination. Note that the area under the curves degrade with imagerecognizer test error.Next, let us consider using softmax distributions to determine whether an example is in- or out-of-distribution. We use all test set examples as the in-distribution (positive) examples. For out-of-distribution (negative) examples, we use realistic images and noise. For CIFAR-10 and CIFAR-100,we use realistic images from the Scene UNderstanding dataset (SUN), which consists of 397 differ-ent scenes (Xiao et al., 2010). For MNIST, we use grayscale realistic images from three sources.Omniglot (Lake et al., 2015) images are handwritten characters rather than the handwritten digits inMNIST. Next, notMNIST (Bulatov, 2011) consists of typeface characters. Last of the realistic im-ages, CIFAR-10bw are black and white rescaled CIFAR-10 images. The synthetic “Gaussian” data3We also tried using the KL divergence of the softmax distribution from the uniform distribution for detec-tion. With divergence values, detector AUROCs and AUPRs were highly correlated with AUROCs and AUPRsfrom a detector using the maximum softmax probability. This divergence is similar to entropy (Steinhardt &Liang, 2016; Williams & Renals, 1997).3Published as a conference paper at ICLR 2017Dataset AUROC/BaseAUPRSucc/BaseAUPRErr/BasePred. ProbWrong(mean)Test SetErrorMNIST 97/50 100/98 48/1.7 86 1.69CIFAR-10 93/50 100/95 43/5 80 4.96CIFAR-100 87/50 96/79 62/21 66 20.7Table 1: The softmax predicted class probability allows for discrimination between correctly andincorrectly classified test set examples. “Pred. 
In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base  Pred. Prob (mean)
CIFAR-10/SUN        95/50  89/33  97/67  72
CIFAR-10/Gaussian   97/50  98/49  95/51  77
CIFAR-10/All        96/50  88/24  98/76  74
CIFAR-100/SUN       91/50  83/27  96/73  56
CIFAR-100/Gaussian  88/50  92/43  80/57  77
CIFAR-100/All       90/50  81/21  96/79  63
MNIST/Omniglot      96/50  97/52  96/48  86
MNIST/notMNIST      85/50  86/50  88/50  92
MNIST/CIFAR-10bw    95/50  95/50  95/50  87
MNIST/Gaussian      90/50  90/50  91/50  91
MNIST/Uniform       99/50  99/50  98/50  83
MNIST/All           91/50  76/20  98/80  89
Table 2: Distinguishing in- and out-of-distribution test set data for image classification. CIFAR-10/All is the same as CIFAR-10/(SUN, Gaussian). All values are percentages.

The results are shown in Table 2. Notice that the mean predicted/maximum class probabilities (Pred. Prob (mean)) are above 75%, but if the prediction probability alone is translated to confidence, the softmax distribution should be more uniform for CIFAR-100. This again shows softmax probabilities should not be viewed as a direct representation of confidence. Fortunately, out-of-distribution examples sufficiently differ in the prediction probabilities from in-distribution examples, allowing for successful detection and generally high area under PR and ROC curves.

For reproducibility, let us specify the model architectures. The MNIST classifier is a three-layer, 256 neuron-wide, fully-connected network trained for 30 epochs with Adam (Kingma & Ba, 2015). It uses a GELU nonlinearity (Hendrycks & Gimpel, 2016b), xΦ(x), where Φ(x) is the CDF of the standard normal distribution. We initialize our weights according to (Hendrycks & Gimpel, 2016c), as it is suited for arbitrary nonlinearities. For CIFAR-10 and CIFAR-100, we train a 40-4 wide residual network (Zagoruyko & Komodakis, 2016) for 50 epochs with stochastic gradient descent using restarts (Loshchilov & Hutter, 2016), the GELU nonlinearity, and standard mirroring and cropping data augmentation.

3.2 NATURAL LANGUAGE PROCESSING

Let us turn to a variety of tasks and architectures used in natural language processing.

3.2.1 SENTIMENT CLASSIFICATION

The first NLP task is binary sentiment classification using the IMDB dataset (Maas et al., 2011), a dataset of polarized movie reviews with 25000 training and 25000 test reviews. This task allows us to determine if classifiers trained on a relatively small dataset still produce informative softmax distributions. For this task we use a linear classifier taking as input the average of trainable, randomly initialized word vectors with dimension 50 (Joulin et al., 2016; Iyyer et al., 2015).
We train for 15 epochs with Adam and early stopping based upon 5000 held-out training reviews. Again, Table 3 shows that the softmax distributions differ between correctly and incorrectly classified examples, so prediction probabilities allow us to detect reliably which examples are right and wrong.

Dataset  AUROC/Base  AUPR Succ/Base  AUPR Err/Base  Pred. Prob Wrong (mean)  Test Set Error
IMDB     82/50       97/88           36/12          74                       11.9
Table 3: Detecting correct and incorrect classifications for binary sentiment classification.

Now we use the Customer Review (Hu & Liu, 2004) and Movie Review (Pang et al., 2002) datasets as out-of-distribution examples. The Customer Review dataset has reviews of products rather than only movies, and the Movie Review dataset has snippets from professional movie reviewers rather than full-length amateur reviews. We leave all test set examples from IMDB as in-distribution examples, and out-of-distribution examples are the 500 or 1000 test reviews from the Customer Review and Movie Review datasets, respectively. Table 4 displays detection results, showing a similar story to Table 2.

In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base  Pred. Prob (mean)
IMDB/Customer Reviews  95/50  99/89  60/11  62
IMDB/Movie Reviews     94/50  98/72  80/28  63
IMDB/All               94/50  97/66  84/34  63
Table 4: Distinguishing in- and out-of-distribution test set data for binary sentiment classification. IMDB/All is the same as IMDB/(Customer Reviews, Movie Reviews). All values are percentages.

3.2.2 TEXT CATEGORIZATION

We turn to text categorization tasks to determine whether softmax distributions are useful for detecting similar but out-of-distribution examples. In the following text categorization tasks, we train classifiers to predict the subject of the text they are processing. In the 20 Newsgroups dataset (Lang, 1995), there are 20 different newsgroup subjects with a total of 20000 documents for the whole dataset. The Reuters 8 (Lewis et al., 2004) dataset has eight different news subjects with nearly 8000 stories in total. The Reuters 52 dataset has 52 news subjects with slightly over 9000 news stories; this dataset can have as few as three stories for a single subject.

For the 20 Newsgroups dataset we train a linear classifier on 30-dimensional word vectors for 20 epochs. Meanwhile, Reuters 8 and Reuters 52 use one-layer neural networks with a bag-of-words input and a GELU nonlinearity, all optimized with Adam for 5 epochs. We train on a subset of subjects, leaving out 5 newsgroup subjects from 20 Newsgroups, 2 news subjects from Reuters 8, and 12 news subjects from Reuters 52, leaving the rest as out-of-distribution examples. Table 5 shows that with these datasets and architectures, we can detect errors dependably, and Table 6 informs us that the softmax prediction probabilities allow for detecting out-of-distribution subjects.

Dataset        AUROC/Base  AUPR Succ/Base  AUPR Err/Base  Pred. Prob Wrong (mean)  Test Set Error
15 Newsgroups  89/50       99/93           42/7.3         53                       7.31
Reuters 6      89/50       100/98          35/2.5         77                       2.53
Reuters 40     91/50       99/92           45/7.6         62                       7.55
Table 5: Detecting correct and incorrect classifications for text categorization.

In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base  Pred. Prob (mean)
15/5 Newsgroups      75/50  92/84   45/16   65
Reuters6/Reuters2    92/50  100/95  56/4.5  72
Reuters40/Reuters12  95/50  100/93  60/7.2  47
Table 6: Distinguishing in- and out-of-distribution test set data for text categorization.

3.2.3 PART-OF-SPEECH TAGGING

Part-of-speech (POS) tagging of newswire and social media text is our next challenge. We use the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), which contains 45 distinct POS tags. For social media, we use POS-annotated tweets (Gimpel et al., 2011; Owoputi et al., 2013), which contain 25 tags.
For the WSJ tagger, we train a bidirectional long short-term memory recurrent neural network (Hochreiter & Schmidhuber, 1997) with three layers, 128 neurons per layer, with randomly initialized word vectors, and this is trained on 90% of the corpus for 10 epochs with stochastic gradient descent with a batch size of 32. The tweet tagger is simpler, as it is a two-layer neural network with a GELU nonlinearity, a weight initialization according to (Hendrycks & Gimpel, 2016c), pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al., 2013), and a hidden layer size of 256, all while training on 1000 tweets for 30 epochs with Adam and early stopping with 327 validation tweets. Error detection results are in Table 7.

Dataset  AUROC/Base  AUPR Succ/Base  AUPR Err/Base  Pred. Prob Wrong (mean)  Test Set Error
WSJ      96/50       100/96          51/3.7         71                       3.68
Twitter  89/50       98/87           53/13          69                       12.59
Table 7: Detecting correct and incorrect classifications for part-of-speech tagging.

For out-of-distribution detection, we use the WSJ tagger on the tweets as well as weblog data from the English Web Treebank (Bies et al., 2012). The results are shown in Table 8. Since the weblog data is closer in style to newswire than are the tweets, it is harder to detect whether a weblog sentence is out-of-distribution than a tweet. Indeed, since POS tagging is done at the word level, we are detecting whether each word is out-of-distribution given the word and contextual features. With this in mind, we see that it is easier to detect words as out-of-distribution if they are from tweets than from blogs.

In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base  Pred. Prob (mean)
WSJ/Twitter  80/50  98/92  41/7.7  81
WSJ/Weblog*  61/50  88/86  30/14   93
Table 8: Detecting out-of-distribution tweets and blog articles for part-of-speech tagging. All values are percentages. *These examples are atypically close to the training distribution.

3.3 AUTOMATIC SPEECH RECOGNITION

Now we consider a task which uses softmax values to construct entire sequences rather than determine an input's class. Our sequence prediction system uses a bidirectional LSTM with two layers and a clipped GELU nonlinearity, optimized for 60 epochs with RMSProp, trained on 80% of the TIMIT corpus (Garofolo et al., 1993). The LSTM is trained with connectionist temporal classification (CTC) (Graves et al., 2006) for predicting sequences of phones given MFCCs, energy, and first and second deltas of a 25ms frame. When trained with CTC, the LSTM learns to have its phone label probabilities spike momentarily while mostly predicting blank symbols otherwise. In this way, the softmax is used differently from typical classification problems, providing a unique test for our detection methods.

We do not show how the system performs on correctness/incorrectness detection because errors are not binary and instead lie along a range of edit distances. However, we can perform out-of-distribution detection.

In-Distribution / Out-of-Distribution  AUROC/Base  AUPR In/Base  AUPR Out/Base  Pred. Prob (mean)
TIMIT/TIMIT+Airport     99/50   99/50   99/50   59
TIMIT/TIMIT+Babble      100/50  100/50  100/50  55
TIMIT/TIMIT+Car         98/50   98/50   98/50   59
TIMIT/TIMIT+Exhibition  100/50  100/50  100/50  57
TIMIT/TIMIT+Restaurant  98/50   98/50   98/50   60
TIMIT/TIMIT+Street      100/50  100/50  100/50  52
TIMIT/TIMIT+Subway      100/50  100/50  100/50  56
TIMIT/TIMIT+Train       100/50  100/50  100/50  58
TIMIT/Chinese           85/50   80/34   90/66   64
TIMIT/All               97/50   79/10   100/90  58
Table 9: Detecting out-of-distribution distorted speech. All values are percentages.

Mixing the TIMIT audio with realistic noises from the Aurora-2 dataset (Hirsch & Pearce, 2000), we keep the TIMIT audio volume at 100% and noise volume at 30%, giving a mean SNR of approximately 5.
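A minimal sketch tying the baseline evaluation above together: the maximum softmax probability as a detection score, and AUROC computed as the probability that an in-distribution score exceeds an out-of-distribution one (the pairwise formula below is exact but quadratic; a real evaluation would use a sorting-based routine):

object SoftmaxBaseline {
  // Maximum softmax probability of a set of logits, stabilized by subtracting the max.
  def maxSoftmax(logits: Array[Double]): Double = {
    val m = logits.max
    val exps = logits.map(l => math.exp(l - m))
    exps.max / exps.sum
  }

  // AUROC = probability a positive (in-distribution) score exceeds a negative one; ties count half.
  def auroc(pos: Seq[Double], neg: Seq[Double]): Double = {
    val pairs = for (p <- pos; n <- neg)
      yield if (p > n) 1.0 else if (p == n) 0.5 else 0.0
    pairs.sum / (pos.size * neg.size)
  }

  def main(args: Array[String]): Unit = {
    val inDist  = Seq(Array(4.0, 0.1, 0.2), Array(0.1, 5.0, 0.3)).map(maxSoftmax)
    val outDist = Seq(Array(1.0, 0.9, 1.1), Array(0.8, 1.0, 0.9)).map(maxSoftmax)
    println(auroc(inDist, outDist)) // 1.0 here: in-distribution scores are higher
  }
}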
| BkIlDCr4g | Important topic | 6: Marginally above acceptance threshold | The paper addresses the problem of detecting if an example is misclassified or out-of-distribution. This is a very important topic and the study provides a good baseline. Although it misses strong novel methods for the task, the study contributes to the community. | 3: The reviewer is fairly confident that the evaluation is correct
Bks8cPcxe | ICLR.cc/2017/conference | 2017 | DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning | ["Tian Zhao", "Xiao Bing Huang", "Yu Cao"] | In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the-art tools, such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their applicable domains, are programming libraries with fixed user interfaces, internal representations, and execution environments. This makes it difficult to implement portable and customized DL applications.
In this paper, we present DeepDSL, a domain-specific language (DSL) embedded in Scala that compiles deep networks written in DeepDSL to Java source code. DeepDSL provides
(1) intuitive constructs to support compact encoding of deep networks;
(2) symbolic gradient derivation of the networks;
(3) static analysis for memory consumption and error detection; and
(4) DSL-level optimization to improve memory and runtime efficiency.
DeepDSL programs are compiled into compact, efficient, customizable, and portable Java source code, which operates the CUDA and CUDNN interfaces running on NVIDIA GPUs via a Java Native Interface (JNI) library. We evaluated DeepDSL with a number of popular DL networks. Our experiments show that the compiled programs have very competitive runtime performance and memory efficiency compared to the existing libraries. | ["Deep learning", "Applications", "Optimization"] | ABSTRACT

In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the-art tools, such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their applicable domains, are programming libraries with fixed user interfaces, internal representations, and execution environments. This makes it difficult to implement portable and customized DL applications.

In this paper, we present DeepDSL, a domain-specific language (DSL) embedded in Scala that compiles deep networks written in DeepDSL to Java source code. DeepDSL provides (1) intuitive constructs to support compact encoding of deep networks; (2) symbolic gradient derivation of the networks; (3) static analysis for memory consumption and error detection; and (4) DSL-level optimization to improve memory and runtime efficiency.

DeepDSL programs are compiled into compact, efficient, customizable, and portable Java source code, which operates the CUDA and CUDNN interfaces running on NVIDIA GPUs via a Java Native Interface (JNI) library. We evaluated DeepDSL with a number of popular DL networks. Our experiments show that the compiled programs have very competitive runtime performance and memory efficiency compared to the existing libraries.

1 INTRODUCTION

Multimedia is increasingly becoming the "biggest big data" as the most important and valuable source for insights and information (Chen et al., 2015a). Recently, a new set of machine learning algorithms named "Deep Learning" (DL) (LeCun et al., 2015), which aims at learning multiple levels of representation and abstraction that help infer knowledge from multimedia data (e.g. text, image, audio, and video), is making astonishing gains in machine vision, speech recognition, multimedia analysis, and drug designing.

However, current tools, such as Theano (Bergstra et al., 2010), Torch7 (Collobert et al., 2011), Caffe (Jia et al., 2014), Computational Network Toolkit (CNTK) (Agarwal et al., 2014), and TensorFlow (Abadi et al., 2016), while efficient in their applicable domains, are essentially application libraries with some inherent limitations.

As with all programming libraries, the DL libraries have fixed bindings for key data structures such as tensors and tensor-related computations. Users have to adhere to the data structure, which limits their ability to apply application-specific optimization or port it to target runtime platforms. The internal representation of their control flow logic is opaque to users. For example, TensorFlow and CNTK use directed acyclic graphs to represent the DL network computation and generate runtime binaries from the graphs. However, these graphs are not designed for user-level access, which limits the runtime platforms of the DL applications to what the libraries provide.

In general, the current libraries have to be built against specific platforms that they are designed for, which can be difficult for platforms such as Windows.
Also, changing the implementation of specific types of layers or data structures is very challenging without a thorough understanding of the underlying implementation. This limits the portability and reusability of these libraries.

To address these limitations, we present DeepDSL, a domain-specific language embedded in Scala, for developing DL applications. DeepDSL allows users to define DL networks as tensor functions. Unlike the existing DL libraries, DSL tensors are not built-in entities. Instead, they are defined as indexed scalar expressions. This exposes tensor-related computation at the DSL level. As a result, the symbolic gradient derivation of the DL network is fully abstract and the resulting DSL program allows compiler-based optimizations such as code motion and common sub-expression elimination.

The DeepDSL compiler translates the optimized DSL program into a Java source program that is compact, efficient, customizable, and portable. The generated Java source only requires a small Java library JCuda1 that calls the NVIDIA CUDA interface using JNI. Since the JVM is supported on all major operating systems, the generated Java source can run on any CUDA-enabled platform. Also, since the generated Java source is compact and human readable, users can customize it easily through an editor or IDE such as eclipse2. The generated Java source automatically saves the learned parameters into files after a training period is over. When the user starts the program again (perhaps after adjusting some parameters such as momentum and learning rate), it automatically loads the saved parameters and continues the training from where it stopped at the previous execution. The code also supports loading parameters trained with different data for fine-tuning purposes.

DeepDSL supports static analysis of the DSL program to detect network design errors such as mismatching tensor dimensions before compiling the DSL program into Java source. It statically analyzes the memory consumption at each step of the computation and produces a table detailing the memory usage that would occur at runtime, which includes the memory for feature maps, gradient maps, parameter weights, and convolution workspace. It also uses the static information to reschedule computation so that tensor memory can be freed as early as possible to reduce memory consumption at runtime. Such processing has been demonstrated to have great benefit. For example, DeepDSL continues to run well under the GPU memory limit on the testing server with a single GPU when the batch size of ResNet is increased from 32 to 64, while both Caffe and Tensorflow fail due to out-of-memory exceptions.

DeepDSL is available at https://github.com/deepdsl/deepdsl.

The rest of the paper is organized as follows. We give an overview of DeepDSL in Section 2 and explain the DSL syntax using examples in Section 3. We discuss the intermediate representation in Section 4 and code generation in Section 5. We present details of performance evaluation using DeepDSL in Section 6 and related work in Section 7. We conclude the paper in Section 8.

1 http://www.jcuda.org
2 http://www.eclipse.org

Figure 1: Basic workflow of DeepDSL.

2 OVERVIEW

DeepDSL directly encodes the mathematical representation of DL networks, where each layer is represented as a tensor function.
The entire network is then represented as a composition of these functions. DeepDSL symbolically derives the partial derivatives of the tensor functions with respect to tensor variables so that the backward gradients of network parameters are generated automatically.

A high-level overview of DeepDSL is shown in Figure 1. A DeepDSL program is compiled in several stages. At the first stage, the backward gradients of deep networks are derived symbolically to become the intermediate representation (IR). The IR expressions are in turn passed through a series of simplification and optimization steps at the second stage. At the third stage, the DeepDSL compiler performs an SSA (Static Single Assignment) transformation of the optimized IR to break down complex expressions. Redundant computation is eliminated at this stage and the resulting expressions are reordered to optimize memory usage. Memory deallocation and in-place computation are also scheduled at this stage. Lastly, the finalized IR expressions are translated to Java source code.

DeepDSL supports two modes of computation: memory efficient or runtime efficient. In the memory efficient mode, tensor memory on the GPU is dynamically allocated and deallocated, which might decrease runtime performance. In the runtime efficient mode, tensor memory on the GPU is reused and not deallocated until the end of the training. In this mode, more memory may be used but with greater runtime performance. To make the switch, the user only needs to switch a flag in the generated Java source. The memory efficient mode can be used for machines with limited GPU memory. Further memory reduction can be achieved by placing a limit on the (convolution) workspace memory.

3 SYNTAX

1 val K = 10 // # of classes
2 val N = 500; val C = 1; val N1 = 28; val N2 = 28 // batch size, channel, and x/y size
3
4 // Specifying training (and test) dataset
5 val y = Vec._new(Mnist, "label", "Y", N) // labels
6 val x = Vec._new(Mnist, "image", "X", N, C, N1, N2) // images
7
8 val cv1 = CudaLayer.convolv("cv1", 5, 20) // kernel size (5,5), output channel 20
9 val cv2 = CudaLayer.convolv("cv2", 5, 50)
10 val mp = CudaLayer.max_pool(2) // max pooling, kernel 2 stride 2
11 val flat = Layer.flatten(4, 1) // flatten a 4-D tensor from axis 1 to 3
12 val f = Layer.full("fc1", 500) // fully connected layer, output 500
13 val f2 = Layer.full("fc2", K)
14 val relu = CudaLayer.relu(2) // 2-D ReLU activation
15 val softmax = CudaLayer.softmax // softmax
16
17 // o is a left-associative operator for function composition
18 val network = f2 o relu o f o flat o mp o cv2 o mp o cv1
19
20 val x1 = x.asCuda // load x to GPU
21 val y1 = y.asIndicator(K).asCuda // turn each label into an indicator vector
22 val c = (Layer.log_loss(y1) o softmax o network) (x1) // training loss
23 val p = (Layer.precision(y1) o network) (x1) // test accuracy
24
25 val param = c.freeVar.toList // parameters to be trained
26
27 // output file, train and test iteration, learn rate, momentum, decay, gradient cropping (0 means none)
28 val solver = Train("lenet", 1000, 10, 0.01f, 0.9f, 0.0005f, 0)
29
30 val loop = Loop(c, p, (x, y), param, solver) // training and testing loop
31 cudnn_gen.print(loop) // generate Java source program

Figure 2: DeepDSL code for training and testing Lenet.

Figure 2 shows the complete implementation for compiling a program to train and test Lenet (LeCun et al., 1998).
Since DeepDSL is embedded in Scala, the program is in Scala syntax and it can be compiled and executed with a programming tool such as eclipse. This program consists of variable declarations of the form val x = e, where val starts a declaration for the variable x and assigns it with the value of e.

Line 5 and 6 declare the tensors that represent labels and images for the training data. We also use the same variables for testing since the DSL compiles the same variables into different code for training and testing.

Line 8–15 declare the tensor functions that represent the layers in the network. Most of the layers are self-explanatory except val flat = Layer.flatten(4, 1), which is used to convert the 4-D tensor returned by the last pooling layer into a 2-D tensor for the next fully connected layer.

Line 18 constructs the network as function compositions using the operator o, which is left associative. For example, f2 o relu o f should be read as (f2 o relu) o f. A composed function such as network is still a function.

Line 22 defines the expression that represents the loss of the network when applied to the training data. Line 23 defines the testing accuracy of the trained network.

Line 25 extracts the parameters such as weights and biases from the loss expression. Line 28–31 defines the solver object, passes it to the loop object for training and testing, and then generates the Java source code.

Layer reuse. Since each layer is a tensor function, for the layers such as ReLU and pooling that do not contain parameters, we can simply reuse them in a network. For example, in the following definition for Alexnet, relu2 (2-dimensional), relu (4-dimensional), pool (max pooling), drop (drop out), and lrn (local response normalization) are reused.

1 val network = full8 o
2   drop o relu2 o full7 o
3   drop o relu2 o full6 o flat o
4   pool o relu o cv5 o
5   relu o cv4 o
6   relu o cv3 o
7   pool o lrn o relu o cv2 o
8   pool o lrn o relu o cv1

Layer function reuse simplifies the definitions of deep networks. For Alexnet, only 5 convolution layers and 3 fully connected layers need to be defined separately. Note that the above definition can be written in just one line and the line breaks are only for clarity.

Network reuse. For complex networks such as Googlenet, we can define reusable subnets to achieve compact definitions.
For example, the Scala method inception below returns a tensor function that represents an inception subnet in Googlenet.

1 val w = Param.xavier // Xavier initialization for weight
2 val b0 = Param.const(0, 2, 0) // constant 0 for bias, learn rate/decay multiplier 2 and 0
3 val b02 = Param.const(0.2f, 2, 0) // constant 0.2 for bias
4 val ipool = CudaLayer.max_pool(3, 1, 1) // max pooling kernel size, stride, and padding
5
6 def inception(n: Int) = {
7   // convolution name, kernel size, channel, stride, padding, weight and bias configuration
8   val icv1 = CudaLayer.convolv(s"cv${n}1", 1, 64, 1, 0, w, b02)
9   val icv2 = CudaLayer.convolv(s"cv${n}2", 1, 96, 1, 0, w, b02)
10  val icv3 = CudaLayer.convolv(s"cv${n}3", 3, 128, 1, 1, w, b02)
11  val icv4 = CudaLayer.convolv(s"cv${n}4", 1, 16, 1, 0, w, b02)
12  val icv5 = CudaLayer.convolv(s"cv${n}5", 5, 32, 1, 2, w, b02)
13  val icv6 = CudaLayer.convolv(s"cv${n}6", 1, 32, 1, 0, w, b02)
14
15  val p = Vec._new(4) // a 4-dimensional tensor variable
16
17  // a tensor function with parameter p
18  VecFun(p, CudaLayer.concat( (relu o icv1)(p), // concatenation of 4 subnets connected to p
19    (relu o icv3 o relu o icv2)(p),
20    (relu o icv5 o relu o icv4)(p),
21    (relu o icv6 o ipool)(p) )
22  )
23 }

Using the inception method, we can define three subnets that are used to define the test accuracy p (line 6 below) of the main branch of Googlenet.

1 val network3 = full7 o flat o drop o pool7 o inception(9) o inception(8) o pool o inception(7)
2 val network2 = inception(6) o inception(5) o inception(4)
3 val network1 = inception(3) o pool o inception(2) o inception(1) o
4   pool o lrn o relu o cv3 o relu o cv2 o lrn o pool o relu o cv1
5
6 val p = Layer.precision(y1)(network3(network2(network1(x1)))) // accuracy at main branch

The three subnets are also used to define the training loss c (line 16 below) that adds up the losses of the three branches of Googlenet.

1 def branch(n: Int) = { // a subnet reused in the two side branches of Googlenet
2   val cv = CudaLayer.convolv(s"b${n}cv", 1, 128, 1, 0, w, b02)
3   val f1 = Layer.full(s"b${n}fc1", 1024, w, b02)
4   val f2 = Layer.full(s"b${n}fc2", K, w, b0)
5   f2 o drop2 o relu2 o f1 o flat o relu o cv o bpool
6 }
7 val stage2 = { // Vec2ScalarFun defines a function from tensor to scalar
8   val p = Vec._new(4)
9   Vec2ScalarFun(p, softmax_loss(network3(p)) + softmax_loss(branch(2)(p)) * Real(0.3f, "loss2"))
10 }
11 val stage1 = { // Real(0.3f, "loss1") is a named constant of value 0.3
12   val p = Vec._new(4)
13   Vec2ScalarFun(p, stage2(network2(p)) + softmax_loss(branch(1)(p)) * Real(0.3f, "loss1"))
14 }
15
16 val c = (stage1 o network1)(x1) // training loss of the three branches

Other than some definitions of shared layers such as activation, pooling, normalization, drop out, and softmax loss, this is the complete definition of Googlenet.

This compact style of definition is similar to that of Theano, Tensorflow, Torch, and Mxnet. In the example, we used two types of functions, VecFun and Vec2ScalarFun, which model computation that takes a tensor as input and returns a tensor or scalar, respectively. These functions can be composed or applied to arguments. When applied, they are similar to functions in Theano, Tensorflow, and Mxnet.
When composed, they are similar to the sequential container of Torch.

4 INTERMEDIATE REPRESENTATION

The unique advantage of DeepDSL is that it is entirely high-level, so it permits static analysis of the deep networks for error checking, memory analysis, optimization, and code generation.

While the DeepDSL compiler is implemented in Scala, it has no runtime dependency on code in Scala at all. The whole purpose of using Scala as the host language for DeepDSL is that Scala is a strongly typed language with flexible syntax. As a result, the syntax of DeepDSL can resemble that of a standalone DSL without having a parser. After taking symbolic gradients, a DeepDSL program is immediately evaluated to intermediate representation (IR), which is essentially an abstract syntax tree (AST). The DeepDSL compiler analyzes its IR expressions by performing a series of optimization and simplification steps. During this process, DeepDSL checks the compatibility of the layers, infers concrete dimensions for variables, removes duplicated computation, and optimizes IR expressions for code generation.

The IR expressions of DeepDSL are also abstract and human readable. For example, Figure 3 shows a portion of the IR expressions for Lenet, where the first column shows an IR expression that represents a single-step computation, the second column shows the dimensions of the tensor being computed if applicable, the third column shows the memory usage of that tensor, the fourth column shows the current memory consumption if memory is dynamically allocated and deallocated, and the last column shows the memory consumption if memory is reused instead of deallocated.

IR expressions such as the one on line 14 are for GPU memory deallocation. The DeepDSL compiler analyzes the dependencies of the IR expressions, reorders them, and determines the earliest point where a tensor can be freed. For example, the last use of the tensor X18 is at line 13, so it can be freed next. The tensor X8 cannot be freed until much later since it is used at line 26.

If we compile IR expressions such as line 14 to actual memory deallocation, then the maximum dynamic memory consumed peaks at line 27, which is about 59 MB. However, frequent memory allocation and deallocation on an NVIDIA GPU reduces runtime performance. Therefore, the DeepDSL runtime library (implemented in Java) supports memory reuse instead of deallocation. The DeepDSL runtime maintains a pool of allocated memory blocks; when a tensor is freed, its memory is returned to the pool, and when a tensor is allocated, the runtime tries to find a suitable block in the pool first (a sketch of this strategy appears below). With memory reuse, the memory consumption always peaks at the last line, which is about 77 MB.
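A minimal sketch of such a reuse pool, under the assumption that freed blocks are matched to new requests by exact size (DeepDSL's actual runtime may match blocks differently):

import scala.collection.mutable

object TensorPool {
  private val free = mutable.Map.empty[Long, mutable.Stack[Long]] // size -> freed block handles
  private var nextHandle = 0L

  // Try the pool before "allocating" a fresh block (handles stand in for GPU pointers).
  def alloc(size: Long): Long = free.get(size) match {
    case Some(s) if s.nonEmpty => s.pop()     // reuse a freed block of the same size
    case _ => { nextHandle += 1; nextHandle } // otherwise allocate a new block
  }

  // Instead of deallocating, return the block to the pool for later reuse.
  def release(size: Long, handle: Long): Unit =
    free.getOrElseUpdate(size, mutable.Stack.empty[Long]).push(handle)

  def main(args: Array[String]): Unit = {
    val a = alloc(1024); release(1024, a)
    println(alloc(1024) == a) // true: the second allocation reused the freed block
  }
}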
1  IR expression                                   Dimensions     Mem          Current total  Total w/o dealloc
2  -------------------------------------------------------------------------------------------------
3  val X7 = Cuda(X)                                500 1 28 28    1.568000     1.568000     1.568000
4  val X8 = Convolv(1,0)(X7,cv1_W,cv1_B)           500 20 24 24   23.040001    24.608000    24.608000
5  val X9 = Pooling(2,2,0,true)(X8)                500 20 12 12   5.760000     30.368000    30.368000
6  val X10 = Convolv(1,0)(X9,cv2_W,cv2_B)          500 50 8 8     6.400000     36.768002    36.768002
7  val X11 = Pooling(2,2,0,true)(X10)              500 50 4 4     1.600000     38.368000    38.368000
8  val X12 = (X11[1><3])(i | @) * (fc1_W)(j | @)   500 500        1.000000     39.368000    39.368000
9  val X14 = (X12 + (i) => fc1_B)                  500 500        0.000000     39.368000    39.368000
10 val X15 = ReLU()(X14)                           500 500        0.000000     39.368000    39.368000
11 val X16 = (X15)(i | @) * (fc2_W)(j | @)         500 10         0.020000     39.388000    39.388000
12 val X18 = (X16 + (i) => fc2_B)                  500 10         0.000000     39.388000    39.388000
13 val X19 = Softmax()(X18)                        500 10         0.020000     39.408001    39.408001
14 Dealloc(X18)                                                   -0.020000    39.388000    39.408001
15 val X20 = Cuda(Indicator(Y, 10))                500 10         0.020000     39.408001    39.408001
16 val X21 = Log X19.copy                          500 10         0.020000     39.428001    39.428001
17 val X52 = 1/(X19.copy)                          500 10         0.020000     39.448002    39.448002
18 Print(((0 - (X20 . X21)) / |500|))                             0.000000     39.448002    39.448002
19
20 ................... 30 lines omitted .......................
21
22 cv2_B < X71 * d_Convolv(1,0)()/d_cv2_B                         0.000000     36.768002    48.448002
23 val X72 = X71 * d_Convolv(1,0)(cv2_W)/d_X9      500 20 12 12   5.760000     42.528000    54.208000
24 cv2_W < X71 * d_Convolv(1,0)(X9)/d_cv2_W                       0.000000     42.528000    54.208000
25 Dealloc(X71)                                                   -6.400000    36.127998    54.208000
26 val X74 = X72 * d_Pooling(2,2,0,true)(X9,X8)/d_X8
27                                                 500 20 24 24   23.040001    59.167999    77.248001
28 Dealloc(X72)                                                   -5.760000    53.408001    77.248001
29 Dealloc(X9)                                                    -5.760000    47.647999    77.248001
30 Dealloc(X8)                                                    -23.040001   24.608000    77.248001
31 cv1_B < X74 * d_Convolv(1,0)()/d_cv1_B                         0.000000     24.608000    77.248001
32 cv1_W < X74 * d_Convolv(1,0)(X7)/d_cv1_W                       0.000000     24.608000    77.248001
33 Dealloc(X74)                                                   -23.040001   1.568000     77.248001
34 Dealloc(X7)                                                    -1.568000    0.000000     77.248001

Figure 3: A portion of the IR expressions and memory information compiled from Lenet (memory in MB).

The DeepDSL compiler generates Java source code for each of the IR expressions. For example, line 3 loads a batch of images into GPU memory. Lines 4 and 5 perform the forward convolution and pooling computation, respectively. Line 18 prints out the training loss. Line 22 updates the bias of the second convolution layer with its gradient.

Some computation (e.g., Log) is always in-place. Therefore, we make a copy of a tensor if it is passed to such computation (e.g., Log X19.copy). A gradient update such as cv1_W < X74 * d_Convolv(1,0)(X7)/d_cv1_W may be implemented as in-place computation as well, by directly updating the tensor cv1_W when computing the backward filter gradient of the convolution layer cv1.
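The copy rule just described can be pictured as a small rewrite over IR terms. The Scala sketch below is a simplified model with invented constructors, not DeepDSL's actual IR types: it wraps the argument of every operator known to compute in place in a Copy node, so the original tensor remains intact for later uses.

object CopyRewriteDemo extends App {
  // A toy IR: a tensor is either a named value or an operator applied to a term.
  sealed trait IR
  case class Ref(name: String)         extends IR
  case class Op(name: String, arg: IR) extends IR
  case class Copy(arg: IR)             extends IR

  // Operators that overwrite their input, like Log in the IR above.
  val inPlace = Set("Log", "Inv")

  // Insert a Copy under each in-place operator so its input is preserved.
  def protect(t: IR): IR = t match {
    case Op(n, a) if inPlace(n) => Op(n, Copy(protect(a)))
    case Op(n, a)               => Op(n, protect(a))
    case other                  => other
  }

  println(protect(Op("Log", Ref("X19"))))  // prints Op(Log,Copy(Ref(X19)))
}

An actual compiler might copy only when the input has later uses; this sketch copies unconditionally to keep the rule visible.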
5 COMPILATION

A DeepDSL program compiles to a Java source program, which uses a small Java library, JCuda, to call CUDA and CuDNN via a JNI wrapper. The compiled Java code does not depend on the DeepDSL compiler or Scala, which makes it more portable and easier to integrate with other applications. Most of the current tools use platform-dependent programming languages such as C, Python, and Lua, which compile to specific binaries for each installation. Since our compiled program is Java, it runs directly on any platform that supports the JVM. Compilation of Java is trivial on most computing platforms. For example, the Java source generated by DeepDSL on a Windows laptop can run on a Linux server without any modifications. While it takes effort to install tools like Tensorflow, Caffe, or Torch on machines with different system architectures, running the Java code generated by DeepDSL requires very little effort.

Gradient Derivation and Optimization  The gradient derivation and optimization are implemented by the Loop class called in the code below:

1 val loop = Loop(loss, accuracy, (x, y), param, solver)

To derive the gradient of a scalar expression loss with respect to a tensor variable p, we can write val grad = loss.grad(p), which evaluates to a tensor expression. The gradient updates are formed by expressions of the form Update(p, grad, α), which represents the computation p = p + α · grad.

The gradient updates of all parameters together with the loss expression are then passed to optimization functions to obtain a list of IR expressions ready for code generation. The optimization functions implement simplification, loop merging, code motion, vectorization, SSA transformation, common sub-expression elimination, inlining, tensor deallocation, and code scheduling.

Generated code  The compiled Java code includes just one class. The class fields include the objects that handle computations such as convolution and activation, and the objects that store tensors such as parameters and gradients. The class includes one method for the training loop and one method for testing.

The generated code includes the corresponding IR expressions in comments to improve readability. For example, the code below shows the Java statements generated for the forward inference of max pooling. Note that the variable names in the comments have no relation to the variable names in the code, as they are independently generated.

1 // val X9 = Pooling(2,2,0,true)(X8)
2 JCudaTensor x16;
3 JCudaTensor x17;
4 x17 = x9;
5 x16 = x18.forward(x17);

It is easy to perform some customization of the generated code, such as changing the number of training iterations or reducing the learning rate at a specified interval. Users can also use the generated code as a component of another application.

Persistency  The compiled Java source includes code to save the trained parameters into files after the training is complete. When the same program, or another program compiled from the same network, starts, it can load the same parameters to resume training or for forward testing.

Workspace  The convolution layers in the compiled Java source share the same workspace. Thus, users can place a limit on the total workspace by making one change. By reducing the workspace and using the memory-efficient mode, users may reduce memory consumption to fit a particular GPU.
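For concreteness, a driver for such a generated class might look like the hypothetical Scala sketch below. Every name in it, including the class Lenet and its method names, is invented for illustration; the actual generated API is a Java class whose shape is determined by the compiler.

object RunLenetDemo extends App {
  class Lenet {                             // stand-in for the single generated class
    def train(iterations: Int): Unit = ()   // generated training loop; saves parameters when done
    def test(iterations: Int): Unit = ()    // generated test method; can reload saved parameters
  }

  val net = new Lenet
  net.train(1000)   // run the training loop
  net.test(10)      // evaluate test accuracy
}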
6 PERFORMANCE

The primary compilation target of DeepDSL is a Java program that runs on an NVIDIA GPU through its CUDA/CuDNN library (DeepDSL has limited support for the CPU, with features sufficient to implement Lenet). DeepDSL can encode well-known networks such as Alexnet, Overfeat, GoogleNet, Vgg, and Deep Residual Networks (Resnet). In this section, we evaluate the performance of DeepDSL against Caffe and Tensorflow using these networks. To be consistent, the DeepDSL, Caffe, and Tensorflow tests all follow the same Caffe prototxt definitions. Specifically, for Alexnet and GoogleNet, we followed the prototxt from Caffe's website (github.com/BVLC/caffe/tree/master/models); for Vgg (Vgg-16), we followed the prototxt from github.com/ruimashita/caffe-train/blob/master/vgg.train_val.prototxt; for Overfeat, we followed the prototxt from IntelLabs (github.com/IntelLabs/Latte.jl/blob/master/benchmarks/overfeat/overfeat.prototxt); and for Deep Residual Network (ResNet-50), we followed the prototxt from the author's website (github.com/KaimingHe/deep-residual-networks/blob/master/prototxt/ResNet-50-deploy.prototxt). The Tensorflow implementations of these networks are either modified from versions in convnet-benchmarks (github.com/soumith/convnet-benchmarks) or created from scratch. Note that there are a couple of differences between the tests of Tensorflow and those of DeepDSL and Caffe. The training data in the Tensorflow tests is generated from random data in memory, while the DeepDSL and Caffe tests load real images from an Lmdb database. Also, the GoogleNet test of Tensorflow only includes the main branch of GoogleNet, while DeepDSL and Caffe train with the full network. All our tests are trained with ImageNet images that have been resized to 224 by 224 (though DeepDSL does support random cropping of images when their sizes are larger than the specified dimensions).

Figure 4: Runtime performance of DeepDSL, Tensorflow, and Caffe (time in milliseconds for 1 forward/backward iteration). DeepDSL and DeepDSL* denote runtime-efficient and memory-efficient mode, respectively. The names of the networks are followed by the batch size. Caffe failed to run GoogleNet (batch 256) and ResNet (batch 64), and Tensorflow failed to run ResNet (batch 64), due to exhaustion of GPU memory.

The tests are run on a server with a single NVIDIA Tesla K40C GPU equipped with 12 gigabytes of memory. The server runs the CentOS 7 Linux distribution. DeepDSL uses the JCuda 0.8.0RC binding that runs against CUDA 8.0.27 (previous CUDA versions such as 6.5 or 7.x can also be used with minor modifications). DeepDSL programs are publicly available at github.com/deepdsl/deepdsl.

The runtime performance of DeepDSL, Tensorflow, and Caffe is compared in Figure 4, where DeepDSL has a significant advantage over Caffe on Alexnet, Overfeat, and Googlenet while being only marginally slower than Caffe on Vgg and ResNet (Deep Residual Network). DeepDSL is also faster than Tensorflow on Alexnet, Googlenet, and ResNet while slightly slower on Overfeat and Vgg.

The memory consumption of DeepDSL, Tensorflow, and Caffe is compared in Figure 5, where DeepDSL uses less memory on Alexnet, Googlenet, and ResNet, while Caffe uses less memory on Overfeat and Vgg. DeepDSL uses significantly less memory for Googlenet and ResNet, where Caffe runs out of memory for Googlenet at batch size 256 and for ResNet at batch size 64. DeepDSL uses less memory than Tensorflow in all tests except Vgg. Tensorflow also ran out of memory for ResNet at batch size 64. It is unclear why Tensorflow uses a similar amount of memory for Overfeat with batch sizes 128 and 256.

In the tests, DeepDSL programs are run both in runtime-efficient mode, which caches tensor objects, and in memory-efficient mode (denoted by DeepDSL*), which deallocates tensor objects as soon as possible.
DeepDSL* uses 10 to 30% less memory, with a similar percentage of runtime overhead, except for Vgg and Googlenet, where the runtime overhead is relatively smaller than the memory saving. DeepDSL also lets CuDNN pick the convolution algorithms with maximum performance. In Overfeat (batch size 128), out of the 4290 megabytes of GPU memory consumed, more than 2700 megabytes are for convolution workspace. While Caffe uses less memory in this test, it also runs much slower.

Figure 5: Peak GPU memory use (in megabytes) of DeepDSL, Tensorflow, and Caffe during training. DeepDSL and DeepDSL* denote runtime-efficient and memory-efficient mode, respectively. Caffe ran out of GPU memory for Googlenet (batch 256) and ResNet (batch 64). Tensorflow ran out of memory for ResNet (batch 64).

Among all tests, DeepDSL either outperforms Caffe by a large margin or uses significantly less memory, with Vgg being the only exception, where Caffe uses slightly less time and memory. DeepDSL also has competitive runtime performance when compared with Tensorflow.

As a side note, while running DeepDSL requires little setup, installing libraries such as Caffe and Tensorflow requires a list of dependencies and long compilation sessions. Consequently, we skipped testing with Torch 7 due to time limitations.

7 RELATED WORK

In this section, we review some popular tools, namely Torch7, Theano, Caffe, TensorFlow, and CNTK, as well as newer ones such as Chainer Tokui et al. (2015) and MXNet Chen et al. (2015b).

Torch7 Collobert et al. (2011) uses the Lua language for integration with C programs and achieves C-like performance. It has a large set of optimized routines to support CPU, GPU, mobile, and FPGA backends. Theano Bergstra et al. (2010), hosted in Python, allows users to define symbolic variables and functions (using NumPy van der Walt et al. (2011)) to encode DL networks, and it compiles the symbolic expressions to C. Theano performs optimizations such as normalizing mathematical expressions, numerical stabilization, and code specialization during compilation, and the target code can run on CPU or GPU devices. Caffe Jia et al. (2014) constructs a graph for a DL network by connecting the layers with the 4D arrays that store tensors. Caffe separates its DL network model representation (using Protocol Buffers Google) from the actual model parameter calculations. With its layered structure, Caffe computes the memory needed for each layer and reserves memory accordingly. TensorFlow Abadi et al. (2016) shares largely common design paradigms with Caffe. Its core is written in C++, and its computation is described with a graph where tensors and layers are alternately arranged. Unlike Caffe's tensor, TensorFlow's tensor is a typed multi-dimensional array and is persistently mutable. Like TensorFlow and Caffe, CNTK describes a network with a configuration file. CNTK can encode arbitrary computational networks, and it can map computation onto multiple GPUs across multiple machines by assigning each computation node to a particular CPU/GPU device.
Compared to the "define-and-run" paradigm (adopted by Torch7, Theano, and Caffe), Chainer Tokui et al. (2015) follows a "define-by-run" pattern, which essentially allows modifying the control flow during the execution of a computational graph. MXNet Chen et al. (2015b) provides both declarative and imperative programming styles and supports multiple languages by embedding into multiple host languages and unifying the execution with one backend engine.

The major difference between DeepDSL and the above tools is that DeepDSL is fully abstract until code generation. This means that DeepDSL's intermediate representation can be compiled to different languages or to run on different platforms. While the current compilation target of DeepDSL is Java, targeting a different language mainly involves building an interface library to call CUDA routines, while the optimization components of DeepDSL remain the same. This separation between optimization and code generation also means that we can apply generic optimization techniques at the IR level without worrying about the underlying data structures, such as the representation of tensors or how the layers are connected. In fact, the optimization of DeepDSL involves nothing specific to deep neural networks, since its passes are mostly general compilation techniques.

Note that while Theano and DeepDSL are similar in the way that DSL expressions are optimized and transformed, there are two important differences that make DeepDSL more efficient and flexible. The first is that while Theano expressions are treated as graphs during optimization, DeepDSL expressions are optimized in two phases. The first phase is at the expression level, where the training loss and the parameter gradients go through the process of simplification, loop merging, code motion, and vectorization. In the second phase, DeepDSL expressions are reduced to static single assignment form for additional optimization such as common subexpression elimination, code scheduling, inlining of in-place computation, and tensor deallocation.
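To illustrate the kind of generic, network-agnostic optimization involved, here is a compact Scala sketch of common-subexpression elimination over a toy expression type. It is illustrative only; DeepDSL's real IR and passes are more elaborate, and the types here are invented.

object CseDemo extends App {
  sealed trait Expr
  case class Ref(name: String)                 extends Expr
  case class Bin(op: String, l: Expr, r: Expr) extends Expr

  // Assign each structurally distinct subexpression one temporary; repeated
  // subexpressions reuse the existing temporary instead of being recomputed.
  def cse(root: Expr): Vector[String] = {
    val names = scala.collection.mutable.LinkedHashMap[Expr, String]()
    val code  = scala.collection.mutable.ArrayBuffer[String]()
    def go(t: Expr): String = t match {
      case Ref(n) => n
      case b: Bin =>
        names.get(b) match {
          case Some(v) => v                    // already computed once: reuse it
          case None =>
            val (lv, rv) = (go(b.l), go(b.r))  // emit code for the operands first
            val v = s"t${names.size}"
            names(b) = v
            code += s"$v = $lv ${b.op} $rv"
            v
        }
    }
    go(root)
    code.toVector
  }

  val xy = Bin("*", Ref("x"), Ref("y"))    // (x * y) occurs twice below
  cse(Bin("+", xy, xy)).foreach(println)   // t0 = x * y, then t1 = t0 + t0
}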
The second is that DeepDSL generates target code using a single-pass generator (about 1200 lines of code) that prints Java source code as strings to a file. The input of the generator is DeepDSL expressions, which are completely independent of the generated code. The generated Java code is high-level and human readable, with a simple Java API that allows customization. This clean separation between DSL expressions and target code also allows independent evolution of the DSL optimization and the target-code generation. In contrast, the code generation of Theano is embedded in its functions for low-level computation and is tied to C code that is not readable to users.

8 CONCLUSION

We have developed a domain specific language, DeepDSL, that compiles to Java source programs for deep learning. The compiled DeepDSL programs are very easy to use and extend, as their primary dependencies are just the JCuda and CUDA libraries. DeepDSL programs are also efficient; their runtime performance and memory consumption are significantly better than those of Caffe and Tensorflow on some DL networks. DeepDSL performs static analysis for early error detection and provides a readable intermediate representation and a memory consumption analysis. DeepDSL allows compact encoding of complex networks, and since it is based on Scala, a strongly typed language, writing DeepDSL programs is less error prone than writing in dynamic languages such as Python.

While the compiled DeepDSL programs are efficient, DeepDSL itself is not optimized. Though compiling simpler networks such as Alexnet takes a few seconds, the compilation of complex networks such as ResNet can take a few minutes. As future work, we plan to optimize DeepDSL to improve its compilation efficiency. Also, while the memory-efficient mode of DeepDSL can reduce GPU memory consumption, it may not be enough for memory-intensive networks such as Vgg. We therefore also plan to implement GPU memory virtualization by paging out tensors that are not immediately needed.

| r1kZu5WEl | 8: Top 50% of accepted papers, clear accept | This paper presents a domain specific language for the specification of deep learning models. The intermediate representation offers many possibilities for optimization and allows the user to focus on either speed or memory efficiency.
The paper is well-written and makes conclusive statements and comparisons. The experiments cover five fundamentally different CNN architectures, each evaluated for two batch sizes. They include the two competing frameworks Tensorflow and Caffe and show convincing performance. Overall, the paper is well structured. | 3: The reviewer is fairly confident that the evaluation is correct |
Bks8cPcxe | ICLR.cc/2017/conference | 2017 | DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning | ["Tian Zhao", "Xiao Bing Huang", "Yu Cao"] | In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the-art tools, such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their applicable domains, are programming libraries with fixed user interfaces, internal representations, and execution environments. This makes it difficult to implement portable and customized DL applications.
In this paper, we present DeepDSL, a domain specific language (DSL) embedded in Scala, that compiles deep networks written in DeepDSL to Java source code. DeepDSL provides
(1) intuitive constructs to support compact encoding of deep networks;
(2) symbolic gradient derivation of the networks;
(3) static analysis for memory consumption and error detection; and
(4) DSL-level optimization to improve memory and runtime efficiency.
DeepDSL programs are compiled into compact, efficient, customizable, and portable Java source code, which operates the CUDA and CuDNN interfaces running on NVIDIA GPUs via a Java Native Interface (JNI) library. We evaluated DeepDSL with a number of popular DL networks. Our experiments show that the compiled programs have very competitive runtime performance and memory efficiency compared to the existing libraries. | ["Deep learning", "Applications", "Optimization"] |

1 INTRODUCTION

Multimedia is increasingly becoming the "biggest big data" as the most important and valuable source for insights and information Chen et al. (2015a). Recently, a new set of machine learning algorithms named "Deep Learning" (DL) LeCun et al. (2015), which aims at learning multiple levels of representation and abstraction that help infer knowledge from multimedia data (e.g., text, image, audio, and video), is making astonishing gains in machine vision, speech recognition, multimedia analysis, and drug design.

However, current tools, such as Theano Bergstra et al. (2010), Torch7 Collobert et al. (2011), Caffe Jia et al. (2014), Computational Network Toolkit (CNTK) Agarwal et al. (2014), and TensorFlow Abadi et al. (2016), while efficient in their applicable domains, are essentially application libraries with some inherent limitations.

As with all programming libraries, the DL libraries have fixed bindings for key data structures such as tensors and tensor-related computations. Users have to adhere to these data structures, which limits their ability to apply application-specific optimizations or to port the libraries to target runtime platforms. The internal representation of their control-flow logic is opaque to users. For example, TensorFlow and CNTK use directed acyclic graphs to represent the DL network computation and generate runtime binaries from the graphs. However, these graphs are not designed for user-level access, which limits the runtime platforms of the DL applications to what the libraries provide.

In general, the current libraries have to be built against the specific platforms that they are designed for, which can be difficult for platforms such as Windows.
Also, changing the implementation of specific types of layers or data structures is very challenging without a thorough understanding of the underlying implementation. This limits the portability and reusability of these libraries.

Figure 1: Basic workflow of DeepDSL.

To address these limitations, we present DeepDSL, a domain specific language embedded in Scala, for developing DL applications. DeepDSL allows users to define DL networks as tensor functions. Unlike the existing DL libraries, DSL tensors are not built-in entities. Instead, they are defined as indexed scalar expressions. This exposes tensor-related computation at the DSL level. As a result, the symbolic gradient derivation of the DL network is fully abstract, and the resulting DSL program allows compiler-based optimizations such as code motion and common sub-expression elimination.

The DeepDSL compiler translates the optimized DSL program into a Java source program that is compact, efficient, customizable, and portable. The generated Java source only requires a small Java library, JCuda (http://www.jcuda.org), that calls the NVIDIA CUDA interface using JNI. Since the JVM is supported on all major operating systems, the generated Java source can run on any CUDA-enabled platform. Also, since the generated Java source is compact and human readable, users can customize it easily through an editor or an IDE such as eclipse (http://www.eclipse.org). The generated Java source automatically saves the learned parameters into files after a training period is over. When the user starts the program again (perhaps after adjusting some parameters such as momentum and learning rate), it automatically loads the saved parameters and continues the training from where it stopped at the previous execution. The code also supports loading parameters trained with different data for fine-tuning purposes.

DeepDSL supports static analysis of the DSL program to detect network design errors, such as mismatching tensor dimensions, before compiling the DSL program into Java source. It statically analyzes the memory consumption at each step of the computation and produces a table detailing the memory usage that would occur at runtime, which includes the memory for feature maps, gradient maps, parameter weights, and convolution workspace. It also uses the static information to reschedule computation so that tensor memory can be freed as early as possible to reduce memory consumption at runtime. Such processing has been demonstrated to provide great benefit. For example, DeepDSL continues to run well under the GPU memory limit on the testing server with a single GPU when the batch size of ResNet is increased from 32 to 64, while both Caffe and Tensorflow fail due to out-of-memory exceptions.

DeepDSL is available at https://github.com/deepdsl/deepdsl.

The rest of the paper is organized as follows. We give an overview of DeepDSL in Section 2 and explain the DSL syntax using examples in Section 3. We discuss the intermediate representation in Section 4 and code generation in Section 5. We present details of performance evaluation using DeepDSL in Section 6 and related work in Section 7. We conclude the paper in Section 8.

2 OVERVIEW

DeepDSL directly encodes the mathematical representation of DL networks, where each layer is represented as a tensor function.
The entire network is then represented as a composition of these functions. DeepDSL symbolically derives the partial derivatives of the tensor functions with respect to tensor variables so that the backward gradients of network parameters are generated automatically.

A high-level overview of DeepDSL is shown in Figure 1. A DeepDSL program is compiled in several stages. At the first stage, the backward gradients of deep networks are derived symbolically to become the intermediate representation (IR). The IR expressions are in turn passed through a series of simplification and optimization steps at the second stage. At the third stage, the DeepDSL compiler performs an SSA (Static Single Assignment) transformation of the optimized IR to break down complex expressions. Redundant computation is eliminated at this stage, and the resulting expressions are reordered to optimize memory usage. Memory deallocation and in-place computation are also scheduled at this stage. Lastly, the finalized IR expressions are translated to Java source code.

DeepDSL supports two modes of computation: memory efficient or runtime efficient. In the memory-efficient mode, tensor memory on the GPU is dynamically allocated and deallocated, which might decrease runtime performance. In the runtime-efficient mode, tensor memory on the GPU is reused and not deallocated until the end of the training. In this mode, more memory may be used but with greater runtime performance. To make the switch, the user only needs to flip a flag in the generated Java source. The memory-efficient mode can be used for machines with limited GPU memory. Further memory reduction can be achieved by placing a limit on the (convolution) workspace memory.

3 SYNTAX

Figure 2 shows the complete implementation for compiling a program to train and test Lenet LeCun et al. (1998).

1  val K = 10                                          // # of classes
2  val N = 500; val C = 1; val N1 = 28; val N2 = 28    // batch size, channel, and x/y size
3
4  // Specifying training (and test) dataset
5  val y = Vec._new(Mnist, "label", "Y", N)            // labels
6  val x = Vec._new(Mnist, "image", "X", N, C, N1, N2) // images
7
8  val cv1 = CudaLayer.convolv("cv1", 5, 20)           // kernel size (5,5), output channel 20
9  val cv2 = CudaLayer.convolv("cv2", 5, 50)
10 val mp = CudaLayer.max_pool(2)                      // max pooling, kernel 2 stride 2
11 val flat = Layer.flatten(4, 1)                      // flatten a 4-D tensor from axis 1 to 3
12 val f = Layer.full("fc1", 500)                      // fully connected layer, output 500
13 val f2 = Layer.full("fc2", K)
14 val relu = CudaLayer.relu(2)                        // 2-D ReLU activation
15 val softmax = CudaLayer.softmax                     // softmax
16
17 // o is a left-associative operator for function composition
18 val network = f2 o relu o f o flat o mp o cv2 o mp o cv1
19
20 val x1 = x.asCuda                                   // load x to GPU
21 val y1 = y.asIndicator(K).asCuda                    // turn each label into an indicator vector
22 val c = (Layer.log_loss(y1) o softmax o network) (x1) // training loss
23 val p = (Layer.precision(y1) o network) (x1)        // test accuracy
24
25 val param = c.freeVar.toList                        // parameters to be trained
26
27 // output file, train and test iteration, learn rate, momentum, decay, gradient cropping (0 means none)
28 val solver = Train("lenet", 1000, 10, 0.01f, 0.9f, 0.0005f, 0)
29
30 val loop = Loop(c, p, (x, y), param, solver)        // training and testing loop
31 cudnn_gen.print(loop)                               // generate Java source program

Figure 2: DeepDSL code for training and testing Lenet.
Since DeepDSL is embedded in Scala, the program is in Scala syntax, and it can be compiled and executed with a programming tool such as eclipse. This program consists of variable declarations of the form val x = e, where val starts a declaration for the variable x and assigns it the value of e.

Lines 5 and 6 declare the tensors that represent labels and images for the training data. We also use the same variables for testing, since the DSL compiles the same variables into different code for training and testing.

Lines 8-15 declare the tensor functions that represent the layers in the network. Most of the layers are self-explanatory except val flat = Layer.flatten(4, 1), which is used to convert the 4-D tensor returned by the last pooling layer into a 2-D tensor for the next fully connected layer.

Line 18 constructs the network as function compositions using the operator o, which is left associative. For example, f2 o relu o f should be read as (f2 o relu) o f. A composed function such as network is still a function.

Line 22 defines the expression that represents the loss of the network when applied to the training data. Line 23 defines the testing accuracy of the trained network.

Line 25 extracts the parameters such as weights and biases from the loss expression. Lines 28-31 define the solver object, pass it to the loop object for training and testing, and then generate the Java source code.

Layer reuse  Since each layer is a tensor function, layers such as ReLU and pooling that do not contain parameters can simply be reused in a network. For example, in the following definition of Alexnet, relu2 (2-dimensional), relu (4-dimensional), pool (max pooling), drop (dropout), and lrn (local response normalization) are reused.

1 val network = full8 o
2               drop o relu2 o full7 o
3               drop o relu2 o full6 o flat o
4               pool o relu o cv5 o
5               relu o cv4 o
6               relu o cv3 o
7               pool o lrn o relu o cv2 o
8               pool o lrn o relu o cv1

Layer function reuse simplifies the definitions of deep networks. For Alexnet, only 5 convolution layers and 3 fully connected layers need to be defined separately. Note that the above definition can be written in just one line; the line breaks are only for clarity.

Network reuse  For complex networks such as Googlenet, we can define reusable subnets to achieve compact definitions.
| rJ02NpS4x | 7: Good paper, accept | This paper presents and evaluates a Scala-based deep learning framework called "DeepDSL," describing the language's syntactic and performance benefits with respect to existing frameworks.
Pros:
The use of Scala is unique among deep learning frameworks, to my knowledge, making this framework interesting for Scala users. The fact that Scala compiles to Java and therefore cross-platform support comes for free is also nice.
The ability to inspect memory information as shown in Figure 3 is interesting and potentially useful for large networks or situations where memory is limited.
DeepDSL compares favorably with existing frameworks in terms of memory use and speed for many common convolutional network architectures.
Cons:
There appears to be special privileged handling of parameters, gradients, and updates in the compilation process itself (as in Caffe), rather than having gradients/updates as a normal part of the full user-defined computation graph (as in Theano + TensorFlow). This makes certain applications, such as RNNs (which require parameter sharing) and GANs (which require gradients wrt multiple objectives), impossible to implement in DeepDSL without further extension of the underlying API. (Note: I might be wrong about this -- and please correct me if I am -- but all the examples in the paper are nets trained by gradient descent on a single objective, and do not share parameters or access gradients directly.)
The paper repeatedly refers to line counts from the verbose Protobuf-based low-level representation of networks in Caffe to demonstrate the compactness of its own syntax. This is misleading as Caffe has officially supported a compact network definition style called “NetSpec” for years -- see a ~15 line definition of AlexNet [1]. Given that, Protobuf is essentially an intermediate representation for Caffe (as with TensorFlow), which happens to have a human-readable text format.
DeepDSL is not especially novel when compared with existing frameworks, which is not a problem in and of itself, but some statements misleadingly or incorrectly oversell the novelty of the framework. Some examples:
“This separation between network definition and training is an unique advantage of DeepDSL comparing to other tools.” This separation is not unique -- it’s certainly a feature of Caffe where the network definition is its own file, and can be attained in TensorFlow as well (though it’s not the default workflow there).
“The difference [between our framework and Theano, TensorFlow, etc.] is that we do not model deep networks as ‘networks’ but as abstract ‘functions’.” There is no notion of a “network” in Theano or TensorFlow (not sure about the others) either -- there are only functions, like in DeepDSL. I asked about this statement, and the response didn’t convince me otherwise. The counterexample given was that in TensorFlow the number of input channels needs to be specified separately for each convolution. This is only true using the low-level API and can easily be worked around with higher-level wrappers like TensorFlow Slim -- e.g., see the definition of AlexNet [2]. It may be true that DeepDSL is more “batteries included” for writing compact network definitions than these other frameworks, but the paper’s claims seem to go beyond this.
Overall, the DeepDSL framework seems to have real value in its use of Scala and its memory/speed efficiency as demonstrated by the experiments, but the current version of the paper contains statements that overclaim novelty in ways that are misleading and unfair to existing frameworks. I will consider upgrading my rating if these statements are removed or amended to be more technically precise.
[1] https://github.com/BVLC/caffe/blob/master/examples/pycaffe/caffenet.py#L24
[2] https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/nets/alexnet.py#L92
=====================
Update: the authors have revised their paper to address the concerns that I considered grounds for rejection in my review. I've upgraded my rating from 5 (below threshold) to 7 (good paper, accept). | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
Bks8cPcxe | ICLR.cc/2017/conference | 2017 | DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning | ["Tian Zhao", "Xiao Bing Huang", "Yu Cao"] | In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the-art tools, such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their applicable domains, are programming libraries with fixed user interfaces, internal representations, and execution environments. This makes it difficult to implement portable and customized DL applications.
In this paper, we present DeepDSL, a domain specific language (DSL) embedded in Scala, that compiles deep networks written in DeepDSL to Java source code. DeepDSL provides
(1) intuitive constructs to support compact encoding of deep networks;
(2) symbolic gradient derivation of the networks;
(3) static analysis for memory consumption and error detection; and
(4) DSL-level optimization to improve memory and runtime efficiency.
DeepDSL programs are compiled into compact, efficient, customizable, and portable Java source code, which invokes the CUDA and CUDNN interfaces on an NVIDIA GPU via a Java Native Interface (JNI) library. We evaluated DeepDSL with a number of popular DL networks. Our experiments show that the compiled programs have very competitive runtime performance and memory efficiency compared to the existing libraries.
Also, changing the implementation of specific types of layers or data structures is very challenging without a thorough understanding of the underlying implementation. This limits the portability and reusability of these libraries.

To address these limitations, we present DeepDSL, a domain specific language embedded in Scala, for developing DL applications. DeepDSL allows users to define DL networks as tensor functions. Unlike the existing DL libraries, DSL tensors are not built-in entities. Instead, they are defined as indexed scalar expressions. This exposes tensor related computation at the DSL level. As a result, the symbolic gradient derivation of the DL network is fully abstract and the resulting DSL program allows compiler-based optimizations such as code motion and common sub-expression elimination.

The DeepDSL compiler translates the optimized DSL program into a Java source program that is compact, efficient, customizable, and portable. The generated Java source only requires a small Java library, JCuda (http://www.jcuda.org), that calls the NVIDIA CUDA interface using JNI. Since the JVM is supported on all major operating systems, the generated Java source can run on any CUDA enabled platform. Also, since the generated Java source is compact and human readable, users can customize it easily through an editor or an IDE such as eclipse (http://www.eclipse.org). The generated Java source automatically saves the learned parameters into files after a training period is over. When the user starts the program again (perhaps after adjusting some parameters such as momentum and learning rate), it automatically loads the saved parameters and continues the training from where it stopped at the previous execution. The code also supports loading parameters trained with different data for fine tuning purposes.

DeepDSL supports static analysis of the DSL program to detect network design errors, such as mismatching tensor dimensions, before compiling the DSL program into Java source. It statically analyzes the memory consumption at each step of the computation and produces a table detailing the memory usage that would occur at runtime, which includes the memory for feature maps, gradient maps, parameter weights, and convolution workspace. It also uses the static information to reschedule computation so that tensor memory can be freed as early as possible to reduce memory consumption at runtime. Such processing has demonstrated great benefit. For example, DeepDSL continues to run well under the GPU memory limit on the testing server with a single GPU when the batch size of ResNet is increased from 32 to 64, while both Caffe and Tensorflow fail due to out-of-memory exceptions.

DeepDSL is available at https://github.com/deepdsl/deepdsl.

The rest of the paper is organized as follows. We give an overview of DeepDSL in Section 2 and explain the DSL syntax using examples in Section 3. We discuss the intermediate representation in Section 4 and code generation in Section 5. We present details of the performance evaluation using DeepDSL in Section 6 and related work in Section 7. We conclude the paper in Section 8.

2 OVERVIEW

DeepDSL directly encodes the mathematical representation of DL networks, where each layer is represented as a tensor function.
The entire network is then represented as a composition of these functions. DeepDSL symbolically derives the partial derivatives of the tensor functions with respect to tensor variables so that the backward gradients of the network parameters are generated automatically.

A high-level overview of DeepDSL is shown in Figure 1.

[Figure 1: Basic workflow of DeepDSL.]

A DeepDSL program is compiled in several stages. At the first stage, the backward gradients of deep networks are derived symbolically to become the intermediate representation (IR). The IR expressions are in turn passed through a series of simplification and optimization steps at the second stage. At the third stage, the DeepDSL compiler performs an SSA (Static Single Assignment) transformation of the optimized IR to break down complex expressions. Redundant computation is eliminated at this stage and the resulting expressions are reordered to optimize memory usage. Memory deallocation and in-place computation are also scheduled at this stage. Lastly, the finalized IR expressions are translated to Java source code.

DeepDSL supports two modes of computation: memory efficient or runtime efficient. In the memory efficient mode, tensor memory in GPU will be dynamically allocated and deallocated, which might decrease runtime performance. In the runtime efficient mode, tensor memory in GPU is reused and not deallocated until the end of the training. In this mode, more memory may be used but with greater runtime performance. To make the switch, the user only needs to flip a flag in the generated Java source. The memory efficient mode can be used for machines with limited GPU memory. Further memory reduction can be achieved by placing a limit on the (convolution) workspace memory.

3 SYNTAX

Figure 2 shows the complete implementation for compiling a program to train and test Lenet (LeCun et al., 1998).

1  val K = 10 // # of classes
2  val N = 500; val C = 1; val N1 = 28; val N2 = 28 // batch size, channel, and x/y size
3
4  // Specifying training (and test) dataSet
5  val y = Vec._new(Mnist, "label", "Y", N) // labels
6  val x = Vec._new(Mnist, "image", "X", N, C, N1, N2) // images
7
8  val cv1 = CudaLayer.convolv("cv1", 5, 20) // kernel size (5,5), output channel 20
9  val cv2 = CudaLayer.convolv("cv2", 5, 50)
10 val mp = CudaLayer.max_pool(2) // max pooling, kernel 2 stride 2
11 val flat = Layer.flatten(4, 1) // flatten a 4-D tensor from axis 1 to 3
12 val f = Layer.full("fc1", 500) // fully connected layer, output 500
13 val f2 = Layer.full("fc2", K)
14 val relu = CudaLayer.relu(2) // 2-D ReLU activation
15 val softmax = CudaLayer.softmax
16
17 // o is a left-associative operator for function composition
18 val network = f2 o relu o f o flat o mp o cv2 o mp o cv1
19
20 val x1 = x.asCuda // load x to GPU
21 val y1 = y.asIndicator(K).asCuda // turn each label into an indicator vector
22 val c = (Layer.log_loss(y1) o softmax o network) (x1) // training loss
23 val p = (Layer.precision(y1) o network) (x1) // test accuracy
24
25 val param = c.freeVar.toList // parameters to be trained
26
27 // output file, train and test iteration, learn rate, momentum, decay, gradient cropping (0 means none)
28 val solver = Train("lenet", 1000, 10, 0.01f, 0.9f, 0.0005f, 0)
29
30 val loop = Loop(c, p, (x, y), param, solver) // training and testing loop
31 cudnn_gen.print(loop) // generate Java source program

Figure 2: DeepDSL code for training and testing Lenet.
Since DeepDSL is embedded in Scala, the program uses Scala syntax and can be compiled and executed with a programming tool such as eclipse. This program consists of variable declarations of the form val x = e, where val starts a declaration for the variable x and assigns it the value of e.

Lines 5 and 6 declare the tensors that represent labels and images for the training data. We also use the same variables for testing, since the DSL compiles the same variables into different code for training and testing.

Lines 8–15 declare the tensor functions that represent the layers in the network. Most of the layers are self-explanatory except val flat = Layer.flatten(4, 1), which is used to convert the 4-D tensor returned by the last pooling layer into a 2-D layer for the next fully connected layer.

Line 18 constructs the network as function compositions using the operator o, which is left associative. For example, f2 o relu o f should be read as (f2 o relu) o f. A composed function such as network is still a function.

Line 22 defines the expression that represents the loss of the network when applied to the training data. Line 23 defines the testing accuracy of the trained network.

Line 25 extracts the parameters such as weights and biases from the loss expression. Lines 28–31 define the solver object, pass it to the loop object for training and testing, and then generate the Java source code.

Layer reuse. Since each layer is a tensor function, the layers such as ReLU and pooling that do not contain parameters can simply be reused in a network. For example, in the following definition of Alexnet, relu2 (2 dimensional), relu (4 dimensional), pool (max pooling), drop (drop out), and lrn (local response normalization) are reused.

1 val network = full8 o
2   drop o relu2 o full7 o
3   drop o relu2 o full6 o flat o
4   pool o relu o cv5 o
5   relu o cv4 o
6   relu o cv3 o
7   pool o lrn o relu o cv2 o
8   pool o lrn o relu o cv1

Layer function reuse simplifies the definitions of deep networks. For Alexnet, only 5 convolution layers and 3 fully connected layers need to be defined separately. Note that the above definition can be written in just one line; the line breaks are only for clarity.

Network reuse. For a complex network such as Googlenet, we can define reusable subnets to achieve compact definitions.
For example, the Scala method inception below returns a tensor function that represents an inception subnet in Googlenet.

1  val w = Param.xavier // Xavier initialization for weight
2  val b0 = Param.const(0, 2, 0) // constant 0 for bias, learn rate/decay multiplier 2 and 0
3  val b02 = Param.const(0.2f, 2, 0) // constant 0.2 for bias
4  val ipool = CudaLayer.max_pool(3, 1, 1) // max pooling kernel size, stride, and padding
5
6  def inception(n: Int) = {
7    // convolution name, kernel size, channel, stride, padding, weight and bias configuration
8    val icv1 = CudaLayer.convolv(s"cv${n}1", 1, 64, 1, 0, w, b02)
9    val icv2 = CudaLayer.convolv(s"cv${n}2", 1, 96, 1, 0, w, b02)
10   val icv3 = CudaLayer.convolv(s"cv${n}3", 3, 128, 1, 1, w, b02)
11   val icv4 = CudaLayer.convolv(s"cv${n}4", 1, 16, 1, 0, w, b02)
12   val icv5 = CudaLayer.convolv(s"cv${n}5", 5, 32, 1, 2, w, b02)
13   val icv6 = CudaLayer.convolv(s"cv${n}6", 1, 32, 1, 0, w, b02)
14
15   val p = Vec._new(4) // a 4-dimensional tensor variable
16
17   // a tensor function with parameter p
18   VecFun(p, CudaLayer.concat( (relu o icv1)(p), // concatenation of 4 subnets connected to p
19     (relu o icv3 o relu o icv2)(p),
20     (relu o icv5 o relu o icv4)(p),
21     (relu o icv6 o ipool)(p) )
22   )
23 }

Using the inception method, we can define three subnets that are used to define the test accuracy p (line 6 below) of the main branch of Googlenet.

1 val network3 = full7 o flat o drop o pool7 o inception(9) o inception(8) o pool o inception(7)
2 val network2 = inception(6) o inception(5) o inception(4)
3 val network1 = inception(3) o pool o inception(2) o inception(1) o
4   pool o lrn o relu o cv3 o relu o cv2 o lrn o pool o relu o cv1
5
6 val p = Layer.precision(y1)(network3(network2(network1(x1)))) // accuracy at main branch

The three subnets are also used to define the training loss c (line 16 below), which adds up the losses of the three branches of Googlenet.

1  def branch(n: Int) = { // a subnet reused in the two side branches of Googlenet
2    val cv = CudaLayer.convolv(s"b${n}cv", 1, 128, 1, 0, w, b02)
3    val f1 = Layer.full(s"b${n}fc1", 1024, w, b02)
4    val f2 = Layer.full(s"b${n}fc2", K, w, b0)
5    f2 o drop2 o relu2 o f1 o flat o relu o cv o bpool
6  }
7  val stage2 = { // Vec2ScalarFun defines a function from tensor to scalar
8    val p = Vec._new(4)
9    Vec2ScalarFun(p, softmax_loss(network3(p)) + softmax_loss(branch(2)(p)) * Real(0.3f, "loss2"))
10 }
11 val stage1 = { // Real(0.3f, "loss1") is a named constant of value 0.3
12   val p = Vec._new(4)
13   Vec2ScalarFun(p, stage2(network2(p)) + softmax_loss(branch(1)(p)) * Real(0.3f, "loss1"))
14 }
15
16 val c = (stage1 o network1)(x1) // training loss of the three branches

Other than some definitions of shared layers such as activation, pooling, normalization, drop out, and softmax loss, this is the complete definition of Googlenet.

This compact style of definition is similar to that of Theano, Tensorflow, Torch, and Mxnet. In the example, we used two types of functions, VecFun and Vec2ScalarFun, which model computation that takes a tensor as input and returns a tensor or a scalar respectively. These functions can be composed or applied to arguments. When applied, they are similar to functions in Theano, Tensorflow, and Mxnet.
When composed, they are similar to the sequential container of Torch.

4 INTERMEDIATE REPRESENTATION

The unique advantage of DeepDSL is that it is entirely high-level, so it permits static analysis of the deep networks for error checking, memory analysis, optimization, and code generation.

While the DeepDSL compiler is implemented in Scala, it has no runtime dependency on Scala code at all. The whole purpose of using Scala as the host language for DeepDSL is that Scala is a strongly typed language with flexible syntax. As a result, the syntax of DeepDSL can resemble that of a standalone DSL without having a parser. After taking symbolic gradients, a DeepDSL program is immediately evaluated to an intermediate representation (IR), which is essentially an abstract syntax tree (AST). The DeepDSL compiler analyzes its IR expressions by performing a series of optimization and simplification steps. During this process, DeepDSL checks the compatibility of the layers, infers concrete dimensions for variables, removes duplicated computation, and optimizes IR expressions for code generation.

The IR expressions of DeepDSL are also abstract and human readable. For example, Figure 3 shows a portion of the IR expressions for Lenet, where the first column shows an IR expression that represents a single-step computation, the second column shows the dimensions of the tensor being computed if applicable, the third column shows the memory usage of that tensor, the fourth column shows the current memory consumption if memory is dynamically allocated and deallocated, and the last column shows the memory consumption if memory is reused instead of deallocated.

An IR expression such as the one at line 14 is for GPU memory deallocation. The DeepDSL compiler analyzes the dependencies of the IR expressions, reorders them, and determines the earliest point where a tensor can be freed. For example, the last use of the tensor X18 is at line 13, so it can be freed next. The tensor X8 cannot be freed until much later, since it is used at line 26.

If we compile IR expressions such as line 14 to actual memory deallocation, then the maximum dynamic memory consumed peaks at line 27, at about 59 MB. However, frequent memory allocation and deallocation on an NVIDIA GPU reduces runtime performance. Therefore, the DeepDSL runtime library (implemented in Java) supports memory reuse instead of deallocation. The DeepDSL runtime maintains a pool of allocated memory blocks; when a tensor is freed, its memory is returned to the pool, and when a tensor is allocated, the runtime first tries to find a suitable block in the pool. With memory reuse, the memory consumption always peaks at the last line, at about 77 MB.
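The memory-reuse strategy can be illustrated with a small Scala sketch. The names and types below are hypothetical stand-ins for the Java runtime's pool, not its actual API, and the real pool manages CUDA device memory rather than plain objects.

import scala.collection.mutable

// A device buffer; real code would wrap a CUDA pointer of this size.
final case class Buffer(sizeInFloats: Long)

object TensorPool {
  private val free = mutable.Map.empty[Long, mutable.Stack[Buffer]]

  // Allocation first looks for a previously freed buffer of the same size.
  def alloc(size: Long): Buffer =
    free.get(size).filter(_.nonEmpty).map(_.pop())
      .getOrElse(Buffer(size)) // otherwise allocate fresh (cudaMalloc in real code)

  // "Deallocation" returns the buffer to the pool instead of freeing it.
  def release(b: Buffer): Unit =
    free.getOrElseUpdate(b.sizeInFloats, mutable.Stack.empty[Buffer]).push(b)
}

val x18 = TensorPool.alloc(500L * 10) // logits X18 in Figure 3
TensorPool.release(x18)               // IR step Dealloc(X18)
val x20 = TensorPool.alloc(500L * 10) // X20 can reuse X18's buffer

Under this scheme no memory is returned to the driver until training ends, which is why the reuse column in Figure 3 never decreases.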
Note that the memory figures above are for storing intermediate results such as gradients; the static memory allocated for parameters and convolution workspace is calculated separately.

1  IR expression                                   Dimensions     Mem         Current     Total w/o dealloc
2  -----------------------------------------------------------------------------------------------------
3  val X7 = Cuda(X)                                500 1 28 28    1.568000    1.568000    1.568000
4  val X8 = Convolv(1,0)(X7,cv1_W,cv1_B)           500 20 24 24   23.040001   24.608000   24.608000
5  val X9 = Pooling(2,2,0,true)(X8)                500 20 12 12   5.760000    30.368000   30.368000
6  val X10 = Convolv(1,0)(X9,cv2_W,cv2_B)          500 50 8 8     6.400000    36.768002   36.768002
7  val X11 = Pooling(2,2,0,true)(X10)              500 50 4 4     1.600000    38.368000   38.368000
8  val X12 = (X11[1><3])(i | @) * (fc1_W)(j | @)   500 500        1.000000    39.368000   39.368000
9  val X14 = (X12 + (i) => fc1_B)                  500 500        0.000000    39.368000   39.368000
10 val X15 = ReLU()(X14)                           500 500        0.000000    39.368000   39.368000
11 val X16 = (X15)(i | @) * (fc2_W)(j | @)         500 10         0.020000    39.388000   39.388000
12 val X18 = (X16 + (i) => fc2_B)                  500 10         0.000000    39.388000   39.388000
13 val X19 = Softmax()(X18)                        500 10         0.020000    39.408001   39.408001
14 Dealloc(X18)                                                   -0.020000   39.388000   39.408001
15 val X20 = Cuda(Indicator(Y, 10))                500 10         0.020000    39.408001   39.408001
16 val X21 = Log X19.copy                          500 10         0.020000    39.428001   39.428001
17 val X52 = 1/(X19.copy)                          500 10         0.020000    39.448002   39.448002
18 Print(((0 - (X20 . X21)) / |500|))                             0.000000    39.448002   39.448002
19
20 ................... 30 lines omitted ...............................................................
21
22 cv2_B <~ X71 * d_Convolv(1,0)()/d_cv2_B                        0.000000    36.768002   48.448002
23 val X72 = X71 * d_Convolv(1,0)(cv2_W)/d_X9      500 20 12 12   5.760000    42.528000   54.208000
24 cv2_W <~ X71 * d_Convolv(1,0)(X9)/d_cv2_W                      0.000000    42.528000   54.208000
25 Dealloc(X71)                                                   -6.400000   36.127998   54.208000
26 val X74 = X72 * d_Pooling(2,2,0,true)(X9,X8)/d_X8
27                                                 500 20 24 24   23.040001   59.167999   77.248001
28 Dealloc(X72)                                                   -5.760000   53.408001   77.248001
29 Dealloc(X9)                                                    -5.760000   47.647999   77.248001
30 Dealloc(X8)                                                    -23.040001  24.608000   77.248001
31 cv1_B <~ X74 * d_Convolv(1,0)()/d_cv1_B                        0.000000    24.608000   77.248001
32 cv1_W <~ X74 * d_Convolv(1,0)(X7)/d_cv1_W                      0.000000    24.608000   77.248001
33 Dealloc(X74)                                                   -23.040001  1.568000    77.248001
34 Dealloc(X7)                                                    -1.568000   0.000000    77.248001

Figure 3: A portion of the IR expressions and memory information compiled from Lenet.

The DeepDSL compiler generates Java source code for each of the IR expressions. For example, line 3 loads a batch of images into GPU memory. Lines 4 and 5 perform the forward convolution and pooling computation respectively. Line 18 prints out the training loss. Line 22 updates the bias of the second convolution layer with its gradient.

Some computation (e.g. Log) is always in-place. Therefore we make a copy of a tensor if it is passed to such computation (e.g. Log X19.copy). A gradient update such as cv1_W <~ X74 * d_Convolv(1,0)(X7)/d_cv1_W may be implemented as in-place computation as well, by directly updating the tensor cv1_W when computing the backward filter gradient of the convolution layer cv1.

5 COMPILATION

A DeepDSL program compiles to a Java source program, which uses a small Java library, JCuda, to call CUDA and CuDNN via a JNI wrapper. The compiled Java code does not depend on the DeepDSL compiler or Scala, which makes it more portable and easier to integrate with other applications. Most of the current tools use platform dependent programming languages such as C, Python, and Lua, which compile to specific binaries for each installation.
Since our compiled program is Java, it runs directly on any platform that supports the JVM. Compilation of Java is trivial on most computing platforms. For example, the Java source generated by DeepDSL on a Windows laptop can run on a Linux server without any modifications. While it takes effort to install tools like Tensorflow, Caffe, or Torch on machines with different system architectures, running the Java code generated from DeepDSL requires very little effort.

Gradient derivation and optimization. The gradient derivation and optimization are implemented by the Loop class called in the code below:

1 val loop = Loop(loss, accuracy, (x, y), param, solver)

To derive the gradient of a scalar expression loss with respect to a tensor variable p, we can write val grad = loss.grad(p), which evaluates to a tensor expression. The gradient updates are formed by expressions of the form Update(p, grad, α, β), which represents the computation p = p·α + grad·β.

The gradient updates of all parameters, together with the loss expression, are then passed to optimization functions to obtain a list of IR expressions ready for code generation. The optimization functions implement simplification, loop merging, code motion, vectorization, SSA transformation, common sub-expression elimination, inlining, tensor deallocation, and code scheduling.
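As a toy illustration of symbolic gradient derivation, the following self-contained Scala sketch derives the derivative of a scalar expression by recursively applying the sum and product rules. The case classes are made up for the example; DeepDSL's actual grad operates on tensor expressions.

// Toy scalar expression IR, not DeepDSL's tensor expressions.
sealed trait E
case class C(v: Double) extends E
case class V(n: String) extends E
case class Plus(a: E, b: E) extends E
case class Times(a: E, b: E) extends E

// Symbolic derivative d(e)/d(x), built as a new expression tree.
def grad(e: E, x: String): E = e match {
  case C(_)        => C(0)
  case V(n)        => if (n == x) C(1) else C(0)
  case Plus(a, b)  => Plus(grad(a, x), grad(b, x))                      // sum rule
  case Times(a, b) => Plus(Times(grad(a, x), b), Times(a, grad(b, x))) // product rule
}

// loss = w*x + b; grad(loss, "w") yields a tree that simplifies to x.
val loss = Plus(Times(V("w"), V("x")), V("b"))
val dw   = grad(loss, "w")

The unsimplified result still contains terms like Times(C(1), V("x")), which is exactly why derived gradients are passed through the simplification and optimization passes listed above before code generation.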
Generated code. The compiled Java code includes just one class. The class fields include the objects that handle computations such as convolution and activation, and the objects that store tensors such as parameters and gradients. The class includes one method for the training loop and one method for testing.

The generated code includes the corresponding IR expressions in the comments to improve readability. For example, the code below shows the Java statements generated for the forward inference of max pooling. Note that the variable names in the comments have no relation to the variable names in the code, as they are independently generated.

1 // val X9 = Pooling(2,2,0,true)(X8)
2 JCudaTensor x16;
3 JCudaTensor x17;
4 x17 = x9;
5 x16 = x18.forward(x17);

It is easy to perform some customization of the generated code, such as changing the number of training iterations or reducing the learning rate at a specified interval. Users can also use the generated code as a component of another application.

Persistency. The compiled Java source includes code to save the trained parameters into files after the training is complete. When the same program, or another program compiled from the same network, starts, it can load the same parameters to resume training or for forward testing.

Workspace. The convolution layers in the compiled Java source share the same workspace. Thus, users can place a limit on the total workspace by making one change. By reducing the workspace and using the memory efficient mode, users may reduce memory consumption to fit into a particular GPU.

6 PERFORMANCE

The primary compilation target of DeepDSL is a Java program that runs on an Nvidia GPU through its CUDA/CuDNN library (DeepDSL has limited support for CPU, with features sufficient to implement Lenet). DeepDSL can encode well-known networks such as Alexnet, Overfeat, GoogleNet, Vgg, and Deep Residual Networks (ResNet). In this section, we evaluate the performance of DeepDSL against Caffe and Tensorflow using these networks. To be consistent, the DeepDSL, Caffe, and Tensorflow tests all follow the same Caffe prototxt definitions. Specifically, for Alexnet and GoogleNet, we followed the prototxt from Caffe's website (github.com/BVLC/caffe/tree/master/models); for Vgg (Vgg-16), we followed the prototxt from github.com/ruimashita/caffe-train/blob/master/vgg.train_val.prototxt; for Overfeat, we followed the prototxt from IntelLabs (github.com/IntelLabs/Latte.jl/blob/master/benchmarks/overfeat/overfeat.prototxt); and for Deep Residual Network (ResNet-50), we followed the prototxt from the author's website (github.com/KaimingHe/deep-residual-networks/blob/master/prototxt/ResNet-50-deploy.prototxt). The Tensorflow implementations of these networks are either modified from versions of convnet-benchmarks (github.com/soumith/convnet-benchmarks) or created from scratch. Note there are a couple of differences between the tests of Tensorflow and those of DeepDSL and Caffe. The training data in the Tensorflow tests is generated from random data in memory, while the DeepDSL and Caffe tests load real images from the Lmdb database. Also, the GoogleNet test of Tensorflow only includes the main branch of GoogleNet, while DeepDSL and Caffe train with the full network. All our tests are trained with ImageNet images that have been resized to 224 by 224 (though DeepDSL does support random cropping of images when their sizes are larger than the specified dimensions).

[Figure 4: bar chart comparing time in milliseconds per iteration for DeepDSL, DeepDSL*, Tensorflow, and Caffe across networks and batch sizes.]

Figure 4: Runtime performance of DeepDSL, Tensorflow, and Caffe (1 forward/backward iteration). DeepDSL and DeepDSL* denote performance in runtime-efficient and memory-efficient mode respectively. The names of the networks are followed by the batch size. Caffe failed to run GoogleNet (batch 256) and ResNet (batch 64) and Tensorflow failed to run ResNet (batch 64) due to exhaustion of GPU memory.

The tests are run on a server with a single NVIDIA Tesla K40C GPU equipped with 12 gigabytes of memory. The server runs the CentOS 7 Linux distribution. DeepDSL uses the JCuda 0.8.0RC binding that runs against CUDA 8.0.27 (previous CUDA versions such as 6.5 or 7.x can also be used with minor modifications). The DeepDSL programs are publicly available at github.com/deepdsl/deepdsl.

The runtime performance of DeepDSL, Tensorflow, and Caffe is compared in Figure 4, where DeepDSL has a significant advantage over Caffe in Alexnet, Overfeat, and Googlenet, while being only marginally slower than Caffe in Vgg and ResNet (Deep Residual Network). DeepDSL is also faster than Tensorflow in Alexnet, Googlenet, and ResNet, while slightly slower in Overfeat and Vgg.

The memory consumption of DeepDSL, Tensorflow, and Caffe is compared in Figure 5, where DeepDSL uses less memory in Alexnet, Googlenet, and ResNet, while Caffe uses less memory in Overfeat and Vgg. DeepDSL uses significantly less memory for Googlenet and ResNet, where Caffe runs out of memory for Googlenet at batch size 256 and ResNet at batch size 64. DeepDSL uses less memory than Tensorflow in all tests except Vgg. Tensorflow also ran out of memory for ResNet at batch size 64. It is unclear why Tensorflow uses a similar amount of memory for Overfeat with batch sizes 128 and 256.

In the tests, DeepDSL programs are run in the runtime efficient mode, which caches tensor objects, and in the memory efficient mode (denoted by DeepDSL*), which deallocates tensor objects as soon as possible.
DeepDSL* uses 10 to 30% less memory with a similar percentage of runtime overhead, except for Vgg and Googlenet, where the runtime overhead is relatively smaller than the memory saving. DeepDSL also lets CUDNN pick the convolution algorithms with maximum performance. In Overfeat (batch size 128), out of the 4290 megabytes of GPU memory consumed, more than 2700 megabytes are for convolution workspace. While Caffe uses less memory in this test, it also runs much slower.

[Figure 5: bar chart comparing peak GPU memory in megabytes for DeepDSL, DeepDSL*, TensorFlow, and Caffe across networks and batch sizes.]

Figure 5: Peak GPU memory use of DeepDSL, Tensorflow, and Caffe during training. DeepDSL and DeepDSL* denote performance in runtime-efficient and memory-efficient mode respectively. Caffe ran out of GPU memory for Googlenet (batch 256) and ResNet (batch 64). Tensorflow ran out of memory for ResNet (batch 64).

Among all tests, DeepDSL either outperforms Caffe by a large margin or uses significantly less memory, with Vgg being the only exception where Caffe uses slightly less time and memory. DeepDSL also has competitive runtime performance when compared with Tensorflow.

As a side note, while running DeepDSL requires little setup, installing libraries such as Caffe and Tensorflow requires a list of dependencies and long compilation sessions. Consequently, we skipped testing with Torch7 due to time limitations.

7 RELATED WORK

In this section, we review some popular tools: Torch7, Theano, Caffe, TensorFlow, and CNTK, and newer ones such as Chainer (Tokui et al., 2015) and MXNet (Chen et al., 2015b).

Torch7 (Collobert et al., 2011) uses the Lua language for integration with C programs and achieves C-like performance. It has a large set of optimized routines to support CPU, GPU, mobile and FPGA backends. Theano (Bergstra et al., 2010), hosted in Python, allows users to define symbolic variables and functions (using NumPy, van der Walt et al. (2011)) to encode DL networks and compiles the symbolic expressions to C. Theano performs optimizations such as normalizing mathematical expressions, numerical stabilization, and code specialization during compilation, and the target code can run on CPU or GPU devices. Caffe (Jia et al., 2014) constructs a graph for a DL network by connecting the layers with the 4D arrays that store tensors. Caffe separates its DL network model representation (using ProtocolBuffers, Google) from the actual model parameter calculations. With its layered structure, Caffe computes the memory needed for each layer and reserves memory accordingly. TensorFlow (Abadi et al., 2016) shares largely common design paradigms with Caffe. Its core is written in C++ and its computation is described with a graph where tensors and layers are alternately arranged. Unlike Caffe's tensor, TensorFlow's tensor is a typed multi-dimensional array that is persistent and mutable. Like TensorFlow and Caffe, CNTK describes a network with a configuration file. CNTK can encode an arbitrary computational network and it can map computation onto multiple GPUs across multiple machines by assigning each computation node to a particular CPU/GPU device.

Compared to the “define-and-run” paradigm (adopted by Torch7, Theano, and Caffe), Chainer
(Tokui et al., 2015) follows a “define-by-run” pattern, which essentially allows modifying the control flow during the execution of a computational graph. MXNet (Chen et al., 2015b) provides both declarative and imperative programming styles and support for multiple languages by embedding into multiple host languages and unifying the execution with one backend engine.

The major difference between DeepDSL and the above tools is that DeepDSL is fully abstract until code generation. This means that DeepDSL's intermediate representation can be compiled to different languages or to run on different platforms. While the current compilation target of DeepDSL is Java, targeting a different language mainly involves building an interface library to call CUDA routines, while the optimization components of DeepDSL remain the same. This separation between optimization and code generation also means that we can apply generic optimization techniques at the IR level without worrying about the underlying data structure, such as the representation of tensors or how the layers are connected. In fact, the optimization of DeepDSL involves nothing specific to deep neural networks, since its passes are mostly standard compilation techniques.

Note that while Theano and DeepDSL are similar in the way that DSL expressions are optimized and transformed, there are two important differences that make DeepDSL more efficient and flexible. The first is that while Theano expressions are treated as graphs during optimization, DeepDSL expressions are optimized in two phases. The first phase is at the expression level, where the training loss and the parameter gradients go through the process of simplification, loop merging, code motion, and vectorization. In the second phase, DeepDSL expressions are reduced to static single assignment form for additional optimization such as common subexpression elimination, code scheduling, inlining of in-place computation, and tensor deallocation.

The second is that DeepDSL generates target code using a single-pass generator (about 1200 lines of code) that prints Java source code as strings to a file. The input of the generator is DeepDSL expressions, which are completely independent from the generated code. The generated Java code is high-level and human readable, with a simple Java API that allows customization. This clean separation between DSL expressions and target code also allows independent evolution of DSL optimization and target-code generation. In contrast, the code generation of Theano is embedded in its functions for low-level computation and is tied to C code that is not readable to users.

8 CONCLUSION

We have developed a domain specific language, DeepDSL, that compiles to Java source programs for deep learning. The compiled DeepDSL programs are easy to use and extend, as their primary dependencies are just the JCuda and CUDA libraries. DeepDSL programs are also efficient; their runtime performance and memory consumption are significantly better than Caffe and Tensorflow on some DL networks. DeepDSL performs static analysis for early error detection and provides a readable intermediate representation and memory consumption analysis. DeepDSL allows compact encoding of complex networks, and since it is based on the strongly typed language Scala, writing DeepDSL programs is less error prone than in dynamic languages such as Python.

While the compiled DeepDSL programs are efficient, DeepDSL itself is not optimized. Though compiling simpler networks such as Alexnet takes a few seconds, the compilation of complex networks such as ResNet can take a few minutes.
As future work, we plan to optimize DeepDSL to improve its compilation efficiency. Also, while the memory efficient mode of DeepDSL can reduce GPU memory consumption, it may not be enough for memory intensive networks such as Vgg. We therefore also plan to implement GPU memory virtualization by paging out tensors that are not immediately needed. | HkMceoZVl | ICLR is not the right venue for this paper. | 6: Marginally above acceptance threshold | The paper presents DeepDSL, a "domain specific language (DSL) embedded in
Scala, that compiles deep networks written in DeepDSL to Java source code". It
introduces its syntax and the key concepts which differentiate it
from other existing frameworks, such as Torch7, Theano, Caffe, TensorFlow,
CNTK, Chainer and MXNet. It also benchmarks speed and memory usage against
TensorFlow and Caffe on a variety of convolutional neural network architectures.
The paper is clear and well written and it does a good job of presenting DeepDSL
in the context of existing deep learning frameworks.
However, I don't think ICLR is the right venue for this type of work. Some of
the ideas it presents are interesting, but overall the paper lacks novelty and
potential impact and stays firmly within the realm of deep learning framework
whitepapers such as [1,2,3,4], which to my knowledge don't have a precedent of
being accepted at venues like ICLR.
[1]: Bergstra, James, et al. "Theano: A CPU and GPU math compiler in Python."
Proc. 9th Python in Science Conf. 2010.
[2]: Bastien, Frédéric, et al. "Theano: new features and speed improvements."
arXiv preprint arXiv:1211.5590 (2012).
[3]: Abadi, Martín, et al. "Tensorflow: Large-scale machine learning on
heterogeneous distributed systems." arXiv preprint arXiv:1603.04467 (2016).
[4]: The Theano Development Team et al. "Theano: A Python framework for fast
computation of mathematical expressions." arXiv preprint arXiv:1605.02688
(2016).
UPDATE: The rating has been revised to a 6 following the authors' reply. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
r1kQkVFgl | ICLR.cc/2017/conference | 2017 | Learning Python Code Suggestion with a Sparse Pointer Network | ["Avishkar Bhoopchand", "Tim Rockt\u00e4schel", "Earl Barr", "Sebastian Riedel"] | To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | ["sparse pointer network", "identifiers", "python code suggestion", "languages", "neural language model", "past", "developer productivity", "modern", "development environments", "ides"] | ABSTRACTTo enhance developer productivity, all modern integrated development environ-ments (IDEs) include code suggestion functionality that proposes likely next tokensat the cursor. While current IDEs work well for statically-typed languages, their re-liance on type annotations means that they do not provide the same level of supportfor dynamic programming languages as for statically-typed languages. Moreover,suggestion engines in modern IDEs do not propose expressions or multi-statementidiomatic code. Recent work has shown that language models can improve codesuggestion systems by learning from software repositories. This paper introduces aneural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M linesof Python code crawled from GitHub. On this corpus, we found standard neurallanguage models to perform well at suggesting local phenomena, but struggle torefer to identifiers that are introduced many tokens in the past. By augmenting aneural language model with a pointer network specialized in referring to predefinedclasses of identifiers, we obtain a much lower perplexity and a 5percentage pointsincrease in accuracy for code suggestion compared to an LSTM baseline. In fact,this increase in code suggestion accuracy is due to a 13times more accurate pre-diction of identifiers. 
Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | ["sparse pointer network", "identifiers", "python code suggestion", "languages", "neural language model", "past", "developer productivity", "modern", "development environments", "ides"] | ABSTRACT

To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage point increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.

1 INTRODUCTION

Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.

Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships, like suggesting calling a function that has been defined many tokens before.

To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free form generation to deal with local phenomena, based on the current context.

Our contributions are threefold: (i) we release a code suggestion corpus of 41M lines of Python code crawled from GitHub; (ii) we introduce a sparse attention mechanism that efficiently captures very long-range dependencies for code suggestion in this dynamic programming language; and (iii) we provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.

2 METHODS

We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.

2.1 NEURAL LANGUAGE MODEL

Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program.
For example, for the sequence $S = a_1, \ldots, a_N$, the joint probability of $S$ factorizes according to

$P(S) = P(a_1) \prod_{t=2}^{N} P(a_t \mid a_{t-1}, \ldots, a_1)$    (1)

where the parameters are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next $M$ tokens $a_{t+1}, \ldots, a_{t+M}$ that maximize Equation 1:

$\arg\max_{a_{t+1}, \ldots, a_{t+M}} P(a_1, \ldots, a_t, a_{t+1}, \ldots, a_{t+M})$    (2)

In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM; Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step $t$ (denoted $h_t$ here) according to

$P(a_t = \tau \mid a_{t-1}, \ldots, a_1) = \frac{\exp(v_\tau^T h_t + b_\tau)}{\sum_{\tau'} \exp(v_{\tau'}^T h_t + b_{\tau'})}$    (3)

where $v_\tau$ is a parameter vector associated with token $\tau$ in the vocabulary.

Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies, like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.

2.2 ATTENTION

A straight-forward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous $K$ output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).

Formally, an attention mechanism with a fixed memory $M_t \in \mathbb{R}^{k \times K}$ of $K$ vectors $m_i \in \mathbb{R}^k$ for $i \in [1, K]$ produces an attention distribution $\alpha_t \in \mathbb{R}^K$ and a context vector $c_t \in \mathbb{R}^k$ at each time step $t$ according to Equations 4 to 7, where $W^M, W^h \in \mathbb{R}^{k \times k}$ and $w \in \mathbb{R}^k$ are trainable parameters and $\mathbf{1}_K$ represents a $K$-dimensional vector of ones:

$M_t = [m_1 \ldots m_K] \in \mathbb{R}^{k \times K}$    (4)
$G_t = \tanh(W^M M_t + (W^h h_t)\,\mathbf{1}_K^T) \in \mathbb{R}^{k \times K}$    (5)
$\alpha_t = \mathrm{softmax}(w^T G_t) \in \mathbb{R}^{1 \times K}$    (6)
$c_t = M_t \alpha_t^T \in \mathbb{R}^k$    (7)

For language modeling, we populate $M_t$ with a fixed window of the previous $K$ LSTM output vectors. To obtain a distribution over the next token, we combine the context vector $c_t$ of the attention mechanism with the output vector $h_t$ of the LSTM using a trainable projection matrix $W^A \in \mathbb{R}^{k \times 2k}$. The resulting final output vector $n_t \in \mathbb{R}^k$ encodes the next-word distribution and is projected to the size of the vocabulary $|V|$. Subsequently, we apply a softmax to arrive at a probability distribution $y_t \in \mathbb{R}^{|V|}$ over the next token. This process is presented in Equations 8 and 9, where $W^V \in \mathbb{R}^{|V| \times k}$ and $b^V \in \mathbb{R}^{|V|}$ are trainable parameters:

$n_t = \tanh(W^A [h_t; c_t]) \in \mathbb{R}^k$    (8)
$y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|}$    (9)

The problem with the attention mechanism above is that it quickly becomes computationally expensive for large $K$. Moreover, attending over many memories can make training hard, as a lot of noise is introduced in the early stages of optimization, where the LSTM outputs (and thus the memory $M_t$) are more or less random.
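As a concrete reading of Equations 4 to 7, here is a minimal numeric Scala sketch over plain arrays. The parameters wM, wH and w stand in for the trained $W^M$, $W^h$ and $w$; no deep learning library is assumed and the values would come from training in practice.

def softmax(xs: Array[Double]): Array[Double] = {
  val m = xs.max // subtract max for numerical stability
  val e = xs.map(x => math.exp(x - m))
  val s = e.sum
  e.map(_ / s)
}

def matVec(w: Array[Array[Double]], v: Array[Double]): Array[Double] =
  w.map(row => row.zip(v).map { case (a, b) => a * b }.sum)

// Attention over K memory vectors m_1..m_K given the current LSTM output hT.
def attend(memory: Array[Array[Double]], // K rows, each of size k
           hT: Array[Double],            // h_t, size k
           wM: Array[Array[Double]],     // k x k
           wH: Array[Array[Double]],     // k x k
           w: Array[Double]): (Array[Double], Array[Double]) = {
  val wh = matVec(wH, hT)
  // score_i = w^T tanh(W^M m_i + W^h h_t), i.e. the i-th column of G_t (Eqs. 5-6)
  val scores = memory.map { mi =>
    val gi = matVec(wM, mi).zip(wh).map { case (a, b) => math.tanh(a + b) }
    gi.zip(w).map { case (a, b) => a * b }.sum
  }
  val alpha = softmax(scores)               // attention distribution (Eq. 6)
  val ct = Array.tabulate(hT.length) { d => // context vector c_t = M_t alpha^T (Eq. 7)
    memory.indices.map(i => memory(i)(d) * alpha(i)).sum
  }
  (alpha, ct)
}

The cost of attending grows with K at every time step, which is the expense that the sparse pointer network below keeps small by restricting the memory to identifier representations.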
To alleviate these problems, we now turn to pointer networks and a simple heuristic for populating $M_t$ that permits the efficient retrieval of identifiers in a large history of Python code.

2.3 SPARSE POINTER NETWORK

We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous $K$ identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.

Formally, at time step $t$, the sparse pointer network operates on a memory $M_t \in \mathbb{R}^{k \times K}$ of only the $K$ previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector $m_t = [id_1, \ldots, id_K] \in \mathbb{N}^K$ of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).

As before, we calculate a context vector $c_t$ using the attention mechanism (Equation 7), but on a memory $M_t$ only containing representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from

$s_t[i] = \begin{cases} \alpha_t[j] & \text{if } m_t[j] = i \\ C & \text{otherwise} \end{cases}$    (10)
$i_t = \mathrm{softmax}(s_t) \in \mathbb{R}^{|V|}$    (11)

where $C$ is a large negative constant (e.g. $-1000$). In addition, we calculate a next-word distribution from a standard neural language model:

$y_t = \mathrm{softmax}(W^V h_t + b^V) \in \mathbb{R}^{|V|}$    (12)

and we use a controller to calculate a distribution $\lambda_t \in \mathbb{R}^2$ over the language model and pointer network for the final weighted next-word distribution $y_t^*$ via

$h_t^\lambda = [h_t; x_t; c_t] \in \mathbb{R}^{3k}$    (13)
$\lambda_t = \mathrm{softmax}(W^\lambda h_t^\lambda + b^\lambda) \in \mathbb{R}^2$    (14)
$y_t^* = [y_t \; i_t]\, \lambda_t \in \mathbb{R}^{|V|}$    (15)

Here, $x_t$ is the representation of the input token, and $W^\lambda \in \mathbb{R}^{2 \times 3k}$ and $b^\lambda \in \mathbb{R}^2$ are a trainable weight matrix and bias respectively. This controller is conditioned on the input, output and context representations. This means that, for deciding whether to refer to an identifier or to generate from the global vocabulary, the controller has access to information from the encoded next-word distribution $h_t$ of the standard neural language model, as well as the attention-weighted identifier representations $c_t$ from the current history.

Figure 1 overviews this process.

[Figure 1: Sparse pointer network for code suggestion on a Python code snippet, showing the next-word distributions of the language model and identifier attention and their weighted combination through λ.]

In the figure, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user point of view, the suggestion would be the same in both cases.
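Operationally, Equations 10 to 15 scatter the $K$ attention weights into their vocabulary positions and then gate between two distributions. A hedged Scala sketch follows; the helper names are invented and the toy numbers are placeholders for model outputs.

def softmax(xs: Array[Double]): Array[Double] = {
  val m = xs.max; val e = xs.map(x => math.exp(x - m)); val s = e.sum; e.map(_ / s)
}

// Eqs. 10-11: scatter attention weights alpha over the identifiers' vocabulary
// ids; every other vocabulary entry gets the large negative constant C.
def pointerDist(alpha: Array[Double], ids: Array[Int], vocabSize: Int): Array[Double] = {
  val C = -1000.0
  val s = Array.fill(vocabSize)(C)
  for (j <- ids.indices) s(ids(j)) = alpha(j)
  softmax(s)
}

// Eq. 15: mix the language-model and pointer distributions with the
// controller output lambda = (lambda_lm, lambda_ptr).
def mix(lm: Array[Double], ptr: Array[Double], lambda: Array[Double]): Array[Double] =
  Array.tabulate(lm.length)(i => lambda(0) * lm(i) + lambda(1) * ptr(i))

val alpha = Array(0.7, 0.2, 0.1)  // attention over K = 3 past identifiers
val ids   = Array(42, 7, 13)      // their ids in a vocabulary of size 100
val lm    = Array.fill(100)(0.01) // uniform stand-in for Eq. 12's output
val yt    = mix(lm, pointerDist(alpha, ids, 100), Array(0.4, 0.6))

After the softmax in Equation 11, entries that received C are effectively zero, so $i_t$ is concentrated on the $K$ identifiers in memory, which is what makes the pointer sparse.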
3 LARGE-SCALE PYTHON CORPUS

Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website Pypl (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the 3rd most common language in terms of the number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).

We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code. Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we select Python projects with more than 100 stars, sort by the number of forks descending, and take the top 1000 projects. We then removed projects that did not compile with Python3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.

Table 1: Python corpus statistics.

Dataset | #Projects | #Files  | #Lines     | #Tokens     | Vocabulary Size
Train   | 489       | 118 298 | 26 868 583 | 88 935 698  | 2 323 819
Dev     | 179       | 26 466  | 5 804 826  | 18 147 341  |
Test    | 281       | 43 062  | 8 398 100  | 30 178 356  |
Total   | 949       | 187 826 | 41 071 509 | 137 261 395 |

[Figure 2: Example of the Python code normalization. Original file on the left and normalized version on the right.]

3.1 NORMALIZATION OF IDENTIFIERS

Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function) concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file. Identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016; White et al., 2015), we replace numerical constant tokens with $NUM$, remove comments, reformat the code, and replace tokens appearing less than five times with an $OOV$ (out of vocabulary) token.
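A simplified Scala sketch of this renaming is shown below. The real pipeline is AST- and scope-aware and uses random rather than sequential numbers, so the (token, group) input format and the per-group counter are assumptions made for illustration.

import scala.collection.mutable

// Rename each locally defined identifier to "<group><n>"; identifiers tagged
// "external" (API and library references) are kept as-is.
def normalize(tokens: Seq[(String, String)]): Seq[String] = {
  val renamed = mutable.Map.empty[String, String]
  val counter = mutable.Map.empty[String, Int].withDefaultValue(0)
  tokens.map {
    case (tok, "external") => tok
    case (tok, group) =>
      renamed.getOrElseUpdate(tok, {
        counter(group) += 1
        s"$group${counter(group)}"
      })
  }
}

normalize(Seq(("load", "function"), ("path", "argument"),
              ("base", "variable"), ("open", "external"), ("path", "argument")))
// => Seq("function1", "argument1", "variable1", "open", "argument1")

Note that, unlike the scope-aware original, this toy version maps a repeated name such as path back to the same placeholder regardless of scope.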
4 EXPERIMENTS

Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for n \in \{3, 4, 5, 6\} with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).

We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990) with a history of 20 identifier representations. We use an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75. We found during testing that the baseline models performed worse with the same batch size as the sparse pointer network of 30. We therefore chose to report the stronger results obtained with a batch size of 75.

All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping of 5 (Pascanu et al., 2013), and randomly initialized parameters in the interval (-0.05, 0.05). As regularizer, we use a dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.

5 RESULTS

We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and the top five predictions (Acc@5). The results are summarized in Table 2.

Table 2: Perplexity (PP), Accuracy (Acc) and Accuracy among top 5 predictions (Acc@5). Acc and Acc@5 are reported over all tokens (All), identifiers only (IDs) and all other tokens (Other).
Model                    Train PP   Dev PP   Test PP   Acc [%] All / IDs / Other   Acc@5 [%] All / IDs / Other
3-gram                   12.90      24.19    26.90     13.19 / – / –               50.81 / – / –
4-gram                   7.60       21.07    23.85     13.68 / – / –               51.26 / – / –
5-gram                   4.52       19.33    21.22     13.90 / – / –               51.49 / – / –
6-gram                   3.37       18.73    20.17     14.51 / – / –               51.76 / – / –
LSTM                     9.29       13.08    14.01     57.91 / 2.1 / 62.8          76.30 / 4.5 / 82.6
LSTM w/ Attention 20     7.30       11.07    11.74     61.30 / 21.4 / 64.8         79.32 / 29.9 / 83.7
LSTM w/ Attention 50     7.09       9.83     10.05     63.21 / 30.2 / 65.3         81.69 / 41.3 / 84.1
Sparse Pointer Network   6.41       9.40     9.18      62.97 / 27.3 / 64.9         82.62 / 43.6 / 84.5

We can confirm that for code suggestion neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, which increased from an accuracy of 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements for perplexity and accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations.

5.1 QUALITATIVE ANALYSIS

Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.

Figures 3e-h show the use-case referring to a class attribute declared 67 tokens in the past. Only the Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3i demonstrate that this model distinguished attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.

Figure 3: Code suggestion example involving a reference to a variable (a-d), a long-range dependency (e-h), and the attention weights of the Sparse Pointer Network (i). Panels (a) and (e) show the code snippets; the remaining panels show the corresponding next-word predictions of the LSTM model, the LSTM with attention window 50, and the Sparse Pointer Network.
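The reported metrics are simple to reproduce from per-token model outputs; below is a small NumPy sketch of perplexity and top-k accuracy as defined here. The function names are ours, and this is not the paper's evaluation code.

```python
import numpy as np

def perplexity(log_probs):
    """Perplexity from per-token natural-log probabilities of the gold tokens."""
    return float(np.exp(-np.mean(log_probs)))

def top_k_accuracy(pred_dists, targets, k=5):
    """Fraction of positions where the gold id is among the k highest-scoring tokens.

    pred_dists: (T, |V|) next-token distributions; targets: (T,) gold token ids.
    """
    topk = np.argsort(pred_dists, axis=1)[:, -k:]
    hits = (topk == targets[:, None]).any(axis=1)
    return float(hits.mean())

# Toy check: a model that always puts most mass on token 0
dists = np.tile(np.array([[0.7, 0.2, 0.1]]), (4, 1))
targets = np.array([0, 0, 1, 2])
print(perplexity(np.log(dists[np.arange(4), targets])))  # ~3.18
print(top_k_accuracy(dists, targets, k=1))               # 0.5
```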
6 RELATED WORK

Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012), who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.'s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.'s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.

While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.'s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.'s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.

The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016), who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. (2016) added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al., who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.

An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs), which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages, such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.
Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short, whereas code suggestion requires capturing very long-range dependencies, which we addressed by a filtered view on the memory of previous identifier representations.

7 CONCLUSIONS AND FUTURE WORK

In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and the code for our models is released at https://github.com/uclmr/pycodesuggest.

The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).

ACKNOWLEDGMENTS

This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award. | SJ8ttDXNx | 6: Marginally above acceptance threshold | This paper presents an improved neural language model designed for selected long-term dependencies, i.e., to predict the next identifier more accurately for a dynamic programming language such as Python. The improvements are obtained by:
1) replacing the fixed-window attention with a pointer network, in which the memory consists only of context representations of the previous K identifiers introduced over the entire history.
2) combining a conventional LSTM-based neural language model with this sparse pointer network via a controller, which linearly combines the predictions of both components using dynamic weights determined by the input, hidden state, and context representations at each time step.
Such a model avoids the need for a large attention window to predict the next identifier, which usually requires long-term dependencies in the programming language. This is partly validated by the experiments on the Python codebase (which is another contribution of this paper).
While the paper still misses some critical information that I would like to see, including how the sparse pointer network's performance changes with different sizes of K, how computationally efficient it is at both training and inference time compared to an LSTM with attention of various window sizes, and ablation experiments on how much (1) and (2) contribute respectively, it might be of interest to the ICLR community to see it accepted.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
r1kQkVFgl | ICLR.cc/2017/conference | 2017 | Learning Python Code Suggestion with a Sparse Pointer Network | ["Avishkar Bhoopchand", "Tim Rockt\u00e4schel", "Earl Barr", "Sebastian Riedel"] | To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | ["sparse pointer network", "identifiers", "python code suggestion", "languages", "neural language model", "past", "developer productivity", "modern", "development environments", "ides"] |
1 INTRODUCTION

Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.

Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships like suggesting calling a function that has been defined many tokens before.

To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free form generation to deal with local phenomena, based on the current context.

Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-range dependencies for code suggestion of this dynamic programming language efficiently, and (iii) We provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.

2 METHODS

We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.

2.1 NEURAL LANGUAGE MODEL

Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program.
For example, for the sequence S = a_1, \ldots, a_N, the joint probability of S factorizes according to

P(S) = P(a_1) \prod_{t=2}^{N} P(a_t \mid a_{t-1}, \ldots, a_1)   (1)

where the parameters are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next M tokens a_{t+1}, \ldots, a_{t+M} that maximize Equation 1

\arg\max_{a_{t+1}, \ldots, a_{t+M}} P(a_1, \ldots, a_t, a_{t+1}, \ldots, a_{t+M}).   (2)

In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step t (denoted h_t here) according to

P(a_t = \tau \mid a_{t-1}, \ldots, a_1) = \frac{\exp(v_\tau^T h_t + b_\tau)}{\sum_{\tau'} \exp(v_{\tau'}^T h_t + b_{\tau'})}   (3)

where v_\tau is a parameter vector associated with token \tau in the vocabulary.

Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.

2.2 ATTENTION

A straight-forward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous K output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).

Formally, an attention mechanism with a fixed memory M_t \in \mathbb{R}^{k \times K} of K vectors m_i \in \mathbb{R}^k for i \in [1, K] produces an attention distribution \alpha_t \in \mathbb{R}^K and context vector c_t \in \mathbb{R}^k at each time step t according to Equations 4 to 7. Furthermore, W^M, W^h \in \mathbb{R}^{k \times k} and w \in \mathbb{R}^k are trainable parameters. Finally, note that 1_K represents a K-dimensional vector of ones.

M_t = [m_1 \ldots m_K] \in \mathbb{R}^{k \times K}   (4)

G_t = \tanh(W^M M_t + (W^h h_t) 1_K^T) \in \mathbb{R}^{k \times K}   (5)

\alpha_t = \mathrm{softmax}(w^T G_t) \in \mathbb{R}^{1 \times K}   (6)

c_t = M_t \alpha_t^T \in \mathbb{R}^k   (7)

For language modeling, we populate M_t with a fixed window of the previous K LSTM output vectors. To obtain a distribution over the next token we combine the context vector c_t of the attention mechanism with the output vector h_t of the LSTM using a trainable projection matrix W^A \in \mathbb{R}^{k \times 2k}. The resulting final output vector n_t \in \mathbb{R}^k encodes the next-word distribution and is projected to the size of the vocabulary |V|. Subsequently, we apply a softmax to arrive at a probability distribution y_t \in \mathbb{R}^{|V|} over the next token. This process is presented in Equation 9, where W^V \in \mathbb{R}^{|V| \times k} and b^V \in \mathbb{R}^{|V|} are trainable parameters.

n_t = \tanh\left(W^A \begin{bmatrix} h_t \\ c_t \end{bmatrix}\right) \in \mathbb{R}^k   (8)

y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|}   (9)

The problem of the attention mechanism above is that it quickly becomes computationally expensive for large K. Moreover, attending over many memories can make training hard as a lot of noise is introduced in early stages of optimization where the LSTM outputs (and thus the memory M_t) are more or less random.
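As a concrete illustration of Equations 4-9, the following NumPy sketch computes one attention read-out over a window of K previous LSTM output vectors. Parameter names and toy dimensions are illustrative and not taken from the released implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_readout(M, h, W_M, W_h, w, W_A, W_V, b_V):
    """Attention over a window of K previous LSTM outputs (Eqs. 4-9, sketch).

    M: (k, K) memory of previous output vectors; h: (k,) current LSTM output.
    """
    G = np.tanh(W_M @ M + (W_h @ h)[:, None])   # Eq. 5, broadcast over the K columns
    alpha = softmax(w @ G)                      # Eq. 6: (K,) attention weights
    c = M @ alpha                               # Eq. 7: context vector
    n = np.tanh(W_A @ np.concatenate([h, c]))   # Eq. 8: combine output and context
    return softmax(W_V @ n + b_V)               # Eq. 9: next-token distribution

# Toy usage with random parameters
k, K, V = 4, 6, 10
rng = np.random.default_rng(1)
y = attention_readout(rng.normal(size=(k, K)), rng.normal(size=k),
                      rng.normal(size=(k, k)), rng.normal(size=(k, k)), rng.normal(size=k),
                      rng.normal(size=(k, 2 * k)), rng.normal(size=(V, k)), np.zeros(V))
assert y.shape == (V,) and abs(y.sum() - 1.0) < 1e-6
```

Note how the cost of Equations 5-7 grows linearly in K at every time step, which is the expense the sparse pointer network of Section 2.3 avoids by restricting the memory to identifier representations.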
| HJ7hhhfVl | Review | 6: Marginally above acceptance threshold | This paper uses a pointer network over a sparse window of identifiers to improve code suggestion for dynamically-typed languages. Code suggestion seems an area where attention and/or pointers truly show an advantage in capturing long-term dependencies.
The sparse pointer method does seem to provide better results than attention for similar window sizes: specifically, comparing a window size of 20 for the attention and sparse pointer methods shows the sparse pointer winning fairly definitively across the board. Given that a major advantage of the pointer method is being able to use a large window size well, thanks to the supervision the pointer provides, it was unfortunate (though understandable due to potential memory issues) not to see larger window sizes. Having a different batch size for the sparse pointer and attention models is also unfortunate, given that it complicates an otherwise straight comparison between the two models.
The construction and filtering of the Python corpus sound promising, but as of now the corpus is still inaccessible (listed in the paper as TODO). Given that code suggestion seems an interesting area for long-term-dependency work, it may be a promising avenue for future task exploration.
Overall, this paper and the dataset are likely an interesting contribution, even though there are a few potential issues. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
r1kQkVFgl | ICLR.cc/2017/conference | 2017 | Learning Python Code Suggestion with a Sparse Pointer Network | ["Avishkar Bhoopchand", "Tim Rockt\u00e4schel", "Earl Barr", "Sebastian Riedel"] |
Furthermore, a qualitative analysis shows this model indeedcaptures interesting long-range dependencies, like referring to a class memberdefined over 60tokens in the past.1 I NTRODUCTIONIntegrated development environments (IDEs) are essential tools for programmers. Especially when adeveloper is new to a codebase, one of their most useful features is code suggestion: given a piece ofcode as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier ora function call, including API calls. While extensive support exists for statically-typed languagessuch as Java, code suggestion for dynamic languages like Python is harder and less well supportedbecause of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not proposeexpressions or multi-statement idiomatic code.Recently, methods from statistical natural language processing (NLP) have been used to train codesuggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis &Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to scorepossible completions. Neural language models for code suggestion (White et al., 2015; Das &Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, thesestandard neural language models are limited by the so-called hidden state bottleneck, i.e., all contextinformation has to be stored in a fixed-dimensional internal vector representation. This limitationrestricts such models to local phenomena and does not capture very long-range semantic relationshipslike suggesting calling a function that has been defined many tokens before.To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic forcrawling high-quality code repositories from GitHub. We investigate, for the first time, the use ofattention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement1Under review as a conference paper at ICLR 2017in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-rangePython dependencies by selectively attending over the introduction of identifiers as determinedby examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al.,2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-rangedependencies and free form generation to deal with local phenomena, based on the current context.Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python codecrawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-rangedependencies for code suggestion of this dynamic programming language efficiently, and (iii) Weprovide a qualitative analysis demonstrating that this model is indeed able to learn such long-rangedependencies.2 M ETHODSWe first revisit neural language models, before briefly describing how to extend such a languagemodel with an attention mechanism. Then we introduce a sparse attention mechanism for a pointernetwork that can exploit the Python abstract syntax tree of the current context for code suggestion.2.1 N EURAL LANGUAGE MODELCode suggestion can be approached by a language model that measures the probability of observinga sequence of tokens in a Python program. 
For example, for the sequence S=a1; :::; aN, the jointprobability of Sfactorizes according toP(S) =P(a1)NYt=2P(atjat1; :::; a 1) (1)where the parameters are estimated from a training corpus. Given a sequence of Python tokens, weseek to predict the next Mtokensat+1; :::; at+Mthat maximize Equation 1arg maxat+1;:::;a t+MP(a1; :::; at; at+1; :::; at+M): (2)In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) andLong Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language modelestimates the probabilities in Equation 1 using the output vector of an LSTM at time step t(denotedhthere) according toP(at=jat1; :::; a 1) =exp (vTht+b)P0exp (vT0ht+b0)(3)wherevis a parameter vector associated with token in the vocabulary.Neural language models can, in theory, capture long-term dependencies in token sequences throughtheir internal memory. However, as this internal memory has fixed dimension and can be updated atevery time step, such models often only capture local phenomena. In contrast, we are interested invery long-range dependencies like referring to a function identifier introduced many tokens in thepast. For example, a function identifier may be introduced at the top of a file and only used nearthe bottom. In the following, we investigate various external memory architectures for neural codesuggestion.2.2 A TTENTIONA straight-forward approach to capturing long-range dependencies is to use a neural attention mech-anism (Bahdanau et al., 2014) on the previous Koutput vectors of the language model. Attentionmechanisms have been successfully applied to sequence-to-sequence tasks such as machine transla-tion (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyalset al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rockt ̈aschelet al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previousoutput vectors. Recently, these mechanisms were applied to language modelling by Cheng et al.(2016) and Tran et al. (2016).2Under review as a conference paper at ICLR 2017Formally, an attention mechanism with a fixed memory Mt2RkKofKvectorsmi2Rkfori2[1;K], produces an attention distribution t2RKand context vector ct2Rkat each timesteptaccording to Equations 4 to 7. Furthermore, WM;Wh2Rkkandw2Rkare trainableparameters. Finally, note that 1Krepresents a K-dimensional vector of ones.Mt= [m1:::mK] 2RkK(4)Gt=tanh(WMMt+1TK(Whht)) 2RkK(5)t=softmax (wTGt) 2R1K(6)ct=MtTt 2Rk(7)For language modeling, we populate Mtwith a fixed window of the previous KLSTM outputvectors. To obtain a distribution over the next token we combine the context vector ctof the attentionmechanism with the output vector htof the LSTM using a trainable projection matrix WA2Rk2k.The resulting final output vector nt2Rkencodes the next-word distribution and is projected to thesize of the vocabulary jVj. Subsequently, we apply a softmax to arrive at a probability distributionyt2RjVjover the next token. This process is presented in Equation 9 where WV2RjVjkandbV2RjVjare trainable parameters.nt=tanhWAhtct2Rk(8)yt= softmax(WVnt+bV) 2RjVj(9)The problem of the attention mechanism above is that it quickly becomes computationally expensivefor largeK. Moreover, attending over many memories can make training hard as a lot of noise isintroduced in early stages of optimization where the LSTM outputs (and thus the memory Mt) aremore or less random. 
To alleviate these problems we now turn to pointer networks and a simpleheuristic for populating Mtthat permits the efficient retrieval of identifiers in a large history ofPython code.2.3 S PARSE POINTER NETWORKWe develop an attention mechanism that provides a filtered view of a large history of Python tokens.At any given time step, the memory consists of context representations of the previous Kidentifiersintroduced in the history. This allows us to model long-range dependencies found in identifier usage.For instance, a class identifier may be declared hundreds of lines of code before it is used. Given ahistory of Python tokens, we obtain a next-word distribution from a weighed average of the sparsepointer network for identifier reference and a standard neural language model. The weighting of thetwo is determined by a controller.Formally, at time-step t, the sparse pointer network operates on a memory Mt2RkKof only theKprevious identifier representations ( e.g.function identifiers, class identifiers and so on). In addition,we maintain a vector mt= [id1; :::; idK]2NKof symbol ids for these identifier representations(i.e.pointers into the large global vocabulary).As before, we calculate a context vector ctusing the attention mechanism (Equation 7), but on amemoryMtonly containing representations of identifiers that were declared in the history. Next, weobtain a pseudo-sparse distribution over the global vocabulary fromst[i] =t[j]ifmt[j] =iC otherwise(10)it=softmax (st) 2RjVj(11)whereCis a large negative constant ( e.g.1000 ). In addition, we calculate a next-word distributionfrom a standard neural language modelyt=softmax (WVht+bV) 2RjVj(12)3Under review as a conference paper at ICLR 2017Figure 1: Sparse pointer network for code suggestion on a Python code snippet, showing the next-word distributions of the language model and identifier attention and their weighted combinationthroughand we use a controller to calculate a distribution t2R2over the language model and pointernetwork for the final weighted next-word distribution ytviaht="htxtct#2R3k(13)t=softmax (Wht+b) 2R2(14)yt= [ytit]t 2RjVj(15)Here,xtis the representation of the input token, and W2R23kandb2R2a trainableweight matrix and bias respectively. This controller is conditioned on the input, output and contextrepresentations. This means for deciding whether to refer to an identifier or generate from the globalvocabulary, the controller has access to information from the encoded next-word distribution htofthe standard neural language model, as well as the attention-weighted identifier representations ctfrom the current history.Figure 1 overviews this process. In it, the identifier base_path appears twice, once as an argumentto a function and once as a member of a class (denoted by *). Each appearance has a different idin the vocabulary and obtains a different probability from the model. In the example, the modelcorrectly chooses to refer to the member of the class instead of the out-of-scope function argument,although, from a user point-of-view, the suggestion would be the same in both cases.3 L ARGE -SCALE PYTHON CORPUSPrevious work on code suggestion either focused on statically-typed languages (particularly Java)or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of thedynamic programming language Python. According to the programming language popularity websitePypl (Carbonnelle, 2016), Python is the second most popular language after Java. 
It is also the3rd most common language in terms of number of repositories on the open-source code repositoryGitHub, after JavaScript and Java (Zapponi, 2016).We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like thiscorpus to only contain high-quality Python code, as our language model learns to suggest code fromhow users write code. However, it is difficult to automatically assess what constitutes high-qualitycode. Thus, we resort to the heuristic that popular code projects tend to be of good quality, There are4Under review as a conference paper at ICLR 2017Table 1: Python corpus statistics.Dataset #Projects #Files #Lines #Tokens V ocabulary SizeTrain 489 118 298 26 868 583 88 935 698 2 323 819Dev 179 26 466 5 804 826 18 147 341Test 281 43 062 8 398 100 30 178 356Total 949 187 826 41 071 509 137 261 395Figure 2: Example of the Python code normalization. Original file on the left and normalized versionon the right.two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) andforks (copies of a repository that allow users to freely experiment with changes without affecting theoriginal repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we selectPython projects with more than 100stars, sort by the number of forks descending, and take the top1000 projects. We then removed projects that did not compile with Python3, leaving us with 949projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpusstatistics.3.1 N ORMALIZATION OF IDENTIFIERSUnsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improvegeneralization, we normalize identifiers before feeding the resulting token stream to our models.That is, we replace every identifier name with an anonymous identifier indicating the identifier group(class, variable, argument, attribute or function) concatenated with a random number that makesthe identifier unique in its scope. Note that we only replace novel identifiers defined within a file.Identifier references to external APIs and libraries are left untouched. Consistent with previous corpuscreation for code suggestion ( e.g.Khanh Dam et al., 2016; White et al., 2015), we replace numericalconstant tokens with $NUM$ , remove comments, reformat the code, and replace tokens appearingless than five times with an $OOV$ (out of vocabulary) token.4 E XPERIMENTSAlthough previous work by White et al. (2015) already established that a simple neural languagemodel outperforms an n-gram model for code suggestion, we include a number of n-gram baselinesto confirm this observation. Specifically, we use n-gram models for n2f3;4;5;6gwith ModifiedKneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig,2012).We train the sparse pointer network using mini-batch SGD with a batch size of 30and truncatedbackpropagation through time (Werbos, 1990) with a history of 20identifier representations. 
We use5Under review as a conference paper at ICLR 2017Table 2: Perplexity (PP), Accuracy (Acc) and Accuarcy among top 5 predictions (Acc@5).Model Train PP Dev PP Test PP Acc [%] Acc@5 [%]All IDs Other All IDs Other3-gram 12.90 24.19 26.90 13.19 – – 50.81 – –4-gram 7.60 21.07 23.85 13.68 – – 51.26 – –5-gram 4.52 19.33 21.22 13.90 – – 51.49 – –6-gram 3.37 18.73 20.17 14.51 – – 51.76 – –LSTM 9.29 13.08 14.01 57.91 2.1 62.8 76.30 4.5 82.6LSTM w/ Attention 20 7.30 11.07 11.74 61.30 21.4 64.8 79.32 29.9 83.7LSTM w/ Attention 50 7.09 9.83 10.05 63.21 30.2 65.3 81.69 41.3 84.1Sparse Pointer Network 6.41 9.40 9.18 62.97 27.3 64.9 82.62 43.6 84.5an initial learning rate of 0:7and decay it by 0:9after every epoch. As additional baselines, we testa neural language model with LSTM units with and without attention. For the attention languagemodels, we experiment with a fixed-window attention memory of the previous 20and50tokensrespectively, and a batch size of 75. We found during testing that the baseline models performedworse with the same batch size as the sparse pointer network of 30. We therefore chose to report thestronger results obtained with a batch size of 75.All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained usingcross-entropy loss. While processing a Python source code file, the last recurrent state of the RNNis fed as the initial state of the subsequent sequence of the same file and reset between files. Allmodels use an input and hidden size of 200, an LSTM forget gate bias of 1(Jozefowicz et al., 2015),gradient norm clipping of 5(Pascanu et al., 2013), and randomly initialized parameters in the interval(0:05;0:05). As regularizer, we use a dropout of 0:1on the input representations. Furthermore, weuse a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample sizeof 1000.5 R ESULTSWe evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and thetop five predictions (Acc@5). The results are summarized in Table 2.We can confirm that for code suggestion neural models outperform n-gram language models by alarge margin. Furthermore, adding attention improves the results substantially ( 2:3lower perplexityand3:4percentage points increased accuracy). Interestingly, this increase can be attributed to asuperior prediction of identifiers, which increased from an accuracy of 2:1%to21:4%. An LSTMwith an attention window of 50gives us the best accuracy for the top prediction. We achieve furtherimprovements for perplexity and accuracy of the top five predictions by using a sparse pointer networkthat uses a smaller memory of the past 20identifier representations.5.1 Q UALITATIVE ANALYSISFigures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baselineis uncertain about the next token, we get a sensible prediction by using attention or the sparse pointernetwork. The sparse pointer network provides more reasonable alternative suggestions beyond thecorrect top suggestion.Figures 3e-h show the use-case referring to a class attribute declared 67tokens in the past. Onlythe Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3idemonstrate that this model distinguished attributes from other groups of identifiers. We give a fullexample of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.6Under review as a conference paper at ICLR 2017(a) Code snippet for referencingvariable.(b) LSTM Model. 
[Figure 3: Code suggestion example involving a reference to a variable (a-d), a long-range dependency (e-h), and the attention weights of the Sparse Pointer Network (i). Panels: (a) code snippet for referencing a variable; (b) LSTM model; (c) LSTM w/ Attention 50; (d) Sparse Pointer Network; (e) code snippet for referencing a class member; (f) LSTM model; (g) LSTM w/ Attention 50; (h) Sparse Pointer Network; (i) Sparse Pointer Network attention over memory of identifier representations.]

6 RELATED WORK
Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012) who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.'s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.'s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.

While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.'s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.'s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.

The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016) who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al. who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.

An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs) which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.
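Both the latent code predictor of Ling et al., discussed next, and the sparse pointer network combine a generation distribution with a copy distribution. The following is a generic sketch of such a gated mixture, not the exact formulation of either model; all names are ours, and the gate would in practice be produced by a learned controller.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def mix_generate_and_copy(vocab_logits, pointer_logits, memory_ids, gate):
        """Mix an LM distribution over the vocabulary [V] with a pointer
        distribution over M memory slots holding past identifiers; gate in
        [0, 1] is the probability of copying, and memory_ids maps each
        memory slot to its vocabulary id."""
        p_vocab = softmax(vocab_logits)
        p_pointer = softmax(pointer_logits)
        p = (1.0 - gate) * p_vocab
        np.add.at(p, memory_ids, gate * p_pointer)  # scatter-add copy mass
        return p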
Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short, whereas code suggestion requires capturing very long-range dependencies that we addressed by a filtered view on the memory of previous identifier representations.

7 CONCLUSIONS AND FUTURE WORK
In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and the code for our models are released at https://github.com/uclmr/pycodesuggest .

The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).

ACKNOWLEDGMENTS
This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award. | BJgR3jbEg | An attention mechanism that isn't learned | 5: Marginally below acceptance threshold | This paper takes a standard auto-regressive model of source code and augments it with a fixed attention policy that tracks the use of certain token types, like identifiers. Additionally they release a Python open source dataset. As expected, this augmentation, the fixed attention policy, improves the perplexity of the model. It seems important to dig a bit deeper into these results and show the contribution of different token types to the achieved perplexity. This is alluded to in the text, but a more thorough comparison would be welcome. The idea of an attention policy that takes advantage of expert knowledge is a nice contribution, but perhaps of limited novelty --- for example the Maddison and Tarlow 2014 paper, which the authors cite, has scoping rules that track previously used identifiers in scope. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkJsCIcgl | ICLR.cc/2017/conference | 2017 | The Predictron: End-To-End Learning and Planning | ["David Silver", "Hado van Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel Dulac-Arnold", "David Reichert", "Neil Rabinowitz", "Andre Barreto", "Thomas Degris"] | One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths.
The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function, thereby focusing the model upon the aspects of the environment most relevant to planning. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures. | ["Deep learning", "Reinforcement Learning", "Supervised Learning", "Semi-Supervised Learning"] | ABSTRACT
One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.

1 INTRODUCTION
The central idea of model-based reinforcement learning is to decompose the RL problem into two subproblems: learning a model of the environment, and then planning with this model. The model is typically represented by a Markov reward process (MRP) or decision process (MDP). The planning component uses this model to evaluate and select among possible strategies. This is typically achieved by rolling forward the model to construct a value function that estimates cumulative reward. In prior work, the model is trained essentially independently of its use within the planner. As a result, the model is not well-matched with the overall objective of the agent. Prior deep reinforcement learning methods have successfully constructed models that can unroll near pixel-perfect reconstructions (Oh et al., 2015; Chiappa et al., 2016); but are yet to surpass state-of-the-art model-free methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrap et al., 2016).

In this paper we introduce a new architecture, which we call the predictron, that integrates learning and planning into one end-to-end training procedure. At every step, a model is applied to an internal state, to produce a next state, reward, discount, and value estimate. This model is completely abstract and its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game, an agent must be able to predict the score. If our model makes accurate predictions, then an optimal plan with respect to our model will also be an optimal plan for the underlying game, even if that model uses a different state space (e.g., an abstract representation of enemy positions, ignoring their shapes and colours), action space (e.g., a high-level action to move away from an enemy), rewards (e.g., a single abstract step could have a higher value than any real reward), or even time-step (e.g., a single abstract step could "jump" the agent to the end of a corridor). All we require is that trajectories through the abstract model produce scores that are consistent with trajectories through the real environment.
This is achieved by training the predictron end-to-end, so as to make its value estimates as accurate as possible.

An ideal model could generalise to many different prediction tasks, rather than overfitting to a single task; and could learn from a rich variety of feedback signals, not just a single extrinsic reward. We therefore train the predictron to predict a host of different value functions for a variety of pseudo-reward functions and discount factors. These pseudo-rewards can encode any event or aspect of the environment that the agent may care about, e.g., staying alive or reaching the next room.

We focus upon the prediction task: estimating value functions in MRP environments with uncontrolled dynamics. In this case, the predictron can be implemented as a deep neural network with an MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewards into an overall estimate of value.

We applied the predictron to procedurally generated random mazes, and a simulated pool domain, directly from pixel inputs. In both cases, the predictron significantly outperformed model-free algorithms with conventional deep network architectures; and was much more robust to architectural choices such as depth.

2 BACKGROUND
We consider environments defined by an MRP with states $s \in \mathcal{S}$. The MRP is defined by a function, $s', r, \gamma = p(s, \xi)$, where $s'$ is the next state, $r$ is the reward, and $\gamma$ is the discount factor, which can for instance represent the non-termination probability for this transition. The process may be stochastic, given IID noise $\xi$.

The return of an MRP is the cumulative discounted reward over a single trajectory, $g_t = r_{t+1} + \gamma_{t+1} r_{t+2} + \gamma_{t+1}\gamma_{t+2} r_{t+3} + \ldots$, where $\gamma_t$ can vary per time-step. We consider a generalisation of the MRP setting that includes vector-valued rewards, diagonal-matrix discounts, and vector-valued returns; definitions are otherwise identical to the above. The original bold-font vector notation closely matches the more familiar scalar MRP case; the majority of the paper can be comfortably understood by reading all rewards as scalars, and all discount factors as scalar and constant, i.e., $\gamma_t = \gamma$.

The value function of an MRP $p$ is the expected return from state $s$, $v_p(s) = \mathbb{E}_p[g_t \mid s_t = s]$. In the vector case, these are known as general value functions (Sutton et al., 2011). We will say that a (general) value function $v(\cdot)$ is consistent with environment $p$ if and only if $v = v_p$, which satisfies the following Bellman equation (Bellman, 1957),

$v_p(s) = \mathbb{E}_p[r + \gamma v_p(s') \mid s]$.   (1)

In model-based reinforcement learning (Sutton and Barto, 1998), an approximation $m \approx p$ to the environment is learned. In the uncontrolled setting this model is normally an MRP $s', r, \gamma = m(s, \xi)$ that maps from state $s$ to subsequent state $s'$ and additionally outputs rewards $r$ and discounts $\gamma$; the model may be stochastic given an IID source of noise $\xi$. A (general) value function $v_m(\cdot)$ is consistent with model $m$ (or valid (Sutton, 1995)), if and only if it satisfies a Bellman equation $v_m(s) = \mathbb{E}_m[r + \gamma v_m(s') \mid s]$ with respect to model $m$. Conventionally, model-based RL methods focus on finding a value function $v$ that is consistent with a separately learned model $m$.

3 PREDICTRON ARCHITECTURE
The predictron is composed of four main components. First, a state representation $\mathbf{s} = f(s)$ that encodes raw input $s$ (this could be a history of observations, in the partially observed setting, for example when $f$ is a recurrent network) into an internal (abstract, hidden) state $\mathbf{s}$.
Second, a model $\mathbf{s}', r, \gamma = m(\mathbf{s}, \xi)$ that maps from internal state $\mathbf{s}$ to subsequent internal state $\mathbf{s}'$, internal rewards $r$, and internal discounts $\gamma$. Third, a value function $v$ that outputs internal values $v = v(\mathbf{s})$ representing the future, internal return from internal state $\mathbf{s}$ onwards. The predictron is applied by unrolling its model $m$ multiple "planning" steps to produce internal rewards, discounts and values. We use superscripts $k$ to indicate internal steps of the model (which have no necessary connection to time steps $t$ of the environment). Finally, these internal rewards, discounts and values are combined together by an accumulator into an overall estimate of value $g$. The whole predictron, from input state $s$ to output $g$, may be viewed as a value function approximator for external targets (i.e. the returns in the real environment). We consider both $k$-step and $\lambda$-weighted accumulators.

The $k$-step predictron rolls its internal model forward $k$ steps. Specifically, the $k$-step predictron return $g^k$ (henceforth abbreviated as preturn) is the internal return obtained by accumulating $k$ model steps, plus a final value $v^k$ from the $k$th step,

$g^k = r^1 + \gamma^1 (r^2 + \gamma^2 (\ldots (r^{k-1} + \gamma^{k-1} (r^k + \gamma^k v^k)) \ldots ))$.   (2)

The 0-step preturn is simply the first value $g^0 = v^0$. The 1-step preturn is $g^1 = r^1 + \gamma^1 v^1$, and so on (see Fig. 1a).

[Figure 1: a) The k-step predictron architecture. The first three columns illustrate 0, 1 and 2-step pathways through the predictron. The 0-step preturn reduces to standard model-free value function approximation; other preturns "imagine" additional steps with an internal model. Each pathway outputs a k-step preturn $g^k$ that accumulates discounted rewards along with a final value estimate. In practice all k-step preturns are computed in a single forward pass. b) The $\lambda$-predictron architecture. The $\lambda$-parameters gate between the different preturns. The output is a $\lambda$-preturn $g^\lambda$ that is a mixture over the k-step preturns. For example, if $\lambda^0 = 1, \lambda^1 = 1, \lambda^2 = 0$ then we recover the 2-step preturn, $g^\lambda = g^2$. Discount factors $\gamma^k$ and $\lambda$-parameters $\lambda^k$ are dependent on state $\mathbf{s}^k$; this dependence is not shown in the figure.]

The $\lambda$-predictron combines together many $k$-step preturns. Specifically, it computes a diagonal weight matrix $\lambda^k$ from each internal state $\mathbf{s}^k$. The accumulator uses weights $\lambda^0, \ldots, \lambda^K$ to aggregate over $k$-step preturns $g^0, \ldots, g^K$ and output a combined value that we call the $\lambda$-preturn $g^\lambda$,

$g^\lambda = \sum_{k=0}^{K} w^k g^k$, where $w^k = (1 - \lambda^k) \prod_{j=0}^{k-1} \lambda^j$ if $k < K$, and $w^K = \prod_{j=0}^{K-1} \lambda^j$ otherwise,   (3)

where $1$ is the identity matrix. This $\lambda$-preturn is analogous to the $\lambda$-return in the forward-view TD($\lambda$) algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backward accumulation through intermediate steps $g^{k,\lambda}$,

$g^{k,\lambda} = (1 - \lambda^k) v^k + \lambda^k \left( r^{k+1} + \gamma^{k+1} g^{k+1,\lambda} \right)$,   (4)

where $g^{K,\lambda} = v^K$, and then using $g^\lambda = g^{0,\lambda}$. Computation in the $\lambda$-predictron operates in a sweep, iterating first through the model from $k = 0 \ldots K$ and then back through the accumulator from $k = K \ldots 0$ in a single "forward" pass of the network (see Figure 1b). Each $\lambda^k$ weight acts as a gate on the computation of the $\lambda$-preturn: a value of $\lambda^k = 0$ will truncate the $\lambda$-preturn at layer $k$, while a value of $\lambda^k = 1$ will utilise deeper layers based on additional steps of the model $m$; the final weight is always $\lambda^K = 0$. The individual $\lambda^k$ weights may depend on the corresponding abstract state $\mathbf{s}^k$ and can differ per prediction.
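For concreteness, Equations (2)-(4) can be written out in a few lines; the sketch below (scalar case, function name and argument layout are ours) implements the backward accumulation of Equation (4):

    def lambda_preturn(rewards, discounts, values, lambdas):
        """Backward accumulation of the lambda-preturn (Equation 4), scalar
        case. values[k] is v^k for k = 0..K; rewards[k] and discounts[k] are
        r^{k+1} and gamma^{k+1}; lambdas[k] is lambda^k for k = 0..K-1. The
        final weight lambda^K = 0 is implicit in initialising with v^K."""
        K = len(rewards)
        g = values[K]  # g^{K,lambda} = v^K
        for k in reversed(range(K)):
            g = ((1.0 - lambdas[k]) * values[k]
                 + lambdas[k] * (rewards[k] + discounts[k] * g))
        return g  # g^lambda = g^{0,lambda}

Setting lambdas = [1, 1, 0, ...] reproduces the 2-step preturn of the Figure 1 example, while lambdas[0] = 0 truncates immediately to the model-free estimate v^0.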
This state-dependence enables the predictron to compute to an adaptive depth (Graves, 2016), depending on the internal state and learning dynamics of the network.

4 PREDICTRON LEARNING UPDATES
We first consider updates that optimise the joint parameters $\theta$ of the state representation, model, and value function. We begin with the $k$-step predictron. We update the $k$-step predictron $g^k$ towards a target outcome $g$, such as the Monte-Carlo return from the real environment, by minimising a mean-squared error loss,

$L^k = \frac{1}{2} \left\| \mathbb{E}_p[g \mid s] - \mathbb{E}_m[g^k \mid s] \right\|^2$, with gradient $\frac{\partial l^k}{\partial \theta} = -(g - g^k) \frac{\partial g^k}{\partial \theta}$,   (5)

where $l^k = \frac{1}{2} \| g - g^k \|^2$ is the sample loss. We can use the gradient of the sample loss to update parameters, e.g. by stochastic gradient descent. For stochastic models, two independent samples are required for $g^k$ and $\frac{\partial g^k}{\partial \theta}$ to get unbiased samples for the gradient of $L^k$.

The $\lambda$-predictron combines together many $k$-step preturns. To update the joint parameters $\theta$, we can uniformly average the losses on the individual preturns $g^k$,

$L^{0:K} = \frac{1}{2K} \sum_{k=0}^{K} \left\| \mathbb{E}_p[g \mid s] - \mathbb{E}_m[g^k \mid s] \right\|^2$, with gradient $\frac{\partial l^{0:K}}{\partial \theta} = -\frac{1}{K} \sum_{k=0}^{K} (g - g^k) \frac{\partial g^k}{\partial \theta}$.   (6)

Alternatively, we could weight each loss by the usage $w^k$ of the corresponding preturn, such that the gradient is $-\sum_{k=0}^{K} w^k (g - g^k) \frac{\partial g^k}{\partial \theta}$.

The $\lambda$-predictron uses an accumulator with additional parameters $\eta$ that determine the relative weighting of the $k$-step preturns. These weights are also updated so as to minimise a mean-squared error loss $L^\lambda$,

$L^\lambda = \frac{1}{2} \left\| \mathbb{E}_p[g \mid s] - \mathbb{E}_m[g^\lambda \mid s] \right\|^2$, with gradient $\frac{\partial l^\lambda}{\partial \eta} = -(g - g^\lambda) \frac{\partial g^\lambda}{\partial \eta}$.   (7)

In summary, the joint parameters $\theta$ of the state representation $f$, the model $m$, and the value function $v$ are updated to make each of the $k$-step preturns $g^k$ more similar to the target $g$, and the parameters $\eta$ of the $\lambda$-accumulator are updated to make the aggregate $\lambda$-preturn $g^\lambda$ more similar to the target $g$.

4.1 CONSISTENCY (SEMI-SUPERVISED) LEARNING WITH THE λ-PREDICTRON
Ideally, the predictron $(f, m, v)$ learns preturns that are all equal in expectation to the true value function of the environment, $\mathbb{E}_m[g^k \mid s] = \mathbb{E}_p[g_t \mid s] = v_p(s)$, in which case the preturns must be equal in expectation, $\mathbb{E}_m[g^0 \mid s] = \mathbb{E}_m[g^1 \mid s] = \ldots = \mathbb{E}_m[g^K \mid s]$. In addition, each $k$-step preturn must then be equal in expectation to the $\lambda$-preturn, $\mathbb{E}_m[g^k \mid s] = \mathbb{E}_m[g^\lambda \mid s]$, for any $\lambda$ parameters. All these consistency relations between preturns give rise to additional constraints upon the predictron. Specifically, we may adjust the parameters of the predictron to lead to consistent preturns, even in the absence of labelled targets.

Concretely, we can adjust each preturn $g^k$ towards the $\lambda$-preturn $g^\lambda$; in other words, we can update each individual value estimate towards the best aggregated estimate by minimizing

$L = \frac{1}{2} \sum_{k=0}^{K} \left\| \mathbb{E}_m[g^\lambda \mid s] - \mathbb{E}_m[g^k \mid s] \right\|^2$, with gradient $\frac{\partial l}{\partial \theta} = -\sum_{k=0}^{K} (g^\lambda - g^k) \frac{\partial g^k}{\partial \theta}$.   (8)

Here $g^\lambda$ is considered fixed; the parameters $\theta$ are only updated to make $g^k$ more similar to $g^\lambda$, not vice versa. This consistency update does not require any labels $g$ or samples from the environment. As a result, it can be applied to (potentially hypothetical) states that have no associated 'real' (e.g. Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Note the similarity with the semi-supervised setting, where we may have unlabelled inputs.
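In code, one consistency update of Equation (8) looks as follows. This is a sketch under stated assumptions: preturns_fn and grads_fn are hypothetical hooks standing in for the network's forward pass (returning the preturns g^0..g^K and the aggregate g^lambda) and for the Jacobians dg^k/dtheta; neither is part of any released implementation.

    import numpy as np

    def consistency_update(preturns_fn, grads_fn, theta, x, alpha=0.01):
        """One gradient step on the consistency loss (Equation 8) for an
        unlabelled input x: every preturn g^k is moved towards the aggregate
        g^lambda, which is treated as a fixed target (no gradient through it)."""
        g_k, g_lam = preturns_fn(theta, x)  # [K+1] preturns, scalar g^lambda
        jac = grads_fn(theta, x)            # [K+1, dim(theta)] Jacobian dg^k/dtheta
        grad = -np.sum((g_lam - g_k)[:, None] * jac, axis=0)
        return theta - alpha * grad

Because no label appears anywhere in the update, it can be applied to unlabelled (or purely hypothetical) states, which is exploited in Section 5.3 below.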
5 EXPERIMENTS
We conducted experiments on two domains. The first domain consists of randomly generated 20x20 mazes in which each location either is empty or contains a wall. Two locations in a maze are considered connected if they are both empty and we can reach one from the other by moving horizontally or vertically through adjacent empty cells. The goal is to predict, for each of the locations on the diagonal from top-left to bottom-right of the maze, whether the bottom-right corner is connected to that location, given the entire maze as an input image. Some of these predictions will be straightforward, for instance for locations on the diagonal that contain a wall themselves and for locations close to the bottom right. Many other predictive questions seem to require a simple algorithm, such as some form of a flood fill or search; our hypothesis is that an internal model can learn to emulate such algorithms, where naive approximation may struggle. A few example mazes are shown in Figure 2.

Our second domain is a simulation of the game of pool, using four balls and four pockets. The simulator is implemented in the physics engine Mujoco (Todorov et al., 2012). We generate sequences of RGB frames starting from a random arrangement of balls on the table. The goal is to simultaneously learn to predict future events for each of the four balls, given 5 RGB frames as input. These events include: collision with any other ball, collision with any boundary of the table, entering a quadrant (x4, for each quadrant), being located in a quadrant (x4, for each quadrant), and entering a pocket (x4, for each pocket). Each of these 14 x 4 events provides a binary pseudo-reward that we combine with 5 different discount factors {0, 0.5, 0.9, 0.98, 1} and predict their cumulative discounted sum over various time spans. This yields a total of 280 general value functions. An example trajectory is shown in Figure 2. In both domains, inputs are presented as minibatches of i.i.d. samples with their regression targets. Additional domain details are provided in Appendix E.

[Figure 2: Left: Two sample mazes from the random-maze domain. Light blue cells are empty, darker blue cells contain a wall. One maze is connected from top-left to bottom-right (indicated in black), the other is not. Right: An example trajectory in the pool domain (before downsampling). It was selected by maximising the prediction of pocketing balls, using the predictron.]

5.1 EXPLORING THE PREDICTRON ARCHITECTURE
Our first set of experiments examines three binary dimensions that differentiate the predictron from standard deep networks. We compare eight predictron variants corresponding to the corners of the cube on the left in Figure 3.

[Figure 3: Exploring predictron variants. Aggregated prediction errors over all predictions (20 for mazes, 280 for pool) for the eight predictron variants corresponding to the cube on the left (as described in the main text), for both random mazes (top) and pool (bottom). Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. The full (r, γ, λ)-prediction (red) consistently performed best.]

The first dimension corresponds to whether or not the predictron architecture utilises the structure of an MRP model. In the MRP case, labelled r, γ, internal rewards and discounts are both learned.
In the non-r,γ case, which corresponds to a vanilla hidden-to-hidden neural network module, internal rewards and discounts are ignored by fixing their values to $r^k = 0$ and $\gamma^k = 1$.

The second dimension is whether a K-step accumulator or λ-accumulator is used to aggregate over preturns. When a λ-accumulator is used, a λ-preturn is computed as described in Section 3. Otherwise, intermediate preturns are ignored by fixing their values to $\lambda^k = 1$ for $k < K$. In this case, the overall output of the predictron is simply the maximum-depth preturn $g^K$.

The third dimension, labelled usage weighting, defines the loss that is used to update the parameters $\theta$. On this dimension, we consider two options: the preturn losses can either be weighted uniformly (see Equation 6), or the update for each preturn $g^k$ can be weighted according to the weight $w^k$ that determines how much it is used in the λ-predictron's overall output. We call the latter loss 'usage weighted'. Note that for architectures without a λ-accumulator, $w^k = 0$ for $k < K$, and $w^K = 1$, thus usage weighting then implies backpropagating only the loss on the final preturn $g^K$.

All variants utilise a convolutional core with 2 intermediate hidden layers (see Appendix A); parameters were updated by supervised learning (see Appendix B for more details). Root mean squared prediction errors for each architecture, aggregated over all predictions, are shown in Figure 3. The top row corresponds to the random mazes and the bottom row to the pool domain. The main conclusion is that learning an MRP model improved performance greatly. The inclusion of λ weights helped as well, especially on pool. Usage weighting further improved performance.

5.2 COMPARING THE PREDICTRON TO OTHER DEEP NETWORKS
Our second set of experiments compares the predictron to feedforward and recurrent deep learning architectures, with and without skip connections. We compare the corners of a new cube, as depicted on the left in Figure 4, based on three different binary dimensions.

[Figure 4: Comparing predictron to baselines. Aggregated prediction errors on random mazes (top) and pool (bottom) over all predictions for the eight architectures corresponding to the cube on the left. Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. The full (r, γ, λ)-predictron (red) consistently outperformed conventional deep network architectures (black), with and without skips and with and without weight sharing.]

The first dimension of this second cube is whether we use a predictron, or a (non-λ, non-r,γ) deep network that does not have an internal model and does not output or learn from intermediate predictions. We use the most effective predictron from the previous section, i.e., the (r, γ, λ)-predictron with usage weighting.

The second dimension is whether weights are shared between all cores (as in a recurrent network), or whether each core uses separate weights (as in a feedforward network). We note that the non-λ, non-r,γ variants of the predictron then correspond to standard (convolutional) feedforward and (unrolled) recurrent neural networks respectively.

The third dimension is whether we include skip connections.
This is equivalent to defining the model step to output a change to the current state, $\Delta \mathbf{s}$, and then defining $\mathbf{s}^{k+1} = h(\mathbf{s}^k + \Delta \mathbf{s}^k)$, where $h$ is the non-linear function, in our case a ReLU, $h(x) = \max(0, x)$. The deep network with skip connections is a variant of ResNet (He et al., 2015).

Root mean squared prediction errors for each architecture are shown in Figure 4. All (r, γ, λ)-predictrons (red lines) outperformed the corresponding feedforward or recurrent neural network baselines (black lines) both in the random mazes and in pool. We also investigated the effect of changing the depth of the networks (see Appendix C). The predictron outperformed the corresponding feedforward or recurrent baselines for all depths, with and without skip connections.

5.3 SEMI-SUPERVISED LEARNING BY CONSISTENCY
We now consider how to use the predictron for semi-supervised learning, training the model on a combination of labelled and unlabelled random mazes. Semi-supervised learning is important because a common bottleneck in applying machine learning in the real world is the difficulty of collecting labelled data, whereas often large quantities of unlabelled data exist.

We trained a full (r, γ, λ)-predictron by alternating standard supervised updates with consistency updates, obtained by stochastically minimizing the consistency loss (8), on the unlabelled samples. For each supervised update we apply either 0, 1, or 9 consistency updates. Figure 5 shows that the performance improved monotonically with the number of consistency updates, measured as a function of the number of labelled samples consumed.

[Figure 5: Semi-supervised learning. Prediction errors of the (r, γ, λ)-predictrons (shared core, no skips) using 0, 1, or 9 consistency updates for every update with labelled data, plotted as function of the number of labels consumed. Learning performance improves with more consistency updates.]

5.4 ANALYSIS OF ADAPTIVE DEPTH
In principle, the predictron can adapt its depth to 'think more' about some predictions than others, perhaps depending on the complexity of the underlying target. We investigate this by looking at qualitatively different prediction types in pool: ball collisions, rail collisions, pocketing balls, and entering or staying in quadrants. For each prediction type we consider several different time-spans (determined by the real-world discount factors associated with each pseudo-reward). Figure 6 shows distributions of depth for each type of prediction. The 'depth' of a predictron is here defined as the effective number of model steps. If the predictron relies fully on the very first value (i.e., $\lambda^0 = 0$), this counts as 0 steps. If, instead, it learns to place equal weight on all rewards and on the final value, this counts as 16 steps. Concretely, the depth $d$ can be defined recursively as $d = d^0$ where $d^k = \lambda^k (1 + \gamma^k d^{k+1})$ and $d^K = 0$. Note that even for the same input state, each prediction has a separate depth.

The depth distributions exhibit three properties. First, different types of predictions used different depths. Second, depth was correlated with the real-world discount for the first four prediction types. Third, the distributions are not strongly peaked, which implies that the depth can differ per input even for a single real-world discount and prediction type.
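The depth recursion above is simple to evaluate; a scalar sketch (function name ours):

    def effective_depth(lambdas, discounts):
        """d = d^0 with d^k = lambda^k * (1 + gamma^k * d^{k+1}) and d^K = 0;
        lambdas[k] and discounts[k] hold lambda^k and gamma^k for k = 0..K-1."""
        d = 0.0
        for lam, gamma in zip(reversed(lambdas), reversed(discounts)):
            d = lam * (1.0 + gamma * d)
        return d

For example, effective_depth([1.0] * 16, [1.0] * 16) gives the maximal depth 16.0, while any prediction with lambda^0 = 0 has depth 0 regardless of the deeper weights.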
In a control experiment (not shown) we used a scalar λ shared among all predictions, which reduced performance in all scenarios, indicating that the heterogeneous depth is a valuable form of flexibility.

[Figure 6: Thinking depth. Distributions of thinking depth on pool for different types of predictions (panels: collision, rails, enter, pocket, stay) and for different real-world discounts.]

5.5 VISUALIZING THE PREDICTIONS IN THE POOL DOMAIN
We test the quality of the predictions in the pool domain to evaluate whether they are well-suited to making decisions. For each sampled pool position, we consider a set $I$ of different initial conditions (different angles and velocity of the white ball), and ask which is more likely to lead to pocketing coloured balls. For each initial condition $s \in I$, we apply the (r, γ, λ)-predictron (shared cores, 16 model steps, no skip connections) to obtain predictions $g^\lambda$. We sum the predictions that correspond
This may allow the predictron architecture to scale much more effectively indomains that do not have a natural two-dimensional encoding of the state space.The notion of learning about many predictions of the future relates to work on predictive staterepresentations (PSRs; Littman et al., 2001), general value functions (GVFs; Sutton et al., 2011),and nexting (Modayil et al., 2012). Such predictions have been shown to be useful as representa-tions (Schaul and Ring, 2013) and for transfer (Schaul et al., 2015). So far, however, none of thesehave been considered for learning abstract models.Schmidhuber (2015) discusses learning abstract models, but maintains separate losses for the modeland a controller, and suggests training the model unsupervised to compactly encode the entire historyof observations, through predictive coding. The predictron’s abstract model is instead trained end-to-end to obtain accurate values.7 C ONCLUSIONThe predictron is a single differentiable architecture that rolls forward an internal model to estimateexternal values. This internal model may be given both the structure and the semantics of tradi-tional reinforcement learning models. But unlike most approaches to model-based reinforcementlearning, the model is fully abstract: it need not correspond to the real environment in any humanunderstandable fashion, so long as its rolled-forward “plans” accurately predict outcomes in the trueenvironment.The predictron may be viewed as a novel network architecture that incorporates several separableideas. First, the predictron outputs a value by accumulating rewards over a series of internal planningsteps. Second, each forward pass of the predictron outputs values at multiple planning depths. Third,these values may be combined together, also within a single forward pass, to output an overallensemble value. Finally, the different values output by the predictron may be encouraged to beself-consistent with each other, to provide an additional signal during learning. Our experimentsdemonstrate that these differences result in more accurate predictions of value, in reinforcementlearning environments, than more conventional network architectures.We have focused on value prediction tasks in uncontrolled environments. However, these ideas maytransfer to the control setting, for example by using the predictron as a Q-network (Mnih et al.,2015). Even more intriguing is the possibility of learning an internal MDP with abstract internalactions, rather than the MRP considered in this paper. We aim to explore these ideas in future work.8Under review as a conference paper at ICLR 2017 | rJm53ibNx | Review | 9: Top 15% of accepted papers, strong accept | This work proposes a computational structure of function approximator with a strong prior: it is optimized to act as an abstract MRP, capable of learning its own internal state, model, and notion of time-step. Thanks to the incorporation of a \lambda-return style return estimation, it can effectively adapt its own "thinking-depth" on the current input, thus performing some sort of soft iterative inference.
Such a prior, maintained by strong regularization, helps it perform better than similar baselines on some prediction tasks that require some form of sequential reasoning.
The proposed idea is novel, and a very interesting take on forcing internal models upon function approximators which begs for future work. The experimental methodology is complete, showcases the potential of the approach, and nicely analyses the iterative/adaptive thinking depth learned by the model.
As pointed out by my previous comments, the paper reads well but utilizes language that may confuse a reader unfamiliar with the subject. I think some rewording could be done without having much impact on the depth of the paper. In particular, introducing the method as a regularized model pushed to act like an MRP, rather than an actual MRP performing some abstract reasoning, may help confused readers such as myself.
| 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
BkJsCIcgl | ICLR.cc/2017/conference | 2017 | The Predictron: End-To-End Learning and Planning | ["David Silver", "Hado van Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel Dulac-Arnold", "David Reichert", "Neil Rabinowitz", "Andre Barreto", "Thomas Degris"] | One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths.
The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function, thereby focusing the model upon the aspects of the environment most relevant to planning. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures. | ["Deep learning", "Reinforcement Learning", "Supervised Learning", "Semi-Supervised Learning"] | ABSTRACTOne of the key challenges of artificial intelligence is to learn models that are ef-fective in the context of planning. In this document we introduce the predictronarchitecture. The predictron consists of a fully abstract model, represented by aMarkov reward process, that can be rolled forward multiple “imagined” planningsteps. Each forward pass of the predictron accumulates internal rewards and val-ues over multiple planning depths. The predictron is trained end-to-end so as tomake these accumulated values accurately approximate the true value function.We applied the predictron to procedurally generated random mazes and a sim-ulator for the game of pool. The predictron yielded significantly more accuratepredictions than conventional deep neural network architectures.1 I NTRODUCTIONThe central idea of model-based reinforcement learning is to decompose the RL problem into twosubproblems: learning a model of the environment, and then planning with this model. The modelis typically represented by a Markov reward process (MRP) or decision process (MDP). The plan-ning component uses this model to evaluate and select among possible strategies. This is typicallyachieved by rolling forward the model to construct a value function that estimates cumulative re-ward. In prior work, the model is trained essentially independently of its use within the planner.As a result, the model is not well-matched with the overall objective of the agent. Prior deep rein-forcement learning methods have successfully constructed models that can unroll near pixel-perfectreconstructions (Oh et al., 2015; Chiappa et al., 2016); but are yet to surpass state-of-the-art model-free methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrapet al., 2016).In this paper we introduce a new architecture, which we call the predictron , that integrates learningand planning into one end-to-end training procedure. At every step, a model is applied to an internalstate, to produce a next state, reward, discount, and value estimate. This model is completely abstractand its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game,an agent must be able to predict the score. If our model makes accurate predictions, then an optimalplan with respect to our model will also be an optimal plan for the underlying game – even if thatmodel uses a different state space (e.g., an abstract representation of enemy positions, ignoringtheir shapes and colours), action space (e.g., a high-level action to move away from an enemy),rewards (e.g., a single abstract step could have a higher value than any real reward), or even time-step (e.g., a single abstract step could “jump” the agent to the end of a corridor). All we requireis that trajectories through the abstract model produce scores that are consistent with trajectoriesthrough the real environment. 
This is achieved by training the predictron end-to-end, so as to makeits value estimates as accurate as possible.An ideal model could generalise to many different prediction tasks, rather than overfitting to a singletask; and could learn from a rich variety of feedback signals, not just a single extrinsic reward. Wetherefore train the predictron to predict a host of different value functions for a variety of pseudo-reward functions and discount factors. These pseudo-rewards can encode any event or aspect of theenvironment that the agent may care about, e.g., staying alive or reaching the next room.We focus upon the prediction task: estimating value functions in MRP environments with uncon-trolled dynamics. In this case, the predictron can be implemented as a deep neural network with an*Primary contributors1Under review as a conference paper at ICLR 2017MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewardsinto an overall estimate of value.We applied the predictron to procedurally generated random mazes, and a simulated pool domain,directly from pixel inputs. In both cases, the predictron significantly outperformed model-free al-gorithms with conventional deep network architectures; and was much more robust to architecturalchoices such as depth.2 B ACKGROUNDWe consider environments defined by an MRP with states s2S. The MRP is defined by a function,s0;r; =p(s;), wheres0is the next state, ris the reward, and is the discount factor, whichcan for instance represent the non-termination probability for this transition. The process may bestochastic, given IID noise .The return of an MRP is the cumulative discounted reward over a single trajectory, gt=rt+1+t+1rt+2+t+1t+2rt+3+:::, wheretcan vary per time-step. We consider a generalisation of theMRP setting that includes vector-valued rewards r, diagonal-matrix discounts , and vector-valuedreturns g; definitions are otherwise identical to the above. We use this bold font notation to closelymatch the more familiar scalar MRP case; the majority of the paper can be comfortably understoodby reading all rewards as scalars, and all discount factors as scalar and constant, i.e., t=.Thevalue function of an MRPpis the expected return from state s,vp(s) =Ep[gtjst=s]. Inthe vector case, these are known as general value functions (Sutton et al., 2011). We will say that a(general) value function v()isconsistent with environment pif and only if v=vpwhich satisfiesthe following Bellman equation (Bellman, 1957),vp(s) =Ep[r+vp(s0)js]: (1)In model-based reinforcement learning (Sutton and Barto, 1998), an approximation mpto theenvironment is learned. In the uncontrolled setting this model is normally an MRP s0;r;=m(s;)that maps from state sto subsequent state s0and additionally outputs rewards rand discounts ;the model may be stochastic given an IID source of noise . A (general) value function vm()isconsistent with model m(orvalid , (Sutton, 1995)), if and only if it satisfies a Bellman equationvm(s) =Em[r+vm(s0)js]with respect to model m. Conventionally, model-based RL methodsfocus on finding a value function vthat is consistent with a separately learned model m.3 P REDICTRON ARCHITECTUREThe predictron is composed of four main components. First, a state representation s=f(s)thatencodes raw input s(this could be a history of observations, in the partially observed setting, forexample when fis a recurrent network) into an internal (abstract, hidden) state s. 
Second, a models0;r;=m(s;)that maps from internal state sto subsequent internal state s0, internal rewards r,and internal discounts . Third, a value function vthat outputs internal values v=v(s)representingthe future, internal return from internal state sonwards. The predictron is applied by unrolling itsmodelmmultiple “planning” steps to produce internal rewards, discounts and values. We usesuperscriptskto indicate internal steps of the model (which have no necessary connection to timestepstof the environment). Finally, these internal rewards, discounts and values are combinedtogether by an accumulator into an overall estimate of value g. The whole predictron, from inputstatesto output g, may be viewed as a value function approximator for external targets (i.e. thereturns in the real environment). We consider both k-step and-weighted accumulators.Thek-step predictron rolls its internal model forward ksteps. Specifically, the k-step predictronreturn gk(henceforth abbreviated as preturn ) is the internal return obtained by accumulating kmodel steps, plus a final value vkfrom thekth step,gk=r1+1(r2+2(:::(rk1+k1(rk+kvk)):::)): (2)The 0-step preturn is simply the first value g0=v0. The 1-step preturn is g1=r1+1v1, and soon (see Fig. 1a).The-predictron combines together many k-step preturns. Specifically, it computes a diagonalweight matrix kfrom each internal state sk. The accumulator uses weights 0;:::;Kto aggregate2Under review as a conference paper at ICLR 2017a)k-step predictron b)-predictron.........22r22&&... s2 //OO99v2 //+1s2OO//99v212//+11r1%%r11&&... s1 //OO99v1 //+0s1OO//99+0s1OO//99v111//+00r0%%r0%%r00&&s0 //OO99v0 //+s0OO//99+s0OO//99+s0OO//99v010//+sOOg0sOOg1sOOg2sOOgFigure 1: a) The k-step predictron architecture. The first three columns illustrate 0, 1 and 2-steppathways through the predictron. The 0-step preturn reduces to standard model-free value functionapproximation; other preturns “imagine” additional steps with an internal model. Each pathwayoutputs ak-step preturn gkthat accumulates discounted rewards along with a final value estimate. Inpractice allk-step preturns are computed in a single forward pass. b) The -predictron architecture.The-parameters gate between the different preturns. The output is a -preturn gthat is a mixtureover thek-step preturns. For example, if 0=1;1=1;2=0then we recover the 2-step preturn,g=g2. Discount factors kand-parameters kare dependent on state sk; this dependence isnot shown in the figure.overk-step preturns g0;:::;gKand output a combined value that we call the -preturn g,g=KXk=0wkgkwhere wk=8><>:(1k)Qk1j=0jifk<KQK1j=0jotherwise.(3)where 1is the identity matrix. This -preturn is analogous to the -return in the forward-viewTD() algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backwardaccumulation through intermediate steps gk;,gk;= (1k)vk+krk+1+k+1gk+1;; (4)where gK;=vK, and then using g=g0;. Computation in the -predictron operates in a sweep,iterating first through the model from k= 0:::K and then back through the accumulator fromk=K::: 0in a single “forward” pass of the network (see Figure 1b). Each kweight acts as agate on the computation of the -preturn: a value of k=0will truncate the -preturn at layer k,while a value of k=1will utilise deeper layers based on additional steps of the model m; the finalweight is always K=0. The individual kweights may depend on the corresponding abstractstateskand can differ per prediction. 
This enables the predictron to compute to an adaptive depth(Graves, 2016) depending on the internal state and learning dynamics of the network.4 P REDICTRON LEARNING UPDATESWe first consider updates that optimise the joint parameters of the state representation, model, andvalue function. We begin with the k-step predictron. We update the k-step predictron gktowardsa target outcome g, such as the Monte-Carlo return from the real environment, by minimising amean-squared error loss,Lk=12Ep[gjs]Emgkjs2:@lk@=ggk@gk@: (5)wherelk=12ggk2is the sample loss. We can use the gradient of the sample loss to updateparameters, e.g. by stochastic gradient descent. For stochastic models, two independent samples arerequired for gkand@gk@to get unbiased samples for the gradient of Lk.3Under review as a conference paper at ICLR 2017The-predictron combines together many k-step preturns. To update the joint parameters , we canuniformly average the losses on the individual preturns gk,L0:K=12KKXk=0Ep[gjs]Emgkjs2;@l0:K@=1KKXk=0ggk@gk@: (6)Alternative, we could weight each loss by the usage wkof the corresponding preturn, such that thegradient isPKk=0wkggk@gk@.The-predictron uses an accumulator with additional parameters that determine the relativeweighting of the k-step preturns. These weights are also updated so as to minimise a mean-squarederror lossL,L=12Ep[gjs]Emgjs2;@l@=gg@g@: (7)In summary, the joint parameters of the state representation f, the modelm, and the value functionvare updated to make each of the k-step preturns gkmore similar to the target g, and the parametersof the-accumulator are updated to make the aggregate -preturn gmore similar to the target g.4.1 C ONSISTENCY (SEMI-SUPERVISED ) LEARNING WITH THE -PREDICTRONIdeally, the predictron (f;m;v )learns preturns that are all equal in expectation to the true valuefunction of the environment, Emgkjs=Ep[gtjs] =vp(s), in which case the preturns mustbe equal in expectation, Emg0js=Emg1js=:::=EmgKjs. In addition, each k-steppreturn must then be equal in expectation to the -preturn, Emgkjs=Emgjs, for anyparameters. All these consistency relations between preturns give rise to additional constraints uponthe predictron. Specifically, we may adjust the parameters of the predictron to lead to consistentpreturns, even in the absence of labelled targets.Concretely, we can adjust each preturn gktowards the-preturn g; in other words, we can updateeach individual value estimate towards the best aggregated estimate by minimizingL=12KXk=0EmgjsEmgkjs2;@l@=KXk=0ggk@gk@:(8)Heregis considered fixed; the parameters are only updated to make gkmore similar to g, notvice versa. This consistency update does not require any labels gor samples from the environment.As a result, it can be applied to (potentially hypothetical) states that have no associated ‘real’ (e.g.Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Notethe similarity with the semi-supervised setting, where we may have unlabelled inputs.5 E XPERIMENTSWe conducted experiments on two domains. The first domain consists of randomly generated 2020mazes in which each location either is empty or contains a wall. Two locations in a maze are consid-ered connected if they are both empty and we can reach one from the other by moving horizontallyor vertically through adjacent empty cells. 
The goal is to predict, for each of the locations on thediagonal from top-left to bottom-right of the maze, whether the bottom-right corner is connected tothat location, given the entire maze as an input image. Some of these predictions will be straightfor-ward, for instance for locations on the diagonal that contain a wall themselves and for locations closeto the bottom right. Many other predictive questions seem to require a simple algorithm, such assome form of a flood fill or search; our hypothesis is that an internal model can learn to emulate suchalgorithms, where naive approximation may struggle. A few example mazes are shown in Figure 2.Our second domain is a simulation of the game of pool, using four balls and four pockets. The simu-lator is implemented in the physics engine Mujoco (Todorov et al., 2012). We generate sequences ofRGB frames starting from a random arrangement of balls on the table. The goal is to simultaneouslylearn to predict future events for each of the four balls, given 5 RGB frames as input. These eventsinclude: collision with any other ball, collision with any boundary of the table, entering a quadrant(4, for each quadrant), being located in a quadrant ( 4, for each quadrant), and entering a pocket4Under review as a conference paper at ICLR 2017Figure 2: Left: Two sample mazes from the random-maze domain. Light blue cells are empty,darker blue cells contain a wall. One maze is connected from top-left to bottom-right (indicated inblack), the other is not. Right: An example trajectory in the pool domain (before downsampling).It was selected by maximising the prediction of pocketing balls, using the predictron.usage weightingrr, vweight sharingskipconnections(r, v, r)-predictronFeedforward netRecurrent netResNetRecurrent ResNetRecurrent net0 1M 2M 3M 4M 5M0.00010.0010.01RMSE on random mazes(log scale)Usage weighted0 1M 2M 3M 4M 5MUniformly weightedrecurrent netλ-predictron(r,γ)-predictron(r,γ,λ)-predictron0 500K 1MUpdates0.20.30.4RMSE on pool0 500K 1MUpdatesFigure 3: Exploring predictron variants. Aggregated prediction errors over all predictions (20for mazes, 280 for pool) for the eight predictron variants corresponding to the cube on the left (asdescribed in the main text), for both random mazes (top) and pool (bottom). Each line is the medianof RMSE over five seeds; shaded regions encompass all seeds. The full (r;; )-prediction ( red)consistently performed best.(4, for each pocket). Each of these 144events provides a binary pseudo-reward that we combinewith 5 different discount factors f0;0:5;0:9;0:98;1gand predict their cumulative discounted sumover various time spans. This yields a total of 280 general value functions. An example trajectory isshown in Figure 2. In both domains, inputs are presented as minibatches of i.i.d. samples with theirregression targets. Additional domain details are provided in Appendix E.5.1 E XPLORING THE PREDICTRON ARCHITECTUREOur first set of experiments examines three binary dimensions that differentiate the predictron fromstandard deep networks. We compare eight predictron variants corresponding to the corners of thecube on the left in Figure 3.The first dimension corresponds to whether or not the predictron architecture utilises the structure ofan MRP model. In the MRP case, labelled r;, internal rewards and discounts are both learned. 
Inthe non-r;case, which corresponds to a vanilla hidden-to-hidden neural network module, internalrewards and discounts are ignored by fixing their values to rk=0andk=1.The second dimension is whether a K-step accumulator or -accumulator is used to aggregate overpreturns. When a -accumulator is used, a -preturn is computed as described in Section 3. Other-wise, intermediate preturns are ignored by fixing their values to k= 1fork<K . In this case, theoverall output of the predictron is simply the maximum-depth preturn gK.The third dimension, labelled usage weighting, defines the loss that is used to update the parameters. On this dimension, we consider two options: the preturn losses can either be weighted uniformly(see Equation 6), or the update for each preturn gkcan be weighted according to the weight wkthatdetermines how much it is used in the -predictron’s overall output. We call the latter loss ‘usageweighted‘. Note that for architectures without a -accumulator, wk= 0fork <K , andwK= 1,thus usage weighting then implies backpropagating only the loss on the final preturn gK.All variants utilise a convolutional core with 2 intermediate hidden layers (see Appendix A); param-eters were updated by supervised learning (see Appendix B for more details). Root mean squaredprediction errors for each architecture, aggregated over all predictions, are shown in Figure 3. The5Under review as a conference paper at ICLR 2017rr, vweight sharingskipconnections(r, v, r)-predictronConvNetrecurrent ConvNetResNetrecurrent ResNetusage weighting0 1M 2M 3M 4M 5M0.00010.0010.01RMSE on random mazes(log scale)Shared coredeep netdeep net with skips(r,γ,λ)-predictron(r,γ,λ)-predictron with skips0 1M 2M 3M 4M 5MUnshared cores0 500K 1MUpdates0.20.30.4RMSE on pool0 500K 1MUpdatesFigure 4: Comparing predictron to baselines. Aggregated prediction errors on random mazes(top) and pool (bottom) over all predictions for the eight architectures corresponding to the cube onthe left. Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. Thefull(r;; )-predictron ( red), consistently outperformed conventional deep network architectures(black ), with and without skips and with and without weight sharing.top row corresponds to the random mazes and the bottom row to the pool domain. The main con-clusion is that learning an MRP model improved performance greatly. The inclusion of weightshelped as well, especially on pool. Usage weighting further improved performance.5.2 C OMPARING THE PREDICTRON TO OTHER DEEPNETWORKSOur second set of experiments compares the predictron to feedforward and recurrent deep learningarchitectures, with and without skip connections. We compare the corners of a new cube, as depictedon the left in Figure 4, based on three different binary dimensions.The first dimension of this second cube is whether we use a predictron, or a (non- , non-r;) deepnetwork that does not have an internal model and does not output or learn from intermediate predic-tions. We use the most effective predictron from the previous section, i.e., the (r;; )-predictronwith usage weighting.The second dimension is whether weights are shared between all cores (as in a recurrent network),or whether each core uses separate weights (as in a feedforward network). We note that the non-, non-r;variants of the predictron then correspond to standard (convolutional) feedforward and(unrolled) recurrent neural networks respectively.The third dimension is whether we include skip connections. 
This is equivalent to defining the model step to output a change to the current state, Δs, and then defining s^{k+1} = h(s^k + Δs^k), where h is the non-linear function (in our case a ReLU, h(x) = max(0, x)). The deep network with skip connections is a variant of ResNet (He et al., 2015).

Root mean squared prediction errors for each architecture are shown in Figure 4. All (r,γ,λ)-predictrons (red lines) outperformed the corresponding feedforward or recurrent neural network baselines (black lines) both in the random mazes and in pool. We also investigated the effect of changing the depth of the networks (see Appendix C). The predictron outperformed the corresponding feedforward or recurrent baselines for all depths, with and without skip connections.

5.3 SEMI-SUPERVISED LEARNING BY CONSISTENCY

We now consider how to use the predictron for semi-supervised learning, training the model on a combination of labelled and unlabelled random mazes. Semi-supervised learning is important because a common bottleneck in applying machine learning in the real world is the difficulty of collecting labelled data, whereas often large quantities of unlabelled data exist.

We trained a full (r,γ,λ)-predictron by alternating standard supervised updates with consistency updates, obtained by stochastically minimizing the consistency loss (8), on the unlabelled samples. For each supervised update we apply either 0, 1, or 9 consistency updates. Figure 5 shows that the performance improved monotonically with the number of consistency updates, measured as a function of the number of labelled samples consumed.

Figure 5: Semi-supervised learning. Prediction errors of the (r,γ,λ)-predictrons (shared core, no skips) using 0, 1, or 9 consistency updates for every update with labelled data, plotted as a function of the number of labels consumed. Learning performance improves with more consistency updates.

5.4 ANALYSIS OF ADAPTIVE DEPTH

In principle, the predictron can adapt its depth to 'think more' about some predictions than others, perhaps depending on the complexity of the underlying target. We investigate this by looking at qualitatively different prediction types in pool: ball collisions, rail collisions, pocketing balls, and entering or staying in quadrants. For each prediction type we consider several different time-spans (determined by the real-world discount factors associated with each pseudo-reward). Figure 6 shows distributions of depth for each type of prediction. The 'depth' of a predictron is here defined as the effective number of model steps. If the predictron relies fully on the very first value (i.e., λ^0 = 0), this counts as 0 steps. If, instead, it learns to place equal weight on all rewards and on the final value, this counts as 16 steps. Concretely, the depth d can be defined recursively as d = d^0 where d^k = λ^k (1 + γ^k d^{k+1}) and d^K = 0. Note that even for the same input state, each prediction has a separate depth.

The depth distributions exhibit three properties. First, different types of predictions used different depths. Second, depth was correlated with the real-world discount for the first four prediction types. Third, the distributions are not strongly peaked, which implies that the depth can differ per input even for a single real-world discount and prediction type. In a control experiment (not shown) we used a scalar λ shared among all predictions, which reduced performance in all scenarios, indicating that the heterogeneous depth is a valuable form of flexibility.

Figure 6: Thinking depth. Distributions of thinking depth on pool for different types of predictions and for different real-world discounts.
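The effective depth can be evaluated directly from the learned gates by running the recursion backwards from d^K = 0. This is our own minimal sketch, with made-up λ and γ values rather than network outputs:

import numpy as np

def thinking_depth(lam, gamma):
    # d = d^0 with d^k = lam^k * (1 + gamma^k * d^{k+1}) and d^K = 0;
    # lam and gamma hold the K per-step gates and internal discounts.
    d = 0.0
    for k in reversed(range(len(lam))):
        d = lam[k] * (1.0 + gamma[k] * d)
    return d

print(thinking_depth(np.ones(16), np.ones(16)))    # full weight on all 16 steps: 16.0
print(thinking_depth(np.zeros(16), np.ones(16)))   # relying on v^0 alone: 0.0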
5.5 VISUALIZING THE PREDICTIONS IN THE POOL DOMAIN

We test the quality of the predictions in the pool domain to evaluate whether they are well-suited to making decisions. For each sampled pool position, we consider a set I of different initial conditions (different angles and velocity of the white ball), and ask which is more likely to lead to pocketing coloured balls. For each initial condition s ∈ I, we apply the (r,γ,λ)-predictron (shared cores, 16 model steps, no skip connections) to obtain predictions g^λ. We sum the predictions that correspond to pocketing any ball except the white ball, and to real-world discounts γ = 0.98 and γ = 1. We select the condition s that maximises this sum.

We then roll forward the pool simulator from s and log the number of pocketing events. Figure 2 shows a sampled rollout, using the predictron to pick s. When providing the choice of 128 angles and two velocities for initial conditions (|I| = 256), this procedure resulted in pocketing 27 coloured balls in 50 episodes. Using the same procedure with an equally deep convolutional network only resulted in 10 pocketing events. These results suggest that the lower loss of the learned (r,γ,λ)-predictron translated into meaningful improvements when informing decisions. A video of the rollouts selected by the predictron is available here: https://youtu.be/BeaLdaN2C3Q
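This selection procedure amounts to scoring every candidate shot by its summed pocketing predictions and taking the argmax. A hedged sketch follows; the dummy network, the 280-dimensional output layout, and the indices of the relevant pocketing predictions are illustrative assumptions, not the paper's actual interface.

import numpy as np

def pick_initial_condition(predict, conditions, pocket_idx):
    # Score each candidate (angle, speed) pair by the sum of its predicted
    # pocketing values and return the highest-scoring condition.
    scores = [predict(s)[pocket_idx].sum() for s in conditions]
    return conditions[int(np.argmax(scores))]

rng = np.random.default_rng(0)
predict = lambda s: rng.random(280)        # placeholder for a trained predictron
conditions = [(a, v) for a in range(128) for v in (0.5, 1.0)]   # |I| = 256
pocket_idx = np.arange(24)                 # assumed slots: 3 coloured balls x
                                           # 4 pockets x 2 long-horizon discounts
print(pick_initial_condition(predict, conditions, pocket_idx))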
6 RELATED WORK

Lee et al. (2015) introduced a neural network architecture where classifications branch off intermediate hidden layers. An important difference with respect to the λ-predictron is that the weights are hand-tuned as hyper-parameters, whereas in the predictron the λ weights are learnt and, more importantly, conditional on the input. Another difference is that the loss on the auxiliary classifications is used to speed up learning, but the classifications themselves are not combined into an aggregate prediction; the output of the model itself is the deepest prediction.

Graves (2016) introduced an architecture with adaptive computation time (ACT), with a discrete (but differentiable) decision on when to halt, and aggregating over the outputs at each pondering step. This is related to our λ weights, but obtains depth in a different way; one notable difference is that the λ-predictron can choose different pondering depths for each of its predictions.

Value iteration networks (VINs) (Tamar et al., 2016) also learn value functions end-to-end using an internal model, similar to the (non-λ) predictron. However, VINs plan via convolutional operations over the full input state space, whereas the predictron plans via imagined trajectories through an abstract state space. This may allow the predictron architecture to scale much more effectively in domains that do not have a natural two-dimensional encoding of the state space.

The notion of learning about many predictions of the future relates to work on predictive state representations (PSRs; Littman et al., 2001), general value functions (GVFs; Sutton et al., 2011), and nexting (Modayil et al., 2012). Such predictions have been shown to be useful as representations (Schaul and Ring, 2013) and for transfer (Schaul et al., 2015). So far, however, none of these have been considered for learning abstract models.

Schmidhuber (2015) discusses learning abstract models, but maintains separate losses for the model and a controller, and suggests training the model unsupervised to compactly encode the entire history of observations, through predictive coding. The predictron's abstract model is instead trained end-to-end to obtain accurate values.

7 CONCLUSION

The predictron is a single differentiable architecture that rolls forward an internal model to estimate external values. This internal model may be given both the structure and the semantics of traditional reinforcement learning models. But unlike most approaches to model-based reinforcement learning, the model is fully abstract: it need not correspond to the real environment in any human understandable fashion, so long as its rolled-forward "plans" accurately predict outcomes in the true environment.

The predictron may be viewed as a novel network architecture that incorporates several separable ideas. First, the predictron outputs a value by accumulating rewards over a series of internal planning steps. Second, each forward pass of the predictron outputs values at multiple planning depths. Third, these values may be combined together, also within a single forward pass, to output an overall ensemble value. Finally, the different values output by the predictron may be encouraged to be self-consistent with each other, to provide an additional signal during learning. Our experiments demonstrate that these differences result in more accurate predictions of value, in reinforcement learning environments, than more conventional network architectures.

We have focused on value prediction tasks in uncontrolled environments. However, these ideas may transfer to the control setting, for example by using the predictron as a Q-network (Mnih et al., 2015). Even more intriguing is the possibility of learning an internal MDP with abstract internal actions, rather than the MRP considered in this paper. We aim to explore these ideas in future work. | HkYJOkmBx | A very worthwhile idea, but the empirical results could have been better aligned with the main message | 6: Marginally above acceptance threshold | The paper proposes an approach to learning models that are good for planning problems, using deep network architectures. The key idea is to ensure that models are self-consistent and accurately predict the future. The problem of learning good planning models (as opposed to simply good predictive models) is really crucial and attempts so far have failed. This paper is conceptually interesting and provides a valuable perspective on how to achieve this goal. Its incorporation of key RL concepts (like discounting and eligibility traces) and the flexibility to learn these is very appealing. Hence, I think it should be accepted. This being said, I think the paper does not quite live up to its claims.
Here are some aspects that need to be addressed (in order of importance):
1. Relationship to past work: the proposed representation seems essentially a non-linear implementation of the Horde architecture. It is also very similar in spirit to predictive state representations. Yet these connections are barely discussed at all. The related work paragraph is very brief and needs expansion to situate the work in the context of other predictive modelling attempts that both were designed to be used for planning and (in the case of PSRs) were in fact successfully used in planning tasks. Some newer work on learning action-conditional models in Atari games is also not discussed. Situating the paper better in the context of existing model learning would also make it easier to understand both the motivations and the novel contributions of the work (otherwise, the reader is left to try and elucidate this for themselves, and may come to the wrong conclusion).
2. The paper needs to provide some insight about the necessity of the recurrent core of the architecture. The ideas are presented nicely in a general fashion, yet the proposed implementation is quite specific and "bulky" (very high number of parameters). Is this really necessary in all tasks? Can one implement the basic ideas outside of the particular architecture proposed? Can we use feedforward approximations, or is the recurrent part somehow necessary? At the very least the paper should expand the discussion on this topic, if not provide some empirical evidence.
3. The experiments are very restricted in their setup: iid data drawn from fixed distributions, correct targets. So, the proposed approach seems like an overkill for these particular tasks. There is an indirect attempt to provide evidence that the learned models would be useful for planning, but no direct measurement to support this claim (no use of the models in planning). Compared to the original Horde paper, fewer predictions are learned, and these are more similar to each other. While I sympathize with the desire to go in steps, I think the paper stops short of where it should. At the very least, doing prediction in the context of an actual RL prediction task, with non-iid inputs, should be included in the paper. This should only require minor modifications to the experiments (same task, just different data). Ideally, in the case of the mazes, the learned models should be used in some form of simplified planning to learn paths. This would align the experiments much better with the claims in the presentation of the architecture. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
BkJsCIcgl | ICLR.cc/2017/conference | 2017 | The Predictron: End-To-End Learning and Planning | ["David Silver", "Hado van Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel Dulac-Arnold", "David Reichert", "Neil Rabinowitz", "Andre Barreto", "Thomas Degris"] | One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths.
The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function, thereby focusing the model upon the aspects of the environment most relevant to planning. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures. | ["Deep learning", "Reinforcement Learning", "Supervised Learning", "Semi-Supervised Learning"] | ABSTRACT

One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.

1 INTRODUCTION

The central idea of model-based reinforcement learning is to decompose the RL problem into two subproblems: learning a model of the environment, and then planning with this model. The model is typically represented by a Markov reward process (MRP) or decision process (MDP). The planning component uses this model to evaluate and select among possible strategies. This is typically achieved by rolling forward the model to construct a value function that estimates cumulative reward. In prior work, the model is trained essentially independently of its use within the planner. As a result, the model is not well-matched with the overall objective of the agent. Prior deep reinforcement learning methods have successfully constructed models that can unroll near pixel-perfect reconstructions (Oh et al., 2015; Chiappa et al., 2016); but are yet to surpass state-of-the-art model-free methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrap et al., 2016).

In this paper we introduce a new architecture, which we call the predictron, that integrates learning and planning into one end-to-end training procedure. At every step, a model is applied to an internal state, to produce a next state, reward, discount, and value estimate. This model is completely abstract and its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game, an agent must be able to predict the score. If our model makes accurate predictions, then an optimal plan with respect to our model will also be an optimal plan for the underlying game, even if that model uses a different state space (e.g., an abstract representation of enemy positions, ignoring their shapes and colours), action space (e.g., a high-level action to move away from an enemy), rewards (e.g., a single abstract step could have a higher value than any real reward), or even time-step (e.g., a single abstract step could "jump" the agent to the end of a corridor). All we require is that trajectories through the abstract model produce scores that are consistent with trajectories through the real environment.
This is achieved by training the predictron end-to-end, so as to make its value estimates as accurate as possible.

An ideal model could generalise to many different prediction tasks, rather than overfitting to a single task; and could learn from a rich variety of feedback signals, not just a single extrinsic reward. We therefore train the predictron to predict a host of different value functions for a variety of pseudo-reward functions and discount factors. These pseudo-rewards can encode any event or aspect of the environment that the agent may care about, e.g., staying alive or reaching the next room.

We focus upon the prediction task: estimating value functions in MRP environments with uncontrolled dynamics. In this case, the predictron can be implemented as a deep neural network with an MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewards into an overall estimate of value.

We applied the predictron to procedurally generated random mazes, and a simulated pool domain, directly from pixel inputs. In both cases, the predictron significantly outperformed model-free algorithms with conventional deep network architectures; and was much more robust to architectural choices such as depth.

*Primary contributors

2 BACKGROUND

We consider environments defined by an MRP with states s ∈ S. The MRP is defined by a function, s', r, γ = p(s, ξ), where s' is the next state, r is the reward, and γ is the discount factor, which can for instance represent the non-termination probability for this transition. The process may be stochastic, given IID noise ξ.

The return of an MRP is the cumulative discounted reward over a single trajectory, g_t = r_{t+1} + γ_{t+1} r_{t+2} + γ_{t+1} γ_{t+2} r_{t+3} + ..., where γ_t can vary per time-step. We consider a generalisation of the MRP setting that includes vector-valued rewards r, diagonal-matrix discounts γ, and vector-valued returns g; definitions are otherwise identical to the above. We use this bold font notation to closely match the more familiar scalar MRP case; the majority of the paper can be comfortably understood by reading all rewards as scalars, and all discount factors as scalar and constant, i.e., γ_t = γ.

The value function of an MRP p is the expected return from state s, v_p(s) = E_p[g_t | s_t = s]. In the vector case, these are known as general value functions (Sutton et al., 2011). We will say that a (general) value function v(·) is consistent with environment p if and only if v = v_p, which satisfies the following Bellman equation (Bellman, 1957),

v_p(s) = E_p[r + γ v_p(s') | s]. (1)

In model-based reinforcement learning (Sutton and Barto, 1998), an approximation m ≈ p to the environment is learned. In the uncontrolled setting this model is normally an MRP s', r, γ = m(s, ξ) that maps from state s to subsequent state s' and additionally outputs rewards r and discounts γ; the model may be stochastic given an IID source of noise ξ. A (general) value function v_m(·) is consistent with model m (or valid, (Sutton, 1995)), if and only if it satisfies a Bellman equation v_m(s) = E_m[r + γ v_m(s') | s] with respect to model m. Conventionally, model-based RL methods focus on finding a value function v that is consistent with a separately learned model m.
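As a small grounding example (our own illustration, not code from the paper), the return with transition-dependent discounts can be accumulated backwards via g_t = r_{t+1} + γ_{t+1} g_{t+1}; with NumPy arrays the same function covers the vector-valued case elementwise.

import numpy as np

def mc_return(rewards, discounts):
    # Backward accumulation of g_t = r_{t+1} + gamma_{t+1} * g_{t+1};
    # rewards[i] and discounts[i] come from the same transition.
    g = 0.0
    for r, gamma in zip(reversed(rewards), reversed(discounts)):
        g = r + gamma * g
    return g

print(mc_return([1.0, 0.0, 2.0], [0.9, 0.9, 0.9]))   # scalar case, gamma_t = 0.9
rs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]    # two pseudo-rewards
gs = [np.array([0.5, 1.0]), np.array([0.5, 1.0])]    # per-dimension discounts
print(mc_return(rs, gs))                             # vector-valued return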
3 PREDICTRON ARCHITECTURE

The predictron is composed of four main components. First, a state representation s = f(s) that encodes raw input s (this could be a history of observations, in the partially observed setting, for example when f is a recurrent network) into an internal (abstract, hidden) state s. Second, a model s', r, γ = m(s, ξ) that maps from internal state s to subsequent internal state s', internal rewards r, and internal discounts γ. Third, a value function v that outputs internal values v = v(s) representing the future, internal return from internal state s onwards. The predictron is applied by unrolling its model m multiple "planning" steps to produce internal rewards, discounts and values. We use superscripts k to indicate internal steps of the model (which have no necessary connection to time steps t of the environment). Finally, these internal rewards, discounts and values are combined together by an accumulator into an overall estimate of value g. The whole predictron, from input state s to output g, may be viewed as a value function approximator for external targets (i.e. the returns in the real environment). We consider both k-step and λ-weighted accumulators.

The k-step predictron rolls its internal model forward k steps. Specifically, the k-step predictron return g^k (henceforth abbreviated as preturn) is the internal return obtained by accumulating k model steps, plus a final value v^k from the k-th step,

g^k = r^1 + γ^1 (r^2 + γ^2 (... (r^{k-1} + γ^{k-1} (r^k + γ^k v^k)) ...)). (2)

The 0-step preturn is simply the first value, g^0 = v^0. The 1-step preturn is g^1 = r^1 + γ^1 v^1, and so on (see Fig. 1a).

Figure 1: a) The k-step predictron architecture. The first three columns illustrate 0, 1 and 2-step pathways through the predictron. The 0-step preturn reduces to standard model-free value function approximation; other preturns "imagine" additional steps with an internal model. Each pathway outputs a k-step preturn g^k that accumulates discounted rewards along with a final value estimate. In practice all k-step preturns are computed in a single forward pass. b) The λ-predictron architecture. The λ-parameters gate between the different preturns. The output is a λ-preturn g^λ that is a mixture over the k-step preturns. For example, if λ^0 = 1, λ^1 = 1, λ^2 = 0 then we recover the 2-step preturn, g^λ = g^2. Discount factors γ^k and λ-parameters λ^k are dependent on state s^k; this dependence is not shown in the figure.

The λ-predictron combines together many k-step preturns. Specifically, it computes a diagonal weight matrix λ^k from each internal state s^k. The accumulator uses weights λ^0, ..., λ^K to aggregate over k-step preturns g^0, ..., g^K and output a combined value that we call the λ-preturn g^λ,

g^λ = Σ_{k=0}^{K} w^k g^k, where w^k = (1 - λ^k) Π_{j=0}^{k-1} λ^j if k < K, and w^K = Π_{j=0}^{K-1} λ^j otherwise, (3)

where 1 is the identity matrix. This λ-preturn is analogous to the λ-return in the forward-view TD(λ) algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backward accumulation through intermediate steps g^{k,λ},

g^{k,λ} = (1 - λ^k) v^k + λ^k (r^{k+1} + γ^{k+1} g^{k+1,λ}), (4)

where g^{K,λ} = v^K, and then using g^λ = g^{0,λ}. Computation in the λ-predictron operates in a sweep, iterating first through the model from k = 0 ... K and then back through the accumulator from k = K ... 0 in a single "forward" pass of the network (see Figure 1b). Each λ^k weight acts as a gate on the computation of the λ-preturn: a value of λ^k = 0 will truncate the λ-preturn at layer k, while a value of λ^k = 1 will utilise deeper layers based on additional steps of the model m; the final weight is always λ^K = 0. The individual λ^k weights may depend on the corresponding abstract state s^k and can differ per prediction. This enables the predictron to compute to an adaptive depth (Graves, 2016) depending on the internal state and learning dynamics of the network.
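For scalar predictions, Equations 2-4 reduce to a few lines. This is our own NumPy sketch with random placeholder values; the actual predictron produces vector rewards, diagonal-matrix discounts and λ-weights from its internal states.

import numpy as np

def preturns(v, r, gamma):
    # k-step preturns of Eq. 2 for k = 0..K; v[0..K] are internal values,
    # r[1..K] and gamma[1..K] internal rewards and discounts (index 0 unused).
    K = len(v) - 1
    out = np.zeros(K + 1)
    for k in range(K + 1):
        acc = v[k]
        for j in range(k, 0, -1):   # fold rewards back toward the first step
            acc = r[j] + gamma[j] * acc
        out[k] = acc
    return out

def lambda_preturn(v, r, gamma, lam):
    # Backward accumulation of Eq. 4, starting from g^{K,lambda} = v[K].
    K = len(v) - 1
    g = v[K]
    for k in range(K - 1, -1, -1):
        g = (1 - lam[k]) * v[k] + lam[k] * (r[k + 1] + gamma[k + 1] * g)
    return g

rng = np.random.default_rng(0)
K = 4
v, r = rng.normal(size=K + 1), rng.normal(size=K + 1)
gamma, lam = np.full(K + 1, 0.9), np.full(K, 0.5)   # lambda^K = 0 is implicit
print(preturns(v, r, gamma), lambda_preturn(v, r, gamma, lam))

Setting all entries of lam to 1 recovers the maximum-depth preturn g^K, and setting them to 0 recovers the model-free estimate g^0, matching the gating interpretation above.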
4 PREDICTRON LEARNING UPDATES

We first consider updates that optimise the joint parameters θ of the state representation, model, and value function. We begin with the k-step predictron. We update the k-step predictron g^k towards a target outcome g, such as the Monte-Carlo return from the real environment, by minimising a mean-squared error loss,

L^k = 1/2 || E_p[g | s] - E_m[g^k | s] ||², with sample gradient ∂l^k/∂θ = -(g - g^k) ∂g^k/∂θ, (5)

where l^k = 1/2 || g - g^k ||² is the sample loss. We can use the gradient of the sample loss to update parameters, e.g. by stochastic gradient descent. For stochastic models, two independent samples are required for g^k and ∂g^k/∂θ to get unbiased samples for the gradient of L^k.

The λ-predictron combines together many k-step preturns. To update the joint parameters θ, we can uniformly average the losses on the individual preturns g^k,

L^{0:K} = 1/(2K) Σ_{k=0}^{K} || E_p[g | s] - E_m[g^k | s] ||², ∂l^{0:K}/∂θ = -(1/K) Σ_{k=0}^{K} (g - g^k) ∂g^k/∂θ. (6)

Alternatively, we could weight each loss by the usage w^k of the corresponding preturn, such that the gradient is -Σ_{k=0}^{K} w^k (g - g^k) ∂g^k/∂θ.

The λ-predictron uses an accumulator with additional parameters η that determine the relative weighting of the k-step preturns. These weights are also updated so as to minimise a mean-squared error loss L^λ,

L^λ = 1/2 || E_p[g | s] - E_m[g^λ | s] ||², ∂l^λ/∂η = -(g - g^λ) ∂g^λ/∂η. (7)

In summary, the joint parameters θ of the state representation f, the model m, and the value function v are updated to make each of the k-step preturns g^k more similar to the target g, and the parameters η of the λ-accumulator are updated to make the aggregate λ-preturn g^λ more similar to the target g.

4.1 CONSISTENCY (SEMI-SUPERVISED) LEARNING WITH THE λ-PREDICTRON

Ideally, the predictron (f, m, v) learns preturns that are all equal in expectation to the true value function of the environment, E_m[g^k | s] = E_p[g_t | s] = v_p(s), in which case the preturns must be equal in expectation, E_m[g^0 | s] = E_m[g^1 | s] = ... = E_m[g^K | s]. In addition, each k-step preturn must then be equal in expectation to the λ-preturn, E_m[g^k | s] = E_m[g^λ | s], for any λ parameters. All these consistency relations between preturns give rise to additional constraints upon the predictron. Specifically, we may adjust the parameters of the predictron to lead to consistent preturns, even in the absence of labelled targets.

Concretely, we can adjust each preturn g^k towards the λ-preturn g^λ; in other words, we can update each individual value estimate towards the best aggregated estimate by minimizing

L = 1/2 Σ_{k=0}^{K} || E_m[g^λ | s] - E_m[g^k | s] ||², ∂l/∂θ = -Σ_{k=0}^{K} (g^λ - g^k) ∂g^k/∂θ. (8)

Here g^λ is considered fixed; the parameters θ are only updated to make g^k more similar to g^λ, not vice versa. This consistency update does not require any labels g or samples from the environment. As a result, it can be applied to (potentially hypothetical) states that have no associated 'real' (e.g. Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Note the similarity with the semi-supervised setting, where we may have unlabelled inputs.
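To make the updates concrete, here is a small scalar NumPy sketch of the three losses (Eq. 6, its usage-weighted variant, and Eq. 8); it is our own illustration with placeholder numbers, and in a real implementation g^λ would be a fixed (stop-gradient) target in the consistency term.

import numpy as np

def predictron_losses(g_target, g_ks, g_lam, w):
    # Scalar sample losses from Section 4: uniformly averaged over preturns
    # (Eq. 6), usage weighted, and the label-free consistency loss (Eq. 8).
    K = len(g_ks) - 1
    uniform = 0.5 / K * np.sum((g_target - g_ks) ** 2)
    usage = 0.5 * np.sum(w * (g_target - g_ks) ** 2)
    consistency = 0.5 * np.sum((g_lam - g_ks) ** 2)   # g_lam held fixed
    return uniform, usage, consistency

g_ks = np.array([0.2, 0.5, 0.7, 0.8])   # placeholder preturns, K = 3
w = np.array([0.4, 0.3, 0.2, 0.1])      # placeholder usage weights
print(predictron_losses(g_target=1.0, g_ks=g_ks, g_lam=0.6, w=w))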
5 EXPERIMENTS

We conducted experiments on two domains. The first domain consists of randomly generated 20 × 20 mazes in which each location either is empty or contains a wall. Two locations in a maze are considered connected if they are both empty and we can reach one from the other by moving horizontally or vertically through adjacent empty cells. | SJpfdqG4g |  | 4: Ok but not good enough - rejection | I think there may be a nice paper to be made from this, but as it is, it should not be accepted. The authors describe a new architecture for regression, inspired by techniques for estimating the value function of a Markov reward process. The connection is interesting, and there is certainly merit in the idea. However, the writing is confusing, and as far as I can tell, the experiments and discussion are inadequate. It is quite possible that I am misunderstanding some things, so I am not putting high confidence.
Because of all the discussion of MRPs and the background that inspired the model, it is difficult to see that the authors are in a pure, i.i.d. regression setting, where they sample inputs i.i.d. (with deterministic outputs given the input) from a distribution, and try to match a parameterized function to the input-output pairs.
Because they are in this setting, there is a lot lacking from the experiments. For example, they report l2 loss on the maze problem but not "percent correct"; indeed, it looks like the deep net with skips goes to about .001 average l2 loss on the 0-1 output maze problem. This is an issue because it suggests that by simply thresholding the outputs, you could get nearly perfect results, which would point to a model specification error of the baseline. Are there sigmoids at the end of the baseline plain deep network? Note that the proposed models do have sigmoids in the outputs in the multiplicative weightings.
How does the number of parameters of the proposed network compare to the baselines? Is the better performance (and again, better is really marginal if I am understanding the way loss is measured) simply an issue of modeling power (perhaps because of the multiplicative connections of the proposed model vs. the baseline)? Because the input is taken i.i.d. and the test distribution exactly matches the train, this is an important part of the discussion. Moreover, there do not seem to be experiments where the size of the training set is fixed: the axis in the graphs is the number of samples seen, which is tied to the number of optimization steps. Thus there is no testing of over-fitting.
Why not try the model on more standard regression problems (as at heart, the paper seems to be about a new convnet architecture for regression)? Show ImageNet or CIFAR accuracies, for example. If the proposed model does worse there, try to explain/understand what it is about the reported tasks that favors the proposed model.
**********************************************************************************
edited with increased confidence in post-review discussions
********************************************************************************** | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
rJo9n9Feg | ICLR.cc/2017/conference | 2017 | Chess Game Concepts Emerge under Weak Supervision: A Case Study of Tic-tac-toe | ["Hao Zhao", "Ming Lu", "Anbang Yao", "Yurong Chen", "Li Zhang"] | This paper explores the possibility of learning chess game concepts under weak supervision with convolutional neural networks, which, to the best of our knowledge, is a topic that has not been visited. We put this task in three different backgrounds: (1) deep reinforcement learning has shown an amazing capability to learn a mapping from visual inputs to the most rewarding actions, without knowing the concepts of a video game. But how could we confirm that the network understands these concepts or it just does not? (2) cross-modal supervision for visual representation learning has drawn much attention recently. Is this methodology still applicable when it comes to the domain of game concepts and actions? (3) class activation mapping is widely recognized as a visualization technique to help us understand what a network has learnt. Is it possible for it to activate at non-salient regions? With the simplest chess game, tic-tac-toe, we report interesting results as answers to those three questions mentioned above. All codes, pre-processed datasets and pre-trained models will be released. | ["Semi-Supervised Learning"] | ABSTRACT

This paper explores the possibility of learning chess game concepts under weak supervision with convolutional neural networks, which, to the best of our knowledge, is a topic that has not been visited. We put this task in three different backgrounds: (1) deep reinforcement learning has shown an amazing capability to learn a mapping from visual inputs to the most rewarding actions, without knowing the concepts of a video game. But how could we confirm that the network understands these concepts or it just does not? (2) cross-modal supervision for visual representation learning has drawn much attention recently. Is this methodology still applicable when it comes to the domain of game concepts and actions? (3) class activation mapping is widely recognized as a visualization technique to help us understand what a network has learnt. Is it possible for it to activate at non-salient regions? With the simplest chess game, tic-tac-toe, we report interesting results as answers to those three questions mentioned above. All codes, pre-processed datasets and pre-trained models will be released.

1 INTRODUCTION

1.1 APPLICATION BACKGROUND

Deep reinforcement learning (DRL) has drawn considerable attention since the publication of the influential work of Mnih et al. (2015). A convolutional neural network (CNN) is used to bridge the gap between video game screen frames and the most rewarding actions. An amazing feature of this kind of system is that it does not need to know the concepts of these games (e.g. DRL learns to play Breakout without knowing there is a paddle or a ball in Fig 1a). However, how could we confirm that this network really understands these concepts, or does it just learn a mapping from patterns in the visual inputs to the best actions? This is the first question we are trying to answer here.

Mnih et al. (2015) provides some unsupervised analysis results for visualization, showing that perceptually dissimilar frames may produce close rewards, yet this does not answer the question. We choose another visualization technique called class activation mapping, as described in Zhou et al. (2016), which can reveal where the CNN's attention is. However, directly applying it in tasks like Breakout still cannot answer the question.
Imagine one modifies the network described in Mnih et al. (2015) into another version as Zhou et al. (2016) does. The CNN's attention may be fixed on the ball, but it is still not enough to support that the network understands the concept of a ball.

*This work was done when Hao Zhao was an intern at Intel Labs China, supervised by Anbang Yao, who is responsible for correspondence.

Figure 1: We raise three questions from application, methodology and technique perspectives respectively and provide our answers with a case study of the simplest chess game, tic-tac-toe.

We propose to use a simple chess game called tic-tac-toe for a case study. In order to answer the question, we propose a protocol as this: to place a piece where the CNN's attention is, and examine whether it is the right move. Of course, the training has to be done under weak supervision, or say, without telling the network what exactly a right move is. We think if this experiment succeeds we can claim that the network figures out the concepts of: (1) a chess board grid; (2) the winning rule; (3) two sides. Detailed analysis about these three concepts is provided later.

1.2 METHODOLOGY BACKGROUND

There have been some works about representation learning with cross-modal supervision recently. Owens et al. (2016) clusters sound statistics into several categories, and uses them as labels to learn visual representation from images corresponding to these sounds. It quantitatively shows that visual representation learnt in this way is capable of handling challenging computer vision tasks, and qualitatively shows that visual and sound representations are consistent (e.g. babies' faces correspond to baby cry sound samples). Castrejón et al. (2016) goes even further by learning representations across five modalities: RGB images, clip art pictures, sketches, texts and spatial texts. Gupta et al. (2016) learns depth image representation with mid-level features extracted from RGB images as supervision, and reports improved RGB-D object detection performance.

What is the common point among these works? They generate weak supervision from one modality and use it to learn representation from another (e.g. to learn what a train looks like from what a train sounds like, or to learn what a chair looks like in depth images from what a chair looks like in RGB images). During the training phase, no concepts about a train or a chair are explicitly modeled. Although there are many other modalities not visited by this methodology, we think the basic ideas behind these works are the same: an abstract concept like a train can be observed in different modalities, and different representations can be connected.

Here comes the question: is this methodology still applicable when it goes beyond the problem of learning representations from different observations of a same concept? Albanie & Vedaldi (2016) is an example, which tries to relate facial expressions with what happened in a TV show (e.g. if a character earns a lot of money, she will be very happy). Although in Albanie & Vedaldi (2016) what happened is explicitly defined, it still can be regarded as weak supervision for what this expression is.

Although with the same methodology, the problem studied in this paper addresses even higher semantics: to learn what to do under the weak supervision of what will happen (Fig 1b).
This is substantially different from the cross-modal supervision works mentioned above because there is no longer a certain abstract concept of an object or attribute observed in different modalities. Instead, figuring out the relationship between what to do and what will happen needs a higher level of intelligence.

1.3 TECHNIQUE BACKGROUND

The core technique used in this paper is class activation mapping (CAM) as described in Zhou et al. (2016). So leaving out all the backgrounds about playing a chess game or cross-modal supervision, what do our experiments say beyond its inventors'? We think we show that CAM can also activate at non-salient regions. CAM helps us to understand which regions contribute the most to a classification result. As Fig 1c shows, the heatmap reveals that the face contributes the most to the result that the network classifies the image as a person.

As has already been shown by Krizhevsky et al. (2012), kernels of lower layers of a CNN capture gradients in an image. Existing CAM experiments tend to activate at salient regions, and this is very reasonable because there are more gradients and therefore more information (e.g. the face in Fig 1c). Here comes the question: could CAM activate at non-salient regions like the empty spaces on a chess board? Our answer is positive, as the results (Fig 1d) show that in order to predict what will happen in the future, the CNN's attention is fixed upon texture-free regions.

Since we render chessboards as visual inputs without adding noise, those empty spaces are completely empty, meaning that: (1) if we take out the activated patch in Fig 1d, all pixels in this patch have exactly the same value; (2) if we evaluate this patch with a quantitative information metric like entropy, there is no information here. Thus the only reason why these regions are activated is that the network collects enough information from these regions' receptive fields. We argue that this experiment (CAM can activate at non-salient regions) testifies (again) to a CNN's ability to hierarchically collect information from visual inputs.

1.4 WHAT THIS PAPER IS ABOUT

After introducing those three backgrounds, we describe our work briefly as: to classify rendered tic-tac-toe chessboards with weak labels and to visualize that the CNN's attention automatically reveals where the next piece should be placed. The learnt representation shows that: (1) the network knows some concepts of the game that it is not told of; (2) this level of supervision for representation learning is possible; (3) the technique of class activation mapping can activate at non-salient regions.

2 RELATED WORKS

2.1 CONCEPT LEARNING

Concept learning has different meanings in different contexts, and how to confirm a concept is learnt remains an open question. In Jia et al. (2013), a concept is learnt if a generative model is learnt from a small number of positive samples. In Lake et al. (2015), a concept is learnt if a model learnt from only one instance can generalize to various tasks. Higgins et al. (2016) claims a concept is learnt when a model can predict unseen objects' sizes and positions. To summarize, they evaluate whether a concept is learnt through a model's generalization ability. In even earlier works like Zhu et al. (2010) and Yang et al. (2010), concept learning means an object/attribute classification task dealing with appearance variations, in which a concept is actually already pre-defined.

Unlike these works, we investigate the concepts of game rules instead of objects/attributes. Unlike Jia et al.
(2013), Lake et al. (2015) and Higgins et al. (2016), we claim a concept is learnt through a novel testing protocol instead of through generalization ability. Why can generalization ability show that a concept is learnt? We think the reason is that a model understands a concept if it can use it in more cases. To this end, we argue that our protocol can also show that a concept is learnt, because the learnt representations in our experiments can be used to decide what to do even though no rule about what needs to be done is provided.

2.2 CROSS-MODAL SUPERVISION

The literature on cross-modal supervision and the differences between this paper and existing ones are already covered in the last section. Here we restate it briefly: Owens et al. (2016), Castrejón et al. (2016) and Gupta et al. (2016) learn representations across modalities because those are actually different observations of a same (object or attribute) concept. Whether this methodology is applicable for higher-level concepts like game rules remains an open question, and we provide positive answers to this question.

Figure 2: 18 different types of chessboard states and corresponding labels.

2.3 CLASS ACTIVATION MAPPING

Before the technique of class activation mapping was introduced by Zhou et al. (2016), pioneering works like Simonyan et al. (2014) and Zhou et al. (2015) had already shown CNN's ability to localize objects with image-level labels. Although with different techniques, the activation visualization results of Simonyan et al. (2014) and Zhou et al. (2015) also focus on salient regions. Unlike these works, we show that class activation mapping can activate at non-salient regions, or, more specifically, at completely texture-free regions. Since the activated patch itself provides no information, all discriminative information comes from its context. This is another strong piece of evidence for CNN's capability to collect information from receptive fields, as a hierarchical visual model.

3 EXPERIMENT I: GAME ENDS IN NEXT MOVE

A tic-tac-toe chessboard is a 3×3 grid, and there are two players (black and white in our case). Due to duality, we generate all training samples assuming the black side takes the first move. The state space of tic-tac-toe is small, consisting of 3^9 = 19683 combinations in total. Among them, many combinations are illegal, such as the one in which all 9 pieces are black. We exhaustively search over the space according to a recursive simulation algorithm, in which: (1) the chessboard state is denoted by an integer smaller than 19683; (2) every state corresponds to a 9-d vector, each element of which can take a value from the set {0: illegal, 1: black win, 2: white win, 4: tie, 5: uncertain}. We call this 9-d vector a state transfer vector, denoting what will happen if the next legal piece placement happens at the corresponding location; (3) the generated transfer vectors can predict the existence of a critical move that will finish the game in advance. We will release this simulation code (a minimal sketch of the idea is given below).

After pruning out illegal states, we collect 4486 possible states in total. Among these samples, we further take out the 1029 states in which a certain side is going to win in the next move. We then transform these chessboard states into visual representations (gray-scale images at resolution (180, 180)). Each of these 1029 samples is assigned a label according to the state transfer vectors. There are in total 18 different labels illustrating 2 (sides) × 9 (locations). As demonstrated by Fig 2, we randomly pick a sample for each label.
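To make the enumeration above concrete, here is a minimal sketch of such a simulation in Python. This is our own illustration, not the paper's released code: the helper names (`winner`, `reachable`, `transfer_vector`) and the exact legality conventions are assumptions, so it need not reproduce the 4486/1029 counts exactly.

```python
from itertools import product

EMPTY, BLACK, WHITE = 0, 1, 2
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return EMPTY

def reachable(board):
    # Black moves first, so black has as many pieces as white or one more;
    # we also require that the game is not already over (our convention).
    nb, nw = board.count(BLACK), board.count(WHITE)
    return nw <= nb <= nw + 1 and winner(board) == EMPTY

def transfer_vector(board):
    # One entry per cell: what happens if the side to move places a piece there.
    side = BLACK if board.count(BLACK) == board.count(WHITE) else WHITE
    vec = []
    for i in range(9):
        if board[i] != EMPTY:
            vec.append(0)                      # 0: illegal placement
            continue
        nxt = list(board)
        nxt[i] = side
        w = winner(nxt)
        if w != EMPTY:
            vec.append(w)                      # 1: black win, 2: white win
        elif EMPTY not in nxt:
            vec.append(4)                      # 4: tie (board is full)
        else:
            vec.append(5)                      # 5: outcome still uncertain
    return vec

states = [b for b in product((EMPTY, BLACK, WHITE), repeat=9) if reachable(b)]
critical = [b for b in states
            if any(v in (BLACK, WHITE) for v in transfer_vector(b))]
# An 18-way label for a critical state could then be
# (winning side - 1) * 9 + winning cell.
```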
As mentioned before, the black side takes the first move; thus, if the numbers of black and white pieces are equal, the next move will be the black side's, and if there is one more black piece, the next move will be the white side's.

Figure 3: Class activation mapping results on our dataset.

Although the concepts of two sides and nine locations are coded into the labels, this kind of supervision is still weak supervision, because what we are showing to the algorithm is just 18 abstract categories, as Fig 2 shows. Could an algorithm figure out what it needs to do by observing these visual inputs? We think even for a human baby it is difficult, because no concepts like this is a game or you need to find out how to win are provided. In the setting of deep reinforcement learning there is at least an objective of getting a higher score to pursue.

As mentioned before, the method we exploit is to train a classification network on this rendered dataset (Fig 2) and analyze the learnt representations with the technique of class activation mapping. As Zhou et al. (2016) suggests, we add one global average pooling layer after the last convolutional layer of a pre-trained AlexNet model. All fully connected layers of the AlexNet model are discarded, and a new fully connected layer is added after the global average pooling layer. After the new classification network is fine-tuned on our dataset, a CAM visualization is generated by weighting the outputs of the last convolutional layer with parameters from the added fully connected layer (a sketch of this computation is given at the end of this section). Our CAM implementation is built upon Marvin and it will be released.

Due to the simplicity of this classification task, the top-one classification accuracy is 100% (not surprisingly). Class activation mapping results are provided in Fig 3, and here we present the reasons why we claim concepts are learnt: (1) We provide 18 abstract categories, but in order to classify visual inputs into these 18 categories, the network's attention is roughly fixed upon the chessboard grid. This means the concept of the grid emerges in the learnt representation. (2) If we place a piece at the most activated location in Fig 3, that is the right (and legal) move to finish the game. On one hand, this means the concept of the winning rule emerges in the learnt representation. On the other hand, this means the learnt concept can be used to deal with an untaught task (analogous to Jia et al. (2013), Lake et al. (2015) and Higgins et al. (2016), who use generalization ability to illustrate that concepts are learnt). (3) As Fig 3c,e,h,i,j,n,p,q show, both sides could win in the next move if the take-turns rule were violated. However, the network pays attention to the right location, consistent with the rule. For example, in Fig 3j, it seems that placing a black piece at the top-left location would also end the game. However, this move would violate the rule, because there are already more black pieces than white pieces, meaning that it is the white side's turn. This means that the concept of two sides emerges in the learnt representation.

Besides the learnt concepts, we analyze what this experiment provides for the remaining two questions.
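For concreteness, here is a numpy sketch of the CAM computation just described. This is our own illustration, not the released Marvin implementation; choosing `class_idx` as the argmax over the 18 labels is our assumption, since the paper does not spell this choice out.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM as in Zhou et al. (2016): weight the last conv layer's feature
    maps by the added FC layer's weights for one class.

    feature_maps: (C, h, w) activations of the last conv layer, one image
    fc_weights:   (num_classes, C) weights of the FC layer added after GAP
    """
    # Contract the channel axis: sum_c w[class, c] * feature_maps[c, :, :]
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (h, w)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()   # normalize to [0, 1] for visualization
    return cam
```

The resulting low-resolution map would then be upsampled to the 180x180 input before applying the placement protocol sketched in Section 1.1.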
To the second question: the results in Fig 3 show that the methodology of generating labels from one modality (state transfer vectors in our case) to supervise another modality is still applicable. More importantly, we use images as inputs, yet the learnt visual representations contain not only visual saliency information but also untold chess game concepts. To the third question: as Fig 3 shows, most activated regions are empty spaces on the chessboard.

4 EXPERIMENT II: ADDING GRID LINES

Figure 4: Class activation mapping results after grid lines are added.

Since we claim complicated concepts emerge in the learnt visual representations, a natural question is: if the chessboard's and pieces' appearances are changed, does this experiment still work? Thus we design this experiment by adding grid lines to the chessboards when rendering the synthetic data (Fig 4). The intention behind this design is threefold: (1) in this case, the chessboard's appearance is changed; (2) after these lines are added, the concept that there is a chessboard grid is actually implied. Still, we do not think these lines directly provide the concept of the chessboard grid, thus we use the word imply. Whether the network can figure out what these lines mean still remains uncertain; (3) those locations that are completely empty in Experiment I are no longer empty from the perspective of information (though still empty from the perspective of the game rule).

We train the same network on the newly rendered dataset with grid lines and calculate the CAM results in the same way. The results are demonstrated in Fig 4. Generally speaking, the grid lines allow the network to better activate at the location of the right move, making it stand out more on the heat map. What does this mean for the three intentions mentioned in the last paragraph? (1) Firstly, it shows that our experiment is robust to chessboard appearance variance. (2) Secondly, after implying the concept that there is a chessboard grid, the network performs better at paying attention to the location of the right move. Again we compare this phenomenon against how a human baby learns. Although not supported by a psychological experiment, we think that with a chessboard grid it is easier for a human baby to figure out the game rule than without one. (3) Thirdly, the heat-map changes in Fig 4 are not surprising, because after adding those lines, the empty (from the perspective of the game rule) regions contain more gradients for the lower layers of a CNN to collect. However, again, it supports that activating at non-salient regions is NOT trivial.

5 EXPERIMENT III: PIECE APPEARANCE CHANGE

Figure 5: Class activation mapping results after piece appearance is changed.

In this experiment we change the appearance of the pieces by: (1) replacing black boxes with white circles; (2) replacing white boxes with black crosses. Note that in this case the white side moves first. Again we train the same network and visualize with CAM. The results comparison is provided in Fig 6. Further, we add grid lines to the cross/circle chessboard. (A sketch of how such rendering variants could be produced is given below.)
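The appearance variants in Experiments I-III amount to simple image-space changes. Here is a minimal Pillow sketch of how such boards could be rendered; this is our own guess at the procedure, not the released renderer, and the background value, margins and stroke widths are assumptions.

```python
from PIL import Image, ImageDraw

CELL, SIZE = 60, 180   # 3x3 cells on a 180x180 gray-scale canvas, as in the paper

def render(board, style="box", grid_lines=False):
    """board: length-9 sequence with 0 empty, 1 first player, 2 second player."""
    img = Image.new("L", (SIZE, SIZE), 128)   # mid-gray background (assumed)
    draw = ImageDraw.Draw(img)
    if grid_lines:                            # Experiment II variant
        for k in (CELL, 2 * CELL):
            draw.line([(k, 0), (k, SIZE)], fill=0)
            draw.line([(0, k), (SIZE, k)], fill=0)
    for i, piece in enumerate(board):
        if piece == 0:
            continue
        r, c = divmod(i, 3)
        box = [c * CELL + 10, r * CELL + 10,
               (c + 1) * CELL - 10, (r + 1) * CELL - 10]
        if style == "box":                    # Experiment I: black/white boxes
            draw.rectangle(box, fill=0 if piece == 1 else 255)
        elif piece == 1:                      # Experiment III: white circle
            draw.ellipse(box, outline=255, width=3)
        else:                                 # Experiment III: black cross
            draw.line(box, fill=0, width=3)
            draw.line([box[0], box[3], box[2], box[1]], fill=0, width=3)
    return img
```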
6 EXPERIMENT IV: MODEL BEHAVIOR OVER TIME

In order to further demonstrate the non-triviality of the model's behaviors, we design this experiment. We train on the dataset from Experiment I for 1000 iterations and snapshot the parameters at the 500th iteration. The classification accuracy is 100% at the 1000th iteration and 53.13% at the 500th iteration.

Figure 6: Class activation mapping results on true positive samples at 500 iterations (left, 53.13% accuracy) and 1000 iterations (right, 100% accuracy).

Figure 7: We propose two quantitative evaluation protocols: (a) by selecting the most activated patch, we calculate how frequently the representation fires at the correct location; (b) we correlate the representation with an ideal activation map.

The CAM results are shown in Fig 5, in which all samples are true positives. We think this shows that there are two ways to achieve this classification task: (1) by paying attention to the visual patterns formed by the existing pieces; (2) by paying attention to where the next piece should be placed. This experiment shows that at an earlier stage of learning the model's behavior is consistent with the first hypothesis, and after the training is completely done the network can finally fire at the correct location.

7 QUANTITATIVE EVALUATION

We propose two different quantitative evaluation protocols. The first one is representation accuracy (RAC), for which we select the most activated patch and examine whether it is the correct location to end the game. The second one is representation consistency (RCO), which correlates the normalized representation with a normalized ideal activation map. The quantitative comparisons are shown in Table 1, in which NAC stands for network classification accuracy. These results quantitatively support that: (1) the learnt representation can be used to predict the right move at over 70% accuracy; (2) adding grid lines (implying the concept of a chessboard) dramatically improves localization.

Experiment   I (original)   II (grid)   III (piece)   III (piece+grid)   IV (500th)
NAC (%)      100.00         100.00      100.00        100.00             53.13
RAC (%)      71.82          97.25       83.77         99.00              27.87
RCO (10^3)   -8.096         -5.115      -7.751        -4.9321            -10.610
Table 1: Quantitative results.

8 CONCLUSION

The core experiment in this paper is to train a classification CNN on rendered chessboard images under weak labels. After class activation mapping visualization, we analyse and interpret the results against three different backgrounds. Although simple, we argue that our results are enough to show that: (1) a CNN can automatically figure out complicated game-rule concepts in this case; (2) cross-modal supervision for representation learning is still applicable in this case of higher-level semantics; (3) the technique of CAM can activate at non-salient regions, testifying to CNN's capability to collect information from context in an extreme case (where only the context has information). | ByFJkHY4x | Unclear | 3: Clear rejection | The game of tic-tac-toe is considered. 1029 tic-tac-toe board combinations are chosen so that a single move will result in the victory of either the black or the white player. There are 18 possible moves - 2 players x 9 locations. A CNN is trained from a visual rendering of the game board to these 18 possible outputs. The CAM technique is used to visualize the salient regions in the inputs responsible for the prediction that the CNN makes. The authors find that the predictions correspond to the winning board locations.
The authors claim that this:
1. is a very interesting finding.
2. CNN has figured out game rules.
3. Cross-modal supervision is applicable to higher-level semantics.
I don't think (2) can be claimed because the knowledge of game rules is not tested by any experiment. There is only "one" stage of the game considered - i.e. the last move. Further, the results are on the training set itself - the bare minimum requirement of any implicit or explicit representation of game rules is the ability to act in previously unseen states (i.e. generalization). Even if the CNN did generalize, I would avoid making any claims about knowledge of game rules.
For (3), the authors' definition of cross-modal seems to be training from images to game moves. In image classification we go from images --> labels (i.e. between two different domains). We already know CNNs can perform such mappings. CNNs have been used to map images to actions, such as in DQN by Mnih et al., or DDPG by Lillicrap et al., and a lot of other classical work such as ALVINN. It's unclear what points the authors are trying to make.
For (1): how interesting an implicit attention mechanism is remains a subjective matter. The authors claim a difference between the concepts of "what to do" and "what will happen". They claim that by supervising for "what will happen", the CNN can automatically learn about "what to do". This is extensively studied in the model predictive control literature, where the model is "what will happen next", and the model is used to infer a control law - "what to do". However, in the experimental setup presented in the paper, what will happen and what to do seem to be the exact same things.
For further analysis of what the CNN has learnt I would recommend:
(a) Visualizing CAM with respect to incorrect classes. E.g., visualize the CAM with respect to the class where the player would lose (instead of winning).
(b) Split the data into train/val and use the predictions on the val-set for visualization. These would be much more informative about what kind of "generalizable" features the CNN pays attention to.
In summary, understanding why CNNs make the decisions they make is a very interesting area of research. While the emergence of an implicit attention mechanism may be considered an interesting finding by some, many claims made by the authors are not supported by experiments (see comments above).
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
rJo9n9Feg | ICLR.cc/2017/conference | 2017 | Chess Game Concepts Emerge under Weak Supervision: A Case Study of Tic-tac-toe | ["Hao Zhao", "Ming Lu", "Anbang Yao", "Yurong Chen", "Li Zhang"] | This paper explores the possibility of learning chess game concepts under weak supervision with convolutional neural networks, which is a topic that has not been visited to the best of our knowledge. We put this task in three different backgrounds: (1) deep reinforcement learning has shown an amazing capability to learn a mapping from visual inputs to most rewarding actions, without knowing the concepts of a video game. But how could we confirm that the network understands these concepts or it just does not? (2) cross-modal supervision for visual representation learning draws much attention recently. Is this methodology still applicable when it comes to the domain of game concepts and actions? (3) class activation mapping is widely recognized as a visualization technique to help us understand what a network has learnt. Is it possible for it to activate at non-salient regions? With the simplest chess game tic-tac-toe, we report interesting results as answers to those three questions mentioned above. All codes, pre-processed datasets and pre-trained models will be released. | ["Semi-Supervised Learning"]
| BkXHxhEEe | Novel experiments, but the results and significance are not clear | 3: Clear rejection | Summary
===
This paper presents tic-tac-toe as a toy problem for investigating CNNs.
A dataset is created containing tic-tac-toe boards where one player is one
move away from winning and a CNN is trained to label boards according
to (1) the player who can win (2 choices) and (2) the position they may move
to win (9 choices), resulting in 18 labels. The CNN evaluated in this paper
performs perfectly at the task and the paper's goal is to inspect how the
CNN works.
The fundamental mechanism for this inspection is Class Activation
Mapping (CAM) (Zhou et al. 2016), which identifies regions of implicit attention
in the CNN. These implicit attention maps (localization heat maps) are used to
derive actions (which square each player should move to). The attention maps
(1) attend to squares in the tic-tac-toe board rather than arbitrary
blobs, despite the fact that one square in a board has uniform color, and
(2) can be used to pick correct (winning) actions.
These experiments are used to support assertions that the network understands
(1) chess (tic-tac-toe) boards
(2) a rule for winning tic-tac-toe
(3) that there are two players.
Some follow-up experiments indicate similar results under various renderings
of the tic-tac-toe boards and an incomplete training regime.
More Clarifying Questions
===
* I am not quite sure precisely how CAM is implemented here. In the original CAM
one must identify a class of interest to visualize (e.g., cat or dog). I don't
think this paper identifies such a choice. How is one of the 18 possible classes
chosen for creating the CAM visualization and through that visualization
choosing an action?
* How was the test set for the table 1 results created from this dataset?
How many of the final 1029 states were used for test and was the
distribution of labels the same in train and test?
* How is RCO computed? Is rank correlation or Pearson correlation used?
If Pearson correlation is used then it may be good to consider rank correlation,
as argued in "Human Attention in Visual Question Answering: Do Humans and
Deep Networks Look at the Same Regions?" by Das et al. in EMNLP 2016.
In table 1, what does the 10^3 next to RCO mean?
Pros
===
* The proposed method, deriving an action to take from the result of a
visualization technique, is very novel.
* This paper provides an experiment that clearly shows a CNN relying on context
to make accurate predictions.
* The use of a toy tic-tac-toe domain to study attention in CNNs
(implicit or otherwise) is a potentially fruitful setting that may
lead to better understanding of implicit and maybe explicit attention mechanisms.
Cons
===
* This work distinguishes between predictions about "what will happen"
(will the white player win?) and "what to do" (where should the white
player move to win?). The central idea is that generalization from "what will happen"
to "what to do" indicates concept learning (sec. 2.1). Why should an ability to
act be any more indicative of a learned concept than an ability to predict
future states? I see a further issue with the presentation of this approach and
a potential correctness problem:
1. (correctness)
In the specific setting proposed I see no difference between "what to do"
and "what will happen."
Suppose one created labels dictating "what to do" for each example in the
proposed dataset. How would these differ from the labels of "what will happen"
in the proposed dataset? In this case "what will happen" labels include
both player identity (who wins) and board position (which position they move
to win). Wouldn't the "what to do" labels need to indicate board position?
They could also be chosen to indicate player identity, which would make them
identical to the "what will happen" labels (both 18-way softmaxes).
2. (presentation)
I think this distinction would usually be handled by the Reinforcement Learning
framework, but the proposed method is not presented in that framework or
related to an RL based approach. In RL "what will happen" is the reward an
agent will receive for making a particular action and "what to do" is the
action an agent should take. From this point of view, generalization from
"what will happen" to "what to do" is not a novel thing to study.
Alternate models include:
* A deep Q network (Mnih et al. 2015) could predict the value of
every possible action where an action is a (player, board position) tuple.
* The argmax of the current model's softmax could be used as an action
prediction.
The deep Q network approach need not be implemented, but differences between
methods should be explained because of the uniqueness of the proposed approach.
* Comparison to work that uses visualization to investigate deep RL networks
is missing. In particular, other work in RL has used Simonyan et al.
(arXiv 2013) style saliency maps to investigate network behavior. For example,
"Dueling Network Architectures for Deep Reinforcement Learning" by Wang et. al.
in (ICML 2016) uses saliency maps to identify differences between their
state-value and advantage networks. In "Graying the black box:
Understanding DQNs" by Zahavy et. al. (ICML 2016) these saliency maps are
also used to analyze network behavior.
* In section 2.3, saliency maps of Simonyan et al. are said not to be able to
activate on grid squares because they have constant intensity, yet no empirical
or theoretical evidence is provided for this claim.
On a related note, what precisely is the notion of information referenced in
section 2.3 and why is it relevant? Is it entropy of the distribution of pixel
intensities in a patch? To me it seems that any measure which depends only
on one patch is irrelevant because the methods discussed (e.g., saliency maps)
depend on context as well as the intensities within a patch.
* The presentation in the paper would be improved if the results in section 7
were presented along with relevant discussion in preceding sections.
Overall Evaluation
===
The experiments presented here are novel, but I am not sure they are very
significant or offer clear conclusions. The methods and goals are not presented
clearly and lack the broader relevant context mentioned above. Furthermore, I
find the lines of thought mentioned in the Cons section possibly incorrect
or incomplete. As detailed with further clarifying questions, upon closer
inspection I do not see how some aspects of the proposed approach were
implemented, so my opinion may change with further details. | 3: The reviewer is fairly confident that the evaluation is correct |
rJo9n9Feg | ICLR.cc/2017/conference | 2017 | Chess Game Concepts Emerge under Weak Supervision: A Case Study of Tic-tac-toe | ["Hao Zhao", "Ming Lu", "Anbang Yao", "Yurong Chen", "Li Zhang"] | This paper explores the possibility of learning chess game concepts under weak supervision with convolutional neural networks, which is a topic that has not been visited to the best of our knowledge. We put this task in three different backgrounds: (1) deep reinforcement learning has shown an amazing capability to learn a mapping from visual inputs to most rewarding actions, without knowing the concepts of a video game. But how could we confirm that the network understands these concepts or it just does not? (2) cross-modal supervision for visual representation learning draws much attention recently. Is this methodology still applicable when it comes to the domain of game concepts and actions? (3) class activation mapping is widely recognized as a visualization technique to help us understand what a network has learnt. Is it possible for it to activate at non-salient regions? With the simplest chess game tic-tac-toe, we report interesting results as answers to those three questions mentioned above. All codes, pre-processed datasets and pre-trained models will be released. | ["Semi-Supervised Learning"]
Imagine one modifies the network described in Mnihet al. (2015) into another version as Zhou et al. (2016) does. The CNN’s attention may be fixed onthe ball but it is still not enough to support that the network understands the concept of a ball.This work was done when Hao Zhao was an intern at Intel Labs China, supervised by Anbang Yao who isresponsible for correspondence.1Under review as a conference paper at ICLR 2017Figure 1: We raise three questions from application, methodology and technique perspectives re-spectively and provide our answers with a case study of the simplest chess game tic-tac-toe.We propose to use a simple chess game called tic-tac-toe for case study. In order to answer thequestion, we propose a protocol as this: to place a piece where the CNN’s attention is, and examinewhether it is the right move. Of course, the training has to be done under weak supervision, or say,without telling the network what exactly a right move is. We think if this experiment succeeds wecan claim that the network figures out the concepts of: (1) a chess board grid; (2) the winning rule;(3) two sides. Detailed analysis about these three concepts are provided later.1.2 M ETHODOLOGY BACKGROUNDThere have been some works about representation learning with cross-modal supervision recently.Owens et al. (2016) clusters sound statistics into several categories, and uses them as labels to learnvisual representation from images corresponding to these sounds. It quantitatively shows that visualrepresentation learnt in this way is capable of handling challenging computer vision tasks and qual-itatively shows that visual and sound representations are consistent (e.g. babies’ faces correspondto baby cry sound samples). Castrej ́on et al. (2016) goes even further by learning representationsacross five modalities: RGB images, clip art pictures, sketches, texts and spatial texts. Gupta et al.(2016) learns depth image representation with mid-level features extracted from RGB images assupervision, and reports improved RGB-D object detection performance.What is the common point among these works? They generate weak supervision from one modalityand use it to learn representation from another (e.g. to learn what a train looks like from what atrain sounds like or to learn what a chair looks like in depth images from what a chair looks like inRGB images ). During training phase, no concepts about a train or a chair are explicitly modeled.Although there are many other modalities not visited by this methodology, we think the basic ideasbehind these works are same: an abstract concept like a train can be observed in different modalitiesand different representations can be connected.Here comes the question: is this methodology still applicable when it goes beyond the problem oflearning representations from different observations of a same concept? Albanie & Vedaldi (2016)is an example, which tries to relate facial expressions with what happened in a TV show (e.g. if acharacter earns a lot of money, she will be very happy). Although in Albanie & Vedaldi (2016) whathappened is explicitly defined, it still can be regarded as a weak supervision for what this expressionis.Although with the same methodology, the problem studied in this paper addresses even higher se-mantics: to learn what to do under the weak supervision of what will happen (Fig 1b). 
This is sub-stantially different from cross-modal supervision works mentioned above because there is no longera certain abstract concept of object or attribute observed in different modalities. Instead, figuringout the relationship between what to do andwhat will happen needs a higher level of intelligence.2Under review as a conference paper at ICLR 20171.3 T ECHNIQUE BACKGROUNDThe core technique used in this paper is class activation mapping (CAM) as described in Zhou et al.(2016). So leaving out all the backgrounds about playing a chess game or cross-modal supervision,what do our experiments say more than its inventors’? We think we show that CAM can also activateat non-salient regions. CAM helps us to understand where contributes the most to a classificationresult. As Fig 1c shows, the heatmap reveals that the face contributes the most to the result that thenetwork claims it as a person .As has already been shown by Krizhevsky et al. (2012), kernels of lower layers of a CNN capturegradients in an image. Existing CAM experiments tend to activate at salient regions, and this isvery reasonable because there are more gradients and therefore more information (e.g. the face inFig 1c). Here comes the question: could CAM activate at non-salient regions like the empty spaceson a chess board? Our answer is positive as the results (Fig 1d) show that in order to predict whatwill happen in the future, the CNN’s attention is fixed upon texture-free regions.Since we render chessboards as visual inputs without adding noise, those empty spaces are com-pletely empty meaning that: (1) if we take out the activated patch in Fig 1d, all pixels in this patchhave exactly the same value. (2) If we evaluate this patch with quantitative information metric likeentropy, there is no information here. Thus the only reason why these regions are activated is thatthe network collects enough information from these regions’ receptive fields. We argue that this ex-periment (CAM can activate at non-salient regions) testifies (again) CNN’s ability to hierarchicallycollect information from visual inputs.1.4 W HAT THISPAPER IS ABOUTAfter introducing those three backgrounds, we describe our work briefly as: to classify renderedtic-tac-toe chessboards with weak labels and to visualize that the CNN’s attention automaticallyreveals where the next piece should be placed. Learnt representation shows that: (1) the networkknows some concepts of the game that it is not told of; (2) this level of supervision for representationlearning is possible; (3) the technique of class activation mapping can activate at non-salient regions.2 R ELATED WORKS2.1 C ONCEPT LEARNINGConcept learning has different meanings in different contexts, and how to confirm a concept is learntremains an open question. In Jia et al. (2013), a concept is learnt if a generative model is learnt froma small number of positive samples. In Lake et al. (2015), a concept is learnt if a model learnt fromonly one instance can generalize to various tasks. Higgins et al. (2016) claims a concept is learntwhen a model can predict unseen objects’ sizes and positions. To summarize, they evaluate whethera concept is learnt through a model’s generalization ability. In even earlier works like Zhu et al.(2010);Yang et al. (2010), concept learning means a object/attribute classification task dealing withappearance variations, in which a concept is actually already pre-defined.Unlike these works, we investigate the concepts of game rules instead of object/attribute. UnlikeJia et al. 
(2013);Lake et al. (2015);Higgins et al. (2016), we claim a concept is learnt through anovel testing protocol instead of generalization ability. Why generalization ability could show aconcept is learnt? We think the reason is that a model understands a concept if it can use it in morecases. To this end, we argue that our protocol could also show a concept is learnt because the learntrepresentations in our experiments can be used to decide what to do though no rule about what needto be done is provided.2.2 C ROSS -MODAL SUPERVISIONThe literature of cross-model supervision and the differences between this paper and existing onesare already covered in last section. Here we re-claim it briefly: Owens et al. (2016);Castrej ́on et al.(2016);Gupta et al. (2016) learn representations across modalities because actually they are differentobservations of a same (object or attribute) concept. Whether this methodology is applicable for3Under review as a conference paper at ICLR 2017Figure 2: 18 different types of chessboard states and corresponding labels.higher-level concepts like game rules remains an open question and we provide positive answers tothis question.2.3 C LASS ACTIVATION MAPPINGBefore the technique of class activation mapping is introduced by Zhou et al. (2016), pioneeringworks like Simonyan et al. (2014);Zhou et al. (2015) have already shown CNN’s ability to localizeobjects with image-level labels. Although with different techniques, Simonyan et al. (2014);Zhouet al. (2015)’s activation visualization results also focus on salient regions. Unlike these works,we show that class activation mapping can activate at non-salient regions, or say more specifically,completely texture-free regions. Since the activated patch itself provides no information, all dis-criminative information comes from its context. This is another strong evidence to prove CNN’scapability to collect information from receptive fields, as a hierarchical visual model.3 E XPERIMENT I: G AME ENDS IN NEXT MOVEA tic-tac-toe chessboard is a 33grid, and there are two players (black and white in our case). Dueto duality, we generate all training samples assuming the black side takes the first move. The statespace of tic-tac-toe is small consisting of totally 39= 19683 combinations. Among them, manycombinations are illegal such as the one in which all 9 pieces are black. We exhaustively search overthe space according to a recursive simulation algorithm, in which: (1) the chessboard state is denotedby an integer smaller than 19683. (2) every state corresponds to a 9-d vector, with each element cantake a value from this set f0-illegal, 1-black win, 2-white win, 4-tie, 5-uncertain g. We call this 9-dvector a state transfer vector, denoting what will happen if the next legal piece placement happensat according location. (3) generated transfer vectors can predict the existence of a critical move thatwill finish the game in advance. We will release this simulation code.After pruning out illegal states, we collect 4486 possible states in total. Among these samples, wefurther take out 1029 states that a certain side is going to win in the next move. We then transformthese chessboard states into visual representations (gray-scale images at resolution (180 ;180) ). Eachof these 1029 samples is assigned a label according to the state transfer vectors. There are totally 18different labels illustrating 2(sides)9(locations). As demonstrated by Fig 2, we randomly pick asample for each label. 
As mentioned before, the black side takes the first move; thus if the numbers of black and white pieces are equal, the next move is the black side's, and if there is one more black piece, the next move is the white side's.

Although the concepts of two sides and nine locations are coded into the labels, this kind of supervision is still weak supervision, because what we are showing to the algorithm is just 18 abstract categories, as Fig 2 shows. Could an algorithm figure out what it needs to do by observing these visual inputs? We think even for a human baby it is difficult, because no concepts like "this is a game" or "you need to find out how to win" are provided. In the setting of deep reinforcement learning there is at least an objective of achieving a higher score to pursue.

As mentioned before, the method we exploit is to train a classification network on this rendered dataset (Fig 2) and analyze the learnt representations with the technique of class activation mapping. As Zhou et al. (2016) suggests, we add one global average pooling layer after the last convolutional layer of a pre-trained AlexNet model. All fully connected layers of the AlexNet model are discarded, and a new fully connected layer is added after the global average pooling layer. After the new classification network is fine-tuned on our dataset, a CAM visualization is generated by weighting the outputs of the last convolutional layer with parameters from the added fully connected layer. Our CAM implementation is built upon Marvin and it will be released.

Figure 3: Class activation mapping results on our dataset.

Due to the simplicity of this classification task, the top-1 classification accuracy is 100% (not surprisingly). Class activation mapping results are provided in Fig 3, and here we present the reasons why we claim concepts are learnt: (1) We provide 18 abstract categories, but in order to classify visual inputs into these 18 categories, the network's attention is roughly fixed upon the chessboard grid. This means the concept of grid emerges in the learnt representation. (2) If we place a piece at the most activated location in Fig 3, that will be the right (and legal) move to finish the game. On one hand, this means the concept of the winning rule emerges in the learnt representation. On the other hand, this means this learnt concept can be used to deal with an untaught task (analogous to Jia et al. (2013), Lake et al. (2015), and Higgins et al. (2016), who use generalization ability to illustrate that concepts are learnt). (3) As Fig 3(c,e,h,i,j,n,p,q) show, both sides could win in the next move if we violated the take-turns rule. However, the network pays attention to the right location, consistent with the rule. For example, in Fig 3j, it seems that placing a black piece at the top-left location would also end the game. However, this move would violate the rule, because there are already more black pieces than white pieces, meaning that it is the white side's turn. This means that the concept of two sides emerges in the learnt representation.
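For concreteness, the CAM step described above (Zhou et al., 2016) reduces to a class-specific weighted sum of the last convolutional layer's feature maps. The NumPy sketch below is our own minimal rendering of that computation, not the authors' Marvin-based implementation; the function names are ours, and upsampling the heatmap to the input resolution is omitted.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM for one class.
    feature_maps: (H, W, C) activations of the last convolutional layer.
    fc_weights:   (C, num_classes) weights of the FC layer that follows
                  global average pooling (no other FC layers in between).
    Returns an (H, W) heatmap, normalised to [0, 1] for visualisation."""
    cam = np.tensordot(feature_maps, fc_weights[:, class_idx], axes=([2], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def predict(feature_maps, fc_weights, fc_bias):
    """The classification itself: global average pooling, then the FC layer."""
    gap = feature_maps.mean(axis=(0, 1))      # (C,)
    logits = gap @ fc_weights + fc_bias       # (num_classes,)
    return int(np.argmax(logits))
```

Because global average pooling commutes with the linear layer, a class's logit equals the spatial average of its unnormalised CAM, which is why the heatmap directly explains the classification decision.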
Beyond the learnt concepts, we analyze what this experiment provides for the remaining two questions. To the second question: the results in Fig 3 show that the methodology of generating labels from one modality (state transfer vectors in our case) to supervise another modality is still applicable. More importantly, we use images as inputs, yet the learnt visual representations contain not only visual saliency information but also untold chess game concepts. To the third question: as Fig 3 shows, most activated regions are empty spaces on the chessboard.

4 EXPERIMENT II: ADDING GRID LINES

Since we claim complicated concepts emerge in the learnt visual representations, a natural question is: if the chessboard's and pieces' appearances are changed, does this experiment still work? Thus we design this experiment by adding grid lines to the chessboards when rendering synthetic data (Fig 4).

Figure 4: Class activation mapping results after grid lines are added.

The intention behind this design is three-fold: (1) in this case, the chessboard's appearance is changed; (2) after these lines are added, the concept that there is a chessboard grid is actually implied. Still, we do not think these lines directly provide the concept of a chessboard grid, thus we use the word imply. Whether the network can figure out what these lines mean still remains uncertain; (3) those locations that are completely empty in Experiment I are no longer empty from the perspective of information (though still empty from the perspective of the game rule).

We train the same network on the newly rendered dataset with grid lines and calculate CAM results in the same way. The results are demonstrated in Fig 4. Generally speaking, the grid lines allow the network to better activate at the location of the right move, making it stand out more in the heatmap. What does this mean for the three intentions mentioned in the last paragraph? (1) Firstly, it shows that our experiment is robust to chessboard appearance variation. (2) Secondly, after implying the concept that there is a chessboard grid, the network performs better at paying attention to the location of the right move. Again we compare this phenomenon against how a human baby learns: although not supported by a psychological experiment, we think that with a chessboard grid a human baby would find it easier to figure out the game rule than without. (3) Thirdly, the heatmap changes in Fig 4 are not surprising, because after adding those lines, the empty (from the perspective of the game rule) regions contain more gradients for the lower layers of a CNN to collect. However, again it supports that activating at non-salient regions is NOT trivial.

5 EXPERIMENT III: PIECE APPEARANCE CHANGE

In this experiment we change the appearance of the pieces by: (1) replacing black boxes with white circles; (2) replacing white boxes with black crosses. Note that in this case the white side moves first. Again we train the same network and visualize with CAM.

Figure 5: Class activation mapping results after piece appearance is changed.

The results comparison is provided in Fig 5. Further, we add grid lines to the cross/circle chessboard.

6 EXPERIMENT IV: MODEL BEHAVIOR OVER TIME

In order to further demonstrate the non-triviality of the model's behavior, we design this experiment. We train on the dataset from Experiment I for 1000 iterations and snapshot the parameters at the 500th iteration. The classification accuracy is 100% at the 1000th iteration and 53.13% at the 500th iteration.
Figure 6: Class activation mapping results on true positive samples at 500 iterations (left, 53.13% accuracy) and 1000 iterations (right, 100% accuracy).

Figure 7: We propose two quantitative evaluation protocols: (a) by selecting the most activated patch, we calculate how frequently the representation fires at the correct location; (b) we correlate the representation with an ideal activation map.

The CAM results are shown in Fig 6, in which all samples are true positives. We think this shows that there are two ways to achieve this classification task: (1) by paying attention to the visual patterns formed by the existing pieces; (2) by paying attention to where the next piece should be placed. This experiment shows that at an earlier stage of learning the model's behavior is consistent with the first hypothesis, and after the training is completely done the network can finally fire at the correct location.

7 QUANTITATIVE EVALUATION

We propose two different quantitative evaluation protocols. The first one is representation accuracy (RAC), for which we select the most activated patch and examine whether it is the correct location to end the game. The second one is representation consistency (RCO), which correlates the normalized representation with a normalized ideal activation map. The quantitative comparisons are shown in Table 1, in which NAC stands for network classification accuracy. These results quantitatively support that: (1) the learnt representation can be used to predict the right move with over 70% accuracy; (2) adding grid lines (implying the concept of a chessboard) dramatically improves localization.

Table 1: Quantitative results.

Experiment      I (original)  II (grid)  III (piece)  III (piece+grid)  IV (500th)
NAC (%)         100.00        100.00     100.00       100.00            53.13
RAC (%)         71.82         97.25      83.77        99.00             27.87
RCO (x10^-3)    -8.096        -5.115     -7.751       -4.9321           -10.610

8 CONCLUSION

The core experiment in this paper is to train a classification CNN on rendered chessboard images under weak labels. After class activation mapping visualization, we analyze and interpret the results in three different backgrounds. Although simple, we argue that our results are enough to show that: (1) a CNN can automatically figure out complicated game rule concepts in this case; (2) cross-modal supervision for representation learning is still applicable in this case of higher-level semantics; (3) the technique of CAM can activate at non-salient regions, attesting to a CNN's capability to collect information from context in an extreme case (only the context has information). | Hk5euvf4g | Still not sure what to take away from these experiments | 3: Clear rejection | 1029 tic-tac-toe boards are rendered (in various ways). These 1029 boards are legal boards where the next legal play can end the game. There are 18 categories of such boards -- 9 for the different locations of the next play, and 2 for the color of the next play. The supervision is basically saying "If you place a black square in the middle right, black will win" or "if you place a white square in the upper left, white will win". A CNN is trained to predict these 18 categories and can do so with 100% accuracy.
The focus of the paper is using Zhou et al.'s Class Activation Mapping to show where the CNN focuses when making its decision. As I understand it, an input to CAM is the class of interest. So let's say it is class 1 (black wins with a play to the bottom right square, if I've deciphered figure 2 correctly; Figure 2 should really be clearer about what each class is). So we ask CAM to determine the area of focus of the CNN for deciding whether class 1 is exhibited. The focus ends up being on the empty bottom right square (because certainly you can't exhibit class 1 if the bottom right square is occupied). The CNN also needs to condition its decision on other parts of the board -- it needs to know whether there will be 3 in a row from some direction. But maybe that conditioning is weaker?
That's kind of interesting but I'm not sure about the deeper statements about discovering game rules that the paper hints at. I'm also not sure about the connection of this work to weakly supervised learning or multi-modal learning.
The paper is pretty well written, overall, with some grammatical mistakes, but I simply don't see a surprising discovery in this work.
I also have some concerns about how contrived this scenario is -- using a big, expressive CNN for such a simple game domain and using a particular CNN visualization method.
I am not an expert in reinforcement learning (which isn't happening in this paper, but is in related works on CNN game playing), so maybe I'm not appreciating the paper appropriately. | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
SkhU2fcll | ICLR.cc/2017/conference | 2017 | Deep Multi-task Representation Learning: A Tensor Factorisation Approach | ["Yongxin Yang", "Timothy M. Hospedales"] | Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. | ["deep", "representation learning", "tensor factorisation", "representation", "contemporary", "methods", "linear models", "setting", "shallow", "era"] | ABSTRACT

Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.

1 INTRODUCTION

The paradigm of multi-task learning is to learn multiple related tasks simultaneously so that knowledge obtained from each task can be re-used by the others. Early work in this area focused on neural network models (Caruana, 1997), while more recent methods have shifted focus to kernel methods, sparsity and low-dimensional task representations of linear models (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012). Nevertheless, given the impressive practical efficacy of contemporary deep neural networks (DNNs) in many important applications, we are motivated to revisit MTL from a deep learning perspective.

While the machine learning community has focused on MTL for shallow linear models recently, applications have continued to exploit neural network MTL (Zhang et al., 2014; Liu et al., 2015). The typical design pattern dates back at least 20 years (Caruana, 1997): define a DNN with shared lower representation layers, which then forks into separate layers and losses for each task. The sharing structure is defined manually: full sharing up to the fork, and full separation after the fork. However, this complicates DNN architecture design because the user must specify the sharing structure: How many task-specific layers? How many task-independent layers? How to structure sharing if there are many tasks of varying relatedness?

In this paper we present a method for end-to-end multi-task learning in DNNs.
This contribution can be seen as generalising shallow MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012) to learning how to share at every layer of a deep network; or as learning the sharing structure for deep MTL (Caruana, 1997; Zhang et al., 2014; Spieckermann et al., 2014; Liu et al., 2015), which currently must be defined manually on a problem-by-problem basis.

Before proceeding, it is worth explicitly distinguishing some different problem settings, which have all been loosely referred to as MTL in the literature. Homogeneous MTL: Each task corresponds to a single output. For example, MNIST digit recognition is commonly used to evaluate MTL algorithms by casting it as 10 binary classification tasks (Kumar & Daumé III, 2012). Heterogeneous MTL: Each task corresponds to a unique set of output(s) (Zhang et al., 2014). For example, one may want to simultaneously predict a person's age (task one: multi-class classification or regression) as well as identify their gender (task two: binary classification) from a face image.

In this paper, we propose a multi-task learning method that works in all these settings. The key idea is to use tensor factorisation to divide each set of model parameters (i.e., both FC weight matrices and convolutional kernel tensors) into shared and task-specific parts. It is a natural generalisation of shallow MTL methods that explicitly or implicitly are based on matrix factorisation (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Daumé III, 2007). As linear methods, these typically require pre-engineered features. In contrast, as a deep network, our generalisation can learn directly from raw image data, determining sharing structure in a layer-wise fashion. For the simplest NN architecture -- no hidden layer, single output -- our method reduces to matrix-based ones; therefore matrix-based methods including (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Daumé III, 2007) are special cases of ours.

2 RELATED WORK

Multi-Task Learning. Most contemporary MTL algorithms assume that the input and model are both D-dimensional vectors. The models of T tasks can then be stacked into a D x T sized matrix W. Despite different motivations and implementations, many matrix-based MTL methods work by placing constraints on W, for example posing an $\ell_{2,1}$ norm on W to encourage low-rank W (Argyriou et al., 2008). Similarly, Kumar & Daumé III (2012) factorise W as $W = LS$, i.e., assigning a lower rank as a hyper-parameter. An earlier work (Evgeniou & Pontil, 2004) proposes that the linear model for each task t can be written as $w_t = \hat{w}_t + \hat{w}_0$. This is the factorisation $L = [\hat{w}_0, \hat{w}_1, \dots, \hat{w}_T]$ and $S = [\mathbf{1}_{1\times T}; I_T]$. In fact, such matrix factorisation encompasses many MTL methods. E.g., Xue et al. (2007) assume $S_{:,i}$ (the i-th column of S) is a unit vector generated by a Dirichlet Process, and Passos et al. (2012) model W using linear factor analysis with an Indian Buffet Process (Griffiths & Ghahramani, 2011) prior on S.

Tensor Factorisation. In deep learning, tensor factorisation has been used to exploit factorised tensors' fewer parameters than the original (e.g., 4-way convolutional kernel) tensor, and thus compress and/or speed up the model, e.g., (Lebedev et al., 2015; Novikov et al., 2015). For shallow linear MTL, tensor factorisation has been used to address problems where tasks are described by multiple independent factors rather than merely indexed by a single factor (Yang & Hospedales, 2015).
Here the D-dimensional linear models for all unique tasks stack into a tensor W, of e.g. $D \times T_1 \times T_2$ in the case of two task factors. Knowledge sharing is then achieved by imposing tensor norms on W (Romera-paredes et al., 2013; Wimalawarne et al., 2014). Our framework factors tensors for the different reason that, for DNN models, parameters include convolutional kernels (N-way tensors) or $D_1 \times D_2$ FC layer weight matrices (2-way tensors). Stacking up these parameters for many tasks results in $D_1 \times \dots \times D_N \times T$ tensors within which we share knowledge through factorisation.

Heterogeneous MTL and DNNs. Some studies consider heterogeneous MTL, where tasks may have different numbers of outputs (Caruana, 1997). This differs from the previously discussed studies (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Bonilla et al., 2007; Jacob et al., 2009; Kumar & Daumé III, 2012; Romera-paredes et al., 2013; Wimalawarne et al., 2014), which implicitly assume that each task has a single output. Heterogeneous MTL typically uses neural networks with multiple sets of outputs and losses. E.g., Huang et al. (2013) propose a shared-hidden-layer DNN model for multilingual speech processing, where each task corresponds to an individual language. Zhang et al. (2014) use a DNN to find facial landmarks (regression) as well as recognise facial attributes (classification); while Liu et al. (2015) propose a DNN for query classification and information retrieval (ranking for web search). A key commonality of these studies is that they all require a user-defined parameter sharing strategy. A typical design pattern is to use shared layers (same parameters) for the lower layers of the DNN and then split (independent parameters) for the top layers. However, there is no systematic way to make such design choices, so researchers usually rely on trial-and-error, further complicating the already somewhat dark art of DNN design. In contrast, our method learns where and how much to share representation parameters across the tasks, hence significantly reducing the space of DNN design choices.

Parametrised DNNs. Our MTL approach is a parameterised DNN (Sigaud et al., 2015), in that DNN weights are dynamically generated given some side information -- in the case of MTL, given the task identity. In a related example of speaker-adaptive speech recognition (Tan et al., 2016), there may be several clusters in the data (e.g., gender, acoustic conditions), and each speaker's model could be a linear combination of these latent task/cluster models. They model each speaker i's weight matrix $W^{(i)}$ as a sum of K base models $\tilde{W}$, i.e., $W^{(i)} = \sum_{p=1}^{K} \lambda^{(i)}_p \tilde{W}^{(p)}$. The difference between speakers/tasks comes from $\lambda$, and the base models are shared. An advantage of this is that, when new data come, one can choose to re-train the $\lambda$ parameters only, and keep $\tilde{W}$ fixed. This will significantly reduce the number of parameters to learn, and consequently the required training data. Beyond this, Yang & Hospedales (2015) show that it is possible to train another neural network to predict those $\lambda$ values from some abstract metadata. Thus a model for an unseen task can be generated on-the-fly with no training instances, given an abstract description of the task. The techniques developed here are compatible with both these ideas of generating models with minimal or no effort.

3 METHODOLOGY

3.1 PRELIMINARIES

We first recap some tensor factorisation basics before explaining how to factorise DNN weight tensors for multi-task representation learning.
An N-way tensor W with shape $D_1 \times D_2 \times \dots \times D_N$ is an N-dimensional array containing $\prod_{n=1}^{N} D_n$ elements. Scalars, vectors, and matrices can be seen as 0-, 1-, and 2-way tensors respectively, although the term tensor is usually used for 3-way or higher. A mode-n fibre of W is a $D_n$-dimensional vector obtained by fixing all but the n-th index. The mode-n flattening $W_{(n)}$ of W is the matrix of size $D_n \times \prod_{i \neq n} D_i$ constructed by concatenating all of the $\prod_{i \neq n} D_i$ mode-n fibres along columns.

The dot product of two tensors is a natural extension of the matrix dot product, e.g., if we have a tensor A of size $M_1 \times M_2 \times P$ and a tensor B of size $P \times N_1 \times N_2 \times \dots$, the tensor dot product $A \bullet B$ will be a tensor of size $M_1 \times M_2 \times N_1 \times N_2 \times \dots$, obtained by the matrix dot product $A_{(-1)}^{T} B_{(1)}$ and reshaping.[1] More generally, the tensor dot product can be performed along specified axes, $A \bullet_{(i,j)} B = A_{(i)}^{T} B_{(j)}$ followed by reshaping. Here the subscripts indicate the axes of A and B at which the dot product is performed. E.g., when A is of size $M_1 \times P \times M_3 \times \dots \times M_I$ and B is of size $N_1 \times N_2 \times P \times \dots \times N_J$, then $A \bullet_{(2,3)} B$ is a tensor of size $M_1 \times M_3 \times \dots \times M_I \times N_1 \times N_2 \times \dots \times N_J$.

[1] We slightly abuse '-1' as referring to the last axis of the tensor.

Matrix-based Knowledge Sharing. Assume we have T linear models (tasks) parametrised by D-dimensional weight vectors, so the collection of all models forms a size D x T matrix W. One commonly used MTL approach (Kumar & Daumé III, 2012) is to place a structure constraint on W, e.g., $W = LS$, where L is a D x K matrix and S is a K x T matrix. This factorisation recovers a shared factor L and a task-specific factor S. One can see the columns of L as latent basis tasks, and the model $w^{(i)}$ for the i-th task is the linear combination of those latent basis tasks with task-specific information $S_{:,i}$:

$w^{(i)} := W_{:,i} = L S_{:,i} = \sum_{k=1}^{K} L_{:,k} S_{k,i}$   (1)

From Single to Multiple Outputs. Consider extending this matrix factorisation approach to the case of multiple outputs. The model for each task is then a $D_1 \times D_2$ matrix, for $D_1$ input and $D_2$ output dimensions. The collection of all those matrices constructs a $D_1 \times D_2 \times T$ tensor. A straightforward extension of Eq. 1 to this case is

$\mathcal{W}^{(i)} := \mathcal{W}_{:,:,i} = \sum_{k=1}^{K} L_{:,:,k} S_{k,i}$   (2)

This is equivalent to imposing the same structural constraint on $W_{(3)}^{T}$ (the transposed mode-3 flattening of W). It is important to note that this allows knowledge sharing across the tasks only; i.e., knowledge sharing is only across tasks, not across dimensions within a task. However, it may be that the knowledge learned in the mapping to one output dimension is useful to the others within one task. E.g., consider recognising photos of handwritten and print digits: it may be useful to share across handwritten-print, as well as across different digits within each. In order to support general knowledge sharing across both tasks and outputs within tasks, we propose to use more general tensor factorisation techniques. Unlike for matrices, there are multiple definitions of tensor factorisation, and we use Tucker (Tucker, 1966) and Tensor Train (TT) (Oseledets, 2011) decompositions.
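As a concrete illustration of Eqs. 1 and 2, the NumPy sketch below composes per-task models from a shared factor L and a task-specific factor S; the dimensions are hypothetical and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
D, T, K = 8, 5, 3            # input dim, number of tasks, number of latent tasks

# Eq. 1: single-output tasks. Columns of L are latent basis tasks.
L = rng.normal(size=(D, K))
S = rng.normal(size=(K, T))
W = L @ S                    # (D, T); W[:, i] is task i's weight vector
w_2 = L @ S[:, 2]            # model for task i = 2

# Eq. 2: D2 outputs per task. L becomes a (D, D2, K) tensor; sharing is
# still across tasks only, via the same S.
D2 = 4
L3 = rng.normal(size=(D, D2, K))
W3 = np.einsum('dok,kt->dot', L3, S)     # (D, D2, T) stack of task matrices
assert np.allclose(W3[:, :, 2], np.einsum('dok,k->do', L3, S[:, 2]))
```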
3.2 TENSOR FACTORISATION FOR KNOWLEDGE SHARING

Tucker Decomposition. Given an N-way tensor of size $D_1 \times D_2 \times \dots \times D_N$, Tucker decomposition outputs a core tensor $\mathcal{S}$ of size $K_1 \times K_2 \times \dots \times K_N$ and N matrices $U^{(n)}$ of size $D_n \times K_n$, such that

$\mathcal{W}_{d_1,d_2,\dots,d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_N=1}^{K_N} \mathcal{S}_{k_1,k_2,\dots,k_N} U^{(1)}_{d_1,k_1} U^{(2)}_{d_2,k_2} \cdots U^{(N)}_{d_N,k_N}$   (3)

$\mathcal{W} = \mathcal{S} \bullet_{(1,2)} U^{(1)} \bullet_{(1,2)} U^{(2)} \cdots \bullet_{(1,2)} U^{(N)}$   (4)

Tucker decomposition is usually implemented by an alternating least squares (ALS) method (Kolda & Bader, 2009). However, Lathauwer et al. (2000) treat it as a higher-order singular value decomposition (HOSVD), which is more efficient to solve: $U^{(n)}$ is exactly the U matrix from the SVD of the mode-n flattening $W_{(n)}$ of W, and the core tensor $\mathcal{S}$ is obtained by

$\mathcal{S} = \mathcal{W} \bullet_{(1,1)} U^{(1)} \bullet_{(1,1)} U^{(2)} \cdots \bullet_{(1,1)} U^{(N)}$   (5)

Tensor Train Decomposition. Tensor Train (TT) decomposition outputs 2 matrices $U^{(1)}$ and $U^{(N)}$ of size $D_1 \times K_1$ and $K_{N-1} \times D_N$ respectively, and $(N-2)$ 3-way tensors $U^{(n)}$ of size $K_{n-1} \times D_n \times K_n$. The elements of W can be computed by

$\mathcal{W}_{d_1,d_2,\dots,d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_{N-1}=1}^{K_{N-1}} U^{(1)}_{d_1,k_1} U^{(2)}_{k_1,d_2,k_2} U^{(3)}_{k_2,d_3,k_3} \cdots U^{(N)}_{k_{N-1},d_N}$   (6)

$= U^{(1)}_{d_1,:} \, U^{(2)}_{:,d_2,:} \, U^{(3)}_{:,d_3,:} \cdots U^{(N)}_{:,d_N}$   (7)

$\mathcal{W} = U^{(1)} \bullet U^{(2)} \bullet \cdots \bullet U^{(N)}$   (8)

where $U^{(n)}_{:,d_n,:}$ is a matrix of size $K_{n-1} \times K_n$ sliced from $U^{(n)}$ with the second axis fixed at $d_n$. The TT decomposition is typically realised with a recursive SVD-based solution (Oseledets, 2011).

Knowledge Sharing. If the final axis of the input tensor above indexes tasks, i.e. if $D_N = T$, then the last factor $U^{(N)}$ in both decompositions encodes a matrix of task-specific knowledge, and the other factors encode shared knowledge.

3.3 DEEP MULTI-TASK REPRESENTATION LEARNING

To realise deep multi-task representation learning (DMTRL), we learn one DNN per task, each with the same architecture.[2] However, each corresponding layer's weights are generated with one of the knowledge sharing structures in Eq. 2, Eq. 4 or Eq. 8. It is important to note that we apply these 'right-to-left' in order to generate weight tensors with the specified sharing structure, rather than actually applying Tucker or TT to decompose an input tensor. In the forward pass, we synthesise weight tensors W and perform inference as usual, so the method can be thought of as tensor composition rather than decomposition.

Our weight generation (constructing tensors from smaller pieces) does not introduce non-differentiable terms, so our deep multi-task representation learner is trainable via standard backpropagation. Specifically, in the backward pass over FC layers, rather than directly learning the 3-way tensor W, our methods learn either $\{S, U_1, U_2, U_3\}$ (DMTRL-Tucker, Eq. 4), $\{U_1, U_2, U_3\}$ (DMTRL-TT, Eq. 8), or in the simplest case $\{L, S\}$ (DMTRL-LAF,[3] Eq. 2).

[2] Except heterogeneous MTL, where the output layer is necessarily unshared due to different dimensionality.
[3] LAF refers to Last Axis Flattening.

Figure 1: Illustrative example with two tasks corresponding to two neural networks in homogeneous (single output) and heterogeneous (different output dimension) cases. Weight layers grouped by solid rectangles are tied across networks. Weight layers grouped by dashed rectangles are softly shared across networks with our method. Ungrouped weights are independent. Homogeneous MTL Shallow: Left is STL (two independent networks); right is MTL. In the case of vector input and no hidden layer, our method is equivalent to conventional matrix-based MTL methods. Homogeneous MTL Deep: STL (left) is independent networks. User-defined MTL (UD-MTL) selects layers to share/separate. Our DMTRL learns sharing at every layer. Heterogeneous MTL: UD-MTL selects layers to share/separate. DMTRL learns sharing at every shareable layer.
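The 'composition' direction of Eqs. 4 and 8 is easy to state in code. The sketch below is our own NumPy illustration with hypothetical shapes, not the released implementation; in DMTRL these factors would be trainable TensorFlow variables and the composed tensor is used in the forward pass.

```python
import numpy as np

def tt_compose(cores):
    """Eq. 8: compose an N-way tensor from TT cores
    [U1 (D1,K1), U2 (K1,D2,K2), ..., UN (K_{N-1},DN)]."""
    W = cores[0]
    for core in cores[1:]:
        W = np.tensordot(W, core, axes=([-1], [0]))   # contract adjacent ranks
    return W

def tucker_compose(core, factors):
    """Eq. 4: multiply each mode n of the core by its factor U(n) of shape (Dn, Kn)."""
    W = core
    for n, U in enumerate(factors):
        W = np.moveaxis(np.tensordot(U, W, axes=([1], [n])), 0, n)
    return W

# An FC layer shared across T tasks: weights form a (D1, D2, T) tensor,
# whose last factor carries the task-specific knowledge.
D1, D2, T = 6, 4, 3
tt_cores = [np.random.randn(D1, 2), np.random.randn(2, D2, 2), np.random.randn(2, T)]
W = tt_compose(tt_cores)              # (D1, D2, T)
W_task0 = W[:, :, 0]                  # synthesised weight matrix for task 0

core = np.random.randn(2, 2, 2)
factors = [np.random.randn(D1, 2), np.random.randn(D2, 2), np.random.randn(T, 2)]
W2 = tucker_compose(core, factors)    # also (D1, D2, T)
```

Since every operation is a differentiable tensor contraction, gradients flow from the task losses back into both the shared and the task-specific factors, which is exactly what makes the sharing structure learnable end-to-end.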
Besides FC layers, contemporary DNN designs often exploit convolutional layers. Those layers usually contain kernel filter parameters that are 3-way tensors of size $H \times W \times C$ (where H is height, W is width, and C is the number of input channels) or 4-way tensors of size $H \times W \times C \times M$, where M is the number of filters in this layer (i.e., the number of output channels). The proposed methods naturally extend to convolutional layers, as convolution just adds more axes on the left-hand side. E.g., the collection of parameters from a given convolutional layer of T neural networks forms a tensor of shape $H \times W \times C \times M \times T$.

These knowledge sharing strategies provide a way to softly share parameters across the corresponding layers of each task's DNN: where, what, and how much to share are learned from data. This is in contrast to the conventional deep-MTL approach of manually selecting a set of layers to undergo hard parameter sharing (by tying weights so each task uses exactly the same weight matrix/tensor for the corresponding layer (Zhang et al., 2014; Liu et al., 2015)) and a set of layers to be completely separate (by using independent weight matrices/tensors). In contrast, our approach benefits from: (i) automatically learning this sharing structure from data rather than requiring user trial and error, and (ii) smoothly interpolating between fully shared and fully segregated layers, rather than a hard switching between these states. An illustration of the proposed framework for different problem settings can be found in Fig. 1.

4 EXPERIMENTS

Implementation Details. Our method is implemented with TensorFlow (Abadi et al., 2015); the code is released on GitHub.[4] For DMTRL-Tucker, DMTRL-TT, and DMTRL-LAF, we need to assign the rank of each weight tensor. The DNN architecture itself may be complicated and so can benefit from different ranks at different layers, but grid search is impractical. However, since both Tucker and TT decomposition methods have SVD-based solutions, and vanilla SVD is directly applicable to DMTRL-LAF, we can initialise the model and set the ranks as follows: first train the DNNs independently in single-task learning mode; then pack the layer-wise parameters as the input for tensor decomposition. When SVD is applied, set a threshold for the relative error so SVD will pick the appropriate rank. Thus our method needs only a single hyper-parameter, the maximum reconstruction error (we set this threshold to 10% throughout), that indirectly specifies the ranks of every layer. Note that training from random initialisation also works, but the STL-based initialisation makes rank selection easy and transparent. Nevertheless, like (Kumar & Daumé III, 2012), the framework is not sensitive to the rank choice so long as the ranks are big enough. If random initialisation is desired, to eliminate the pre-training requirement, good practice is to first initialise the parameter tensors by a suitable random weight distribution, then do decomposition, and use the decomposed values for initialising the factors (the real learnable parameters in our framework). In this way, the resulting re-composed tensors will have approximately the intended distribution. Our sharing is applied to weight parameters only; bias terms are not shared. Apart from initialisation, decomposition is not used anywhere.

[4] https://github.com/wOOL/DMTRL
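A sketch of the SVD-based rank selection just described, shown for the simplest (LAF) case: this is our own illustration with made-up shapes, not the released code, and the Tucker/TT variants would apply the same threshold within their HOSVD/TT-SVD solutions.

```python
import numpy as np

def rank_for_error(M, eps=0.10):
    """Smallest rank k whose truncated SVD satisfies
    ||M - M_k||_F / ||M||_F <= eps."""
    s = np.linalg.svd(M, compute_uv=False)
    tail = np.sqrt(np.maximum(0.0, 1.0 - np.cumsum(s ** 2) / np.sum(s ** 2)))
    return int(np.argmax(tail <= eps)) + 1

# Stack T single-task FC weight matrices (D1 x D2 each) into a (D1*D2, T)
# matrix, pick K by the error threshold, and split into the initial L and S.
D1, D2, T = 20, 10, 5
W_stl = np.random.randn(D1 * D2, T) @ np.diag([5.0, 3.0, 1.0, 0.2, 0.1])
K = rank_for_error(W_stl, eps=0.10)
U, s, Vt = np.linalg.svd(W_stl, full_matrices=False)
L_init, S_init = U[:, :K] * s[:K], Vt[:K]     # W_stl is approximately L_init @ S_init
```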
4.1 HOMOGENEOUS MTL

Dataset, Settings and Baselines. We use MNIST handwritten digits. The task is to recognise digit images zero to nine. When this dataset is used for the evaluation of MTL methods, ten 1-vs-all binary classification problems usually define ten tasks (Kumar & Daumé III, 2012). The dataset has a given train (60,000 images) and test (10,000 images) split. Each instance is a monochrome image of size 28 x 28 x 1.

We use a modified LeNet (LeCun et al., 1998) as the CNN architecture. The first convolutional layer has 32 filters of size 5 x 5, followed by 2 x 2 max pooling. The second convolutional layer has 64 filters of size 4 x 4, and again 2 x 2 max pooling. After these two convolutional layers, two fully connected layers with 512 and 1 output(s) are placed sequentially. The convolutional and first FC layers use the ReLU activation function $f(x) = \max(x, 0)$. We use the hinge loss, $\ell(y) = \max(0, 1 - \hat{y} \cdot y)$, where $y \in \{-1, +1\}$ is the true label and $\hat{y}$ is the output of each task's neural network.

Conventional matrix-based MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Romera-paredes et al., 2013; Wimalawarne et al., 2014) are linear models taking vector input only, so they need a preprocessing step that flattens the image into a vector, and they typically reduce the dimension by PCA. As per our motivation for studying deep MTL, our methods decisively outperform such shallow linear baselines. Thus, to find a stronger MTL competitor, we instead search user-defined architectures for deep-MTL parameter sharing (cf. Zhang et al., 2014; Liu et al., 2015; Caruana, 1997). In all of the four parametrised layers (pooling has no parameters), we set the first N ($1 \le N \le 3$) to be hard shared.[5] We then use cross-validation to select among the three user-defined MTL architectures; the best option is N = 3, i.e., the first three layers are fully shared (we denote this model UD-MTL). For our methods, all four parametrised layers are softly shared with the different factorisation approaches.

[5] This is not strictly all possible user-defined sharing options. For example, another possibility is that the first convolutional layer and the first FC layer could be fully shared, with the second convolutional layer being independent (task specific). However, this is against the intuition that lower/earlier layers are more task-agnostic, and later layers more task-specific. Note that sharing the last layer is technically possible but not intuitive, and in any case not meaningful unless at least one early layer is unshared, as the tasks are different.

To evaluate the different MTL methods and a baseline of single-task learning (STL), we take ten different fractions of the given 60K training split, train the model, and test on the 10K testing split. For each fraction, we repeat the experiment 5 times with randomly sampled training data. We report two performance metrics: (1) the mean error rate of the ten binary classification problems, and (2) the error rate of recognising a digit by ranking each task's 1-vs-all output (multi-class classification error).

Results. As we can see in Fig. 2, all MTL approaches outperform STL, and the advantage is more significant when the training data is small. The proposed methods DMTRL-TT and DMTRL-Tucker outperform the best user-defined MTL when the training data is very small, and their performance is comparable when the training data is large.

Further Discussion. For a slightly unfair comparison, in the case of binary classification with 1000 training examples, shallow matrix-based MTL methods with PCA features (Kang et al., 2011; Kumar & Daumé III, 2012) reported 14.0%/13.4% error rates.
Figure 2: Homogeneous MTL: digit recognition on the MNIST dataset. Each digit provides a task.

With the same amount of data, our methods have error rates below 6%. This shows the importance of our deep end-to-end multi-task representation learning contribution versus conventional shallow MTL. Since the error rates in (Kang et al., 2011; Kumar & Daumé III, 2012) were produced on a private subset of the MNIST dataset with PCA representations only, to ensure a direct comparison, we implement several classic MTL methods and compare them in Appendix A. For readers interested in the connection to model capacity (number of parameters), we present further analysis in Appendix B.

4.2 HETEROGENEOUS MTL: FACE ANALYSIS

Dataset, Settings and Baselines. AdienceFaces (Eidinger et al., 2014) is a large-scale face image dataset with labels for each person's gender and age group. We use this dataset for the evaluation of heterogeneous MTL with two tasks: (i) gender classification (two classes) and (ii) age group classification (eight classes). Two independent CNN models for this benchmark are introduced in (Levi & Hassner, 2015). The two CNNs have the same architecture except for the last fully-connected layer, since the heterogeneous tasks have different numbers of outputs (two / eight). We take these CNNs from (Levi & Hassner, 2015) as the STL baseline. We again search for the best possible user-defined MTL architecture as a strong competitor: the proposed CNN has six layers, three convolutional and three fully-connected. The last fully-connected layer has non-shareable parameters because they are of different sizes. To search the MTL design space, we try setting the first N ($1 \le N \le 5$) layers to be hard shared between the tasks. Running 5-fold cross-validation on the train set to evaluate the architectures, we find the best choice is N = 5 (i.e., all layers fully shared before the final heterogeneous outputs). For our proposed methods, all the layers before the last heterogeneous-dimensionality FC layers are softly shared.

We select increasing fractions of the AdienceFaces train split randomly, train the model, and evaluate on the same test set. For reference, there are 12,245 images with gender labelled for training and 4,007 for testing, and 11,823 images with age group labelled for training and 4,316 for testing.

Results. Fig. 3 shows the error rate for each task. For the gender recognition task, we find that: (i) user-defined MTL is not consistently better than STL, but (ii) our methods, especially DMTRL-Tucker, consistently outperform both STL and the best user-defined MTL. For the harder age group classification task, our methods generally improve on STL. However, UD-MTL does not consistently improve on STL, and even reduces performance when the training set is bigger. This is the negative transfer phenomenon (Rosenstein et al., 2005), where using a transfer learning algorithm is worse than not using it. This difference in outcomes is attributed to sufficient data eventually providing some effective task-specific representation.
Our methods can discover and exploit this, but UD-MTL's hard switch between sharing and not sharing cannot represent or exploit such increasing task-specificity of representation.

Figure 3: Heterogeneous MTL: age and gender recognition on the AdienceFaces dataset.

4.3 HETEROGENEOUS MTL: MULTI-ALPHABET RECOGNITION

Dataset, Settings and Baselines. We next consider the task of learning to recognise handwritten letters in multiple languages using the Omniglot (Lake et al., 2015) dataset. Omniglot contains handwritten characters in 50 different alphabets (e.g., Cyrillic, Korean, Tengwar), each with its own number of unique characters (14 to 55). In total, there are 1,623 unique characters, and each has exactly 20 instances. Here each task corresponds to an alphabet, and the goal is to recognise its characters. MTL has a clear motivation here, as cross-alphabet knowledge sharing is likely to be useful: one is unlikely to have extensive training data for a wide variety of less common alphabets.

The images are monochrome, of size 105 x 105. We design a CNN with 3 convolutional and 2 FC layers. The first conv layer has 8 filters of size 5 x 5; the second conv layer has 12 filters of size 3 x 3; and the third convolutional layer has 16 filters of size 3 x 3. Each convolutional layer is followed by 2 x 2 max-pooling. The first FC layer has 64 neurons, and the second FC layer has size corresponding to the number of unique classes in the alphabet. The activation function is tanh.

We use a similar strategy to find the best user-defined MTL model: the CNN has 5 parametrised layers, of which 4 layers are potentially shareable. So we tried hard-sharing the first N ($1 \le N \le 4$) layers. Evaluating these options by 5-fold cross-validation, the best option turned out to be N = 3, i.e., the first three layers are hard shared. For our methods, all four shareable layers are softly shared. Since there is no standard train/test split for this dataset, we use the following setting: we repeatedly pick at random 5%, ..., 90% of the images per class for training. Note that 5% is the minimum, corresponding to one-shot learning. The remaining data are used for evaluation.

Results. Fig. 4 reports the average error rate across all 50 tasks (alphabets). Our proposed MTL methods surpass the STL baseline in all cases. User-defined MTL does not work well when the training data is very small, but does help when the training fraction is larger than 50%.

Measuring the Learned Sharing. Compared to conventional user-defined sharing architectures, our method learns how to share from data. We next try to quantify the amount of sharing estimated by our model on the Omniglot data. Returning to the key factorisation $W = LS$, we find that an S-like matrix appears in all variants of the proposed method. It is S in DMTRL-LAF, the transposed $U^{(N)}$ in DMTRL-Tucker, and $U^{(N)}$ in DMTRL-TT (N is the last axis of W). S is a K x T matrix, where T is the number of tasks and K is the number of latent tasks (Kumar & Daumé III, 2012) or the dimension of the task coding (Yang & Hospedales, 2015). Each column of S is a set of coefficients that produce the final weight matrix/tensor by linear combination. If we put STL and user-defined MTL (for a certain shared layer) in this framework, we see that STL is to assign (rather than learn) S to be an identity matrix $I_T$.
Similarly, user-defined MTL (for a certain shared layer) is to assign S to be a matrix with all zeros except one particular row of all ones, e.g., $S = [\mathbf{1}_{1\times T}; \mathbf{0}]$. Between these two extremes, our method learns the sharing structure in S. We propose the following equation to measure the learned sharing strength:

$\rho = \frac{1}{\binom{T}{2}} \sum_{i<j} \kappa(S_{:,i}, S_{:,j}) = \frac{2}{T(T-1)} \sum_{i<j} \kappa(S_{:,i}, S_{:,j})$   (9)

Here $\kappa(a, b)$ is a similarity measure for two vectors a and b, and we use cosine similarity. $\rho$ is the average over all combinations of column-wise similarities, so $\rho$ measures how much sharing is encoded by S, between $\rho = 0$ for STL (nothing to share) and $\rho = 1$ for user-defined MTL (completely shared). Since S is a real-valued matrix in our scenario, we normalise it before applying Eq. 9: first, we take absolute values, because a large positive or negative value suggests a significant coefficient; second, we normalise each column of S by applying a softmax function, so the sum of every column is 1. The motivation behind the second step is to match the range of our S with $S = I_T$ or $S = [\mathbf{1}_{1\times T}; \mathbf{0}]$, as for those two cases the sum of each column is 1 and the range is [0, 1].

For the Omniglot experiment, we plot the measured sharing amount for training fraction 10%. Fig. 4 reveals that the three proposed methods tend to share more in the bottom layers ('Conv1', 'Conv2', and 'Conv3') and share less in the top layer ('FC1'). This is qualitatively similar to the best user-defined MTL, where the first three layers are fully shared ($\rho = 1$) and the 4th layer is completely unshared ($\rho = 0$). However, our methods (i) learn this structure in a purely data-driven way and (ii) benefit from the ability to smoothly interpolate between high and low degrees of sharing as depth increases.

Figure 4: Results of multi-task learning of multilingual character recognition (Omniglot dataset). Below: Illustration of the language pairs estimated to be the most related (left: Georgian Mkhedruli and Inuktitut) and most unrelated (right: Balinese and ULOG) character recognition tasks.

As an illustration, Fig. 4 also shows example text from the most and least similar language pairs as estimated at our multilingual character recogniser's FC1 layer (the result can vary across layers).
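A NumPy sketch of Eq. 9 with the normalisation just described (absolute value, then a column-wise softmax) and cosine similarity as $\kappa$; this is our own illustration, and the sanity checks use large-magnitude matrices so that the softmax saturates towards the two idealised extremes.

```python
import numpy as np

def sharing_strength(S):
    """Eq. 9: average pairwise cosine similarity of the (normalised) columns of S."""
    P = np.exp(np.abs(S))
    P /= P.sum(axis=0, keepdims=True)               # column-wise softmax of |S|
    P /= np.linalg.norm(P, axis=0, keepdims=True)   # unit columns, so P.T @ P gives cosines
    T = P.shape[1]
    i, j = np.triu_indices(T, k=1)
    return (P.T @ P)[i, j].mean()                   # mean over all pairs i < j

T = 4
print(sharing_strength(50 * np.eye(T)))                          # STL-like S: close to 0
S_ud = np.vstack([50 * np.ones((1, T)), np.zeros((T - 1, T))])   # one shared row
print(sharing_strength(S_ud))                                    # UD-MTL-like S: close to 1
```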
5 CONCLUSION

In this paper, we propose a novel framework for end-to-end multi-task representation learning in contemporary deep neural networks. The key idea is to generalise matrix factorisation-based multi-task ideas to tensor factorisation, in order to flexibly share knowledge in fully connected and convolutional DNN layers. Our method provides consistently better performance than single-task learning, and comparable or better performance than the best results from exhaustive search of user-defined MTL architectures. It reduces the design choices and architectural search space that must be explored in the workflow of deep MTL architecture design (Caruana, 1997; Zhang et al., 2014; Liu et al., 2015), relieving researchers of the need to decide how to structure layer sharing/segregation. Instead, the sharing structure is determined in a data-driven way on a layer-by-layer basis that, moreover, allows a smooth interpolation between sharing and not sharing in progressively deeper layers.

Acknowledgements. This work was supported by EPSRC (EP/L023385/1) and the European Union's Horizon 2020 research and innovation programme under grant agreement No 640891. | S1DiTxMNe | 7: Good paper, accept | The paper proposed a tensor factorization approach for MTL to learn cross-task structures for better generalization. The presentation is clean and clear and the experimental justification is convincing.
As mentioned, including a discussion of the effect of model size vs. performance would be useful in the final version, as would discussion of related work in other fields.
One question on Sec. 3.3: to build the DMTRL, one DNN per task is trained with the same architecture. How important is this pretraining? Would random initialization also work here? If the data is unbalanced, namely some classes have very few examples, how would that affect the model?
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
SkhU2fcll | ICLR.cc/2017/conference | 2017 | Deep Multi-task Representation Learning: A Tensor Factorisation Approach | ["Yongxin Yang", "Timothy M. Hospedales"] | Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. | ["deep", "representation learning", "tensor factorisation", "representation", "contemporary", "methods", "linear models", "setting", "shallow", "era"] | ABSTRACTMost contemporary multi-task learning methods assume linear models. This set-ting is considered shallow in the era of deep learning. In this paper, we presenta new deep multi-task representation learning framework that learns cross-tasksharing structure at every layer in a deep network . Our approach is based ongeneralising the matrix factorisation techniques explicitly or implicitly used bymany conventional MTL algorithms to tensor factorisation, to realise automaticlearning of end-to-end knowledge sharing in deep networks. This is in contrastto existing deep learning approaches that need a user-defined multi-task sharingstrategy. Our approach applies to both homogeneous and heterogeneous MTL.Experiments demonstrate the efficacy of our deep multi-task representation learn-ing in terms of both higher accuracy and fewer design choices.1 I NTRODUCTIONThe paradigm of multi-task learning is to learn multiple related tasks simultaneously so that knowl-edge obtained from each task can be re-used by the others. Early work in this area focused on neuralnetwork models (Caruana, 1997), while more recent methods have shifted focus to kernel methods,sparsity and low-dimensional task representations of linear models (Evgeniou & Pontil, 2004; Ar-gyriou et al., 2008; Kumar & Daum ́e III, 2012). Nevertheless given the impressive practical efficacyof contemporary deep neural networks (DNN)s in many important applications, we are motivated torevisit MTL from a deep learning perspective.While the machine learning community has focused on MTL for shallow linear models recently, ap-plications have continued to exploit neural network MTL (Zhang et al., 2014; Liu et al., 2015). Thetypical design pattern dates back at least 20 years (Caruana, 1997): define a DNN with shared lowerrepresentation layers, which then forks into separate layers and losses for each task. The sharingstructure is defined manually: full-sharing up to the fork, and full separation after the fork. Howeverthis complicates DNN architecture design because the user must specify the sharing structure: Howmany task specific layers? How many task independent layers? How to structure sharing if there aremany tasks of varying relatedness?In this paper we present a method for end-to-end multi-task learning in DNNs. 
This contributioncan be seen as generalising shallow MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008;Kumar & Daum ́e III, 2012) to learning how to share at every layer of a deep network; or as learningthe sharing structure for deep MTL (Caruana, 1997; Zhang et al., 2014; Spieckermann et al., 2014;Liu et al., 2015) which currently must be defined manually on a problem-by-problem basis.Before proceeding it is worth explicitly distinguishing some different problem settings, which haveall been loosely referred to as MTL in the literature. Homogeneous MTL: Each task correspondsto asingle output. For example, MNIST digit recognition is commonly used to evaluate MTL algo-rithms by casting it as 10 binary classification tasks (Kumar & Daum ́e III, 2012). HeterogeneousMTL: Each task corresponds to a unique set of output(s) (Zhang et al., 2014). For example, onemay want simultaneously predict a person’s age (task one: multi-class classification or regression)as well as identify their gender (task two: binary classification) from a face image.In this paper, we propose a multi-task learning method that works on all these settings. The key ideais to use tensor factorisation to divide each set of model parameters (i.e., both FC weight matrices,1Published as a conference paper at ICLR 2017and convolutional kernel tensors) into shared andtask-specific parts. It is a natural generalisationof shallow MTL methods that explicitly or implicitly are based on matrix factorisation (Evgeniou &Pontil, 2004; Argyriou et al., 2008; Kumar & Daum ́e III, 2012; Daum ́e III, 2007). As linear methods,these typically require pre-engineered features. In contrast, as a deep network, our generalisationcan learn directly from raw image data, determining sharing structure in a layer-wise fashion. Forthe simplest NN architecture – no hidden layer, single output – our method reduces to matrix-basedones, therefore matrix-based methods including (Evgeniou & Pontil, 2004; Argyriou et al., 2008;Kumar & Daum ́e III, 2012; Daum ́e III, 2007) are special cases of ours.2 R ELATED WORKMulti-Task Learning Most contemporary MTL algorithms assume that the input and model arebothD-dimensional vectors. The models of Ttasks can then be stacked into a DTsized matrixW. Despite different motivations and implementations, many matrix-based MTL methods workby placing constrains on W. For example, posing an `2;1norm onWto encourage low-rank W(Argyriou et al., 2008). Similarly, (Kumar & Daum ́e III, 2012) factorises WasW=LS, i.e., itassigns a lower rank as a hyper-parameter. An earlier work (Evgeniou & Pontil, 2004) proposesthat the linear model for each task tcan be written as wt= ^wt+ ^w0. This is the factorisationL= [ ^w0;^w1;:::; ^wT]andS= [11T;IT]. In fact, such matrix factorisation encompasses manyMTL methods. E.g., (Xue et al., 2007) assumes S;i(theith column of S) is a unit vector generatedby a Dirichlet Process and (Passos et al., 2012) models Wusing linear factor analysis with IndianBuffet Process (Griffiths & Ghahramani, 2011) prior on S.Tensor Factorisation In deep learning, tensor factorisation has been used to exploit factorisedtensors’ fewer parameters than the original (e.g., 4-way convolutional kernel) tensor, and thus com-press and/or speed up the model, e.g., (Lebedev et al., 2015; Novikov et al., 2015). For shallow linearMTL, tensor factorisation has been used to address problems where tasks are described by multipleindependent factors rather than merely indexed by a single factor (Yang & Hospedales, 2015). 
HeretheD-dimensional linear models for all unique tasks stack into a tensor W, of e.g.DT1T2in the case of two task factors. Knowledge sharing is then achieved by imposing tensor norms onW(Romera-paredes et al., 2013; Wimalawarne et al., 2014). Our framework factors tensors for thedifferent reason that for DNN models, parameters include convolutional kernels ( N-way tensors) orD1D2FC layer weight matrices ( 2-way tensors). Stacking up these parameters for many tasksresults inD1DNTtensors within which we share knowledge through factorisation.Heterogeneous MTL and DNNs Some studies consider heterogeneous MTL, where tasks mayhave different numbers of outputs (Caruana, 1997). This differs from the previously discussed stud-ies (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Bonilla et al., 2007; Jacob et al., 2009; Kumar& Daum ́e III, 2012; Romera-paredes et al., 2013; Wimalawarne et al., 2014) which implicitly as-sume that each task has a single output. Heterogeneous MTL typically uses neural networks withmultiple sets of outputs and losses. E.g., Huang et al. (2013) proposes a shared-hidden-layer DNNmodel for multilingual speech processing, where each task corresponds to an individual language.Zhang et al. (2014) uses a DNN to find facial landmarks (regression) as well as recognise facialattributes (classification); while Liu et al. (2015) proposes a DNN for query classification and in-formation retrieval (ranking for web search). A key commonality of these studies is that they allrequire a user-defined parameter sharing strategy. A typical design pattern is to use shared layers(same parameters) for lower layers of the DNN and then split (independent parameters) for the toplayers. However, there is no systematic way to make such design choices, so researchers usually relyon trial-and-error, further complicating the already somewhat dark art of DNN design. In contrast,our method learns where and how much to share representation parameters across the tasks, hencesignificantly reducing the space of DNN design choices.Parametrised DNNs Our MTL approach is a parameterised DNN (Sigaud et al., 2015), in thatDNN weights are dynamically generated given some side information – in the case of MTL, giventhe task identity. In a related example of speaker-adaptive speech recognition (Tan et al., 2016) theremay be several clusters in the data (e.g., gender, acoustic conditions), and each speaker’s modelcould be a linear combination of these latent task/clusters’ models. They model each speaker i’sweight matrix W(i)as a sum of Kbase models ~W, i.e.,W(i)=PKk=1(i)p~W(p). The differencebetween speakers/tasks comes from and the base models are shared. An advantage of this is that,2Published as a conference paper at ICLR 2017when new data come, one can choose to re-train parameters only, and keep ~Wfixed. This willsignificantly reduce the number of parameters to learn, and consequently the required training data.Beyond this, Yang & Hospedales (2015) show that it is possible to train another neural network topredict thosevalues from some abstract metadata. Thus a model for an unseen task can be gener-ated on-the-fly with notraining instances given an abstract description of the task. The techniquesdeveloped here are compatible with both these ideas of generating models with minimal or no effort.3 M ETHODOLOGY3.1 P RELIMINARIESWe first recap some tensor factorisation basics before explaining how to factorise DNN weighttensors for multi-task representation learning. 
3 METHODOLOGY

3.1 PRELIMINARIES

We first recap some tensor factorisation basics before explaining how to factorise DNN weight tensors for multi-task representation learning. An N-way tensor W with shape $D_1 \times D_2 \times \cdots \times D_N$ is an N-dimensional array containing $\prod_{n=1}^{N} D_n$ elements. Scalars, vectors, and matrices can be seen as 0-, 1-, and 2-way tensors respectively, although the term tensor is usually reserved for 3-way or higher. A mode-n fibre of W is a $D_n$-dimensional vector obtained by fixing all but the nth index. The mode-n flattening $W_{(n)}$ of W is the matrix of size $D_n \times \prod_{i \neq n} D_i$ constructed by concatenating all of the $\prod_{i \neq n} D_i$ mode-n fibres along columns.

The dot product of two tensors is a natural extension of the matrix dot product, e.g., if we have a tensor A of size $M_1 \times M_2 \times P$ and a tensor B of size $P \times N_1 \times N_2 \times \cdots$, the tensor dot product $A \bullet B$ will be a tensor of size $M_1 \times M_2 \times N_1 \times N_2 \times \cdots$, obtained by the matrix dot product $A_{(-1)}^{T} B_{(1)}$ and reshaping.^1 More generally, the tensor dot product can be performed along specified axes: $A \bullet_{(i,j)} B = A_{(i)}^{T} B_{(j)}$ followed by reshaping, where the subscripts indicate the axes of A and B at which the dot product is performed. E.g., when A is of size $M_1 \times P \times M_3 \times \cdots \times M_I$ and B is of size $N_1 \times N_2 \times P \times \cdots \times N_J$, then $A \bullet_{(2,3)} B$ is a tensor of size $M_1 \times M_3 \times \cdots \times M_I \times N_1 \times N_2 \times \cdots \times N_J$.

^1 We slightly abuse '-1' to refer to the last axis of the tensor.

Matrix-based Knowledge Sharing  Assume we have T linear models (tasks) parametrised by D-dimensional weight vectors, so the collection of all models forms a size $D \times T$ matrix W. One commonly used MTL approach (Kumar & Daumé III, 2012) is to place a structure constraint on W, e.g., $W = LS$, where L is a $D \times K$ matrix and S is a $K \times T$ matrix. This factorisation recovers a shared factor L and a task-specific factor S. One can see the columns of L as latent basis tasks, and the model $w^{(i)}$ for the ith task is the linear combination of those latent basis tasks with task-specific coefficients $S_{\cdot,i}$:

$w^{(i)} := W_{\cdot,i} = L S_{\cdot,i} = \sum_{k=1}^{K} L_{\cdot,k} S_{k,i}$    (1)

From Single to Multiple Outputs  Consider extending this matrix factorisation approach to the case of multiple outputs. The model for each task is then a $D_1 \times D_2$ matrix, for $D_1$ input and $D_2$ output dimensions. The collection of all those matrices constructs a $D_1 \times D_2 \times T$ tensor. A straightforward extension of Eq. 1 to this case is

$W^{(i)} := W_{\cdot,\cdot,i} = \sum_{k=1}^{K} L_{\cdot,\cdot,k} S_{k,i}$    (2)

This is equivalent to imposing the same structural constraint on $W_{(3)}^{T}$ (the transposed mode-3 flattening of W). It is important to note that this allows knowledge sharing across the tasks only, i.e., knowledge sharing is only across tasks, not across dimensions within a task. However, it may be that the knowledge learned in the mapping to one output dimension is useful to the others within one task. E.g., consider recognising photos of handwritten and print digits: it may be useful to share across handwritten-vs-print, as well as across different digits within each. In order to support general knowledge sharing across both tasks and outputs within tasks, we propose to use more general tensor factorisation techniques. Unlike for matrices, there are multiple definitions of tensor factorisation, and we use the Tucker (Tucker, 1966) and Tensor Train (TT) (Oseledets, 2011) decompositions.

3.2 TENSOR FACTORISATION FOR KNOWLEDGE SHARING

Tucker Decomposition  Given an N-way tensor of size $D_1 \times D_2 \times \cdots \times D_N$, Tucker decomposition outputs a core tensor S of size $K_1 \times K_2 \times \cdots \times K_N$ and N matrices $U^{(n)}$ of size $D_n \times K_n$, such that

$W_{d_1,d_2,\dots,d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_N=1}^{K_N} S_{k_1,k_2,\dots,k_N} U^{(1)}_{d_1,k_1} U^{(2)}_{d_2,k_2} \cdots U^{(N)}_{d_N,k_N}$    (3)

$W = S \bullet_{(1,2)} U^{(1)} \bullet_{(1,2)} U^{(2)} \cdots \bullet_{(1,2)} U^{(N)}$    (4)

Tucker decomposition is usually implemented by an alternating least squares (ALS) method (Kolda & Bader, 2009). However, Lathauwer et al. (2000) treat it as a higher-order singular value decomposition (HOSVD), which is more efficient to solve: $U^{(n)}$ is exactly the U matrix from the SVD of the mode-n flattening $W_{(n)}$ of W, and the core tensor S is obtained by

$S = W \bullet_{(1,1)} U^{(1)} \bullet_{(1,1)} U^{(2)} \cdots \bullet_{(1,1)} U^{(N)}$    (5)
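A minimal numpy sketch of Eqs. 3-5 via the HOSVD route (function names, shapes, and the unfolding convention are ours; at full rank the reconstruction is exact):

import numpy as np

def unfold(W, n):
    # mode-n flattening: a D_n x prod(other dims) matrix of mode-n fibres
    return np.moveaxis(W, n, 0).reshape(W.shape[n], -1)

def hosvd(W):
    # U^(n) is the U matrix from the SVD of each mode-n unfolding
    U = [np.linalg.svd(unfold(W, n), full_matrices=False)[0] for n in range(W.ndim)]
    S = W  # core tensor, Eq. 5: contract each mode with U^(n) transposed
    for n, Un in enumerate(U):
        S = np.moveaxis(np.tensordot(Un.T, S, axes=(1, n)), 0, n)
    return S, U

def tucker_compose(S, U):
    # Eq. 4: contract the core with each factor matrix in turn
    W = S
    for n, Un in enumerate(U):
        W = np.moveaxis(np.tensordot(Un, W, axes=(1, n)), 0, n)
    return W

W = np.random.randn(4, 5, 3)  # e.g. a D1 x D2 x T stack of task weight matrices
S, U = hosvd(W)
assert np.allclose(tucker_compose(S, U), W)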
Tensor Train Decomposition  Tensor Train (TT) decomposition outputs 2 matrices $U^{(1)}$ and $U^{(N)}$ of size $D_1 \times K_1$ and $K_{N-1} \times D_N$ respectively, and $(N-2)$ 3-way tensors $U^{(n)}$ of size $K_{n-1} \times D_n \times K_n$. The elements of W can be computed by

$W_{d_1,d_2,\dots,d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_{N-1}=1}^{K_{N-1}} U^{(1)}_{d_1,k_1} U^{(2)}_{k_1,d_2,k_2} U^{(3)}_{k_2,d_3,k_3} \cdots U^{(N)}_{k_{N-1},d_N}$    (6)

$= U^{(1)}_{d_1,\cdot} U^{(2)}_{\cdot,d_2,\cdot} U^{(3)}_{\cdot,d_3,\cdot} \cdots U^{(N)}_{\cdot,d_N}$    (7)

$W = U^{(1)} \bullet U^{(2)} \cdots \bullet U^{(N)}$    (8)

where $U^{(n)}_{\cdot,d_n,\cdot}$ is a matrix of size $K_{n-1} \times K_n$ sliced from $U^{(n)}$ with the second axis fixed at $d_n$. The TT decomposition is typically realised with a recursive SVD-based solution (Oseledets, 2011).

Knowledge Sharing  If the final axis of the input tensor above indexes tasks, i.e. if $D_N = T$, then the last factor $U^{(N)}$ in both decompositions encodes a matrix of task-specific knowledge, and the other factors encode shared knowledge.

3.3 DEEP MULTI-TASK REPRESENTATION LEARNING

To realise deep multi-task representation learning (DMTRL), we learn one DNN per task, each with the same architecture.^2 However, each corresponding layer's weights are generated with one of the knowledge sharing structures in Eq. 2, Eq. 4 or Eq. 8. It is important to note that we apply these 'right-to-left' in order to generate weight tensors with the specified sharing structure, rather than actually applying Tucker or TT to decompose an input tensor. In the forward pass, we synthesise weight tensors W and perform inference as usual, so the method can be thought of as tensor composition rather than decomposition.

Our weight generation (constructing tensors from smaller pieces) does not introduce non-differentiable terms, so our deep multi-task representation learner is trainable via standard backpropagation. Specifically, in the backward pass over FC layers, rather than directly learning the 3-way tensor W, our methods learn either $\{S, U_1, U_2, U_3\}$ (DMTRL-Tucker, Eq. 4), $\{U_1, U_2, U_3\}$ (DMTRL-TT, Eq. 8), or in the simplest case $\{L, S\}$ (DMTRL-LAF,^3 Eq. 2).

^2 Except in heterogeneous MTL, where the output layer is necessarily unshared due to different dimensionality.
^3 LAF refers to Last Axis Flattening.

[Figure 1: Illustrative example with two tasks corresponding to two neural networks in homogeneous (single output) and heterogeneous (different output dimension) cases. Weight layers grouped by solid rectangles are tied across networks; weight layers grouped by dashed rectangles are softly shared across networks with our method; ungrouped weights are independent. Homogeneous MTL Shallow: left is STL (two independent networks), right is MTL; in the case of vector input and no hidden layer, our method is equivalent to conventional matrix-based MTL methods. Homogeneous MTL Deep: STL (left) is independent networks; User-Defined-MTL (UD-MTL) selects layers to share/separate; our DMTRL learns sharing at every layer. Heterogeneous MTL: UD-MTL selects layers to share/separate; DMTRL learns sharing at every shareable layer.]
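To illustrate the composition-rather-than-decomposition point of Sec. 3.3, a minimal numpy sketch of synthesising one FC layer's per-task weights with the LAF structure of Eq. 2 (shapes and names are ours; the released TensorFlow code may differ):

import numpy as np

D1, D2, T, K = 64, 32, 10, 5
L = 0.01 * np.random.randn(D1, D2, K)       # shared factors (learnable)
S = np.random.randn(K, T)                   # task-specific codes (learnable)

def task_weights(i):
    # Eq. 2: synthesise task i's D1 x D2 weight matrix in the forward pass
    return np.tensordot(L, S[:, i], axes=(2, 0))

x = np.random.randn(8, D1)                  # a batch of 8 inputs for task 3
h = np.maximum(0.0, x @ task_weights(3))    # ReLU(x W^(3)); biases are unshared
print(h.shape)                              # (8, 32)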
Besides FC layers, contemporary DNN designs often exploit convolutional layers. Those layers usually contain kernel filter parameters that are 3-way tensors of size $H \times W \times C$ (where H is height, W is width, and C is the number of input channels) or 4-way tensors of size $H \times W \times C \times M$, where M is the number of filters in this layer (i.e., the number of output channels). The proposed methods naturally extend to convolutional layers, as convolution just adds more axes on the left-hand side. E.g., the collection of parameters from a given convolutional layer of T neural networks forms a tensor of shape $H \times W \times C \times M \times T$.

These knowledge sharing strategies provide a way to softly share parameters across the corresponding layers of each task's DNN: where, what, and how much to share are learned from data. This is in contrast to the conventional deep-MTL approach of manually selecting a set of layers to undergo hard parameter sharing (by tying weights so each task uses exactly the same weight matrix/tensor for the corresponding layer (Zhang et al., 2014; Liu et al., 2015)) and a set of layers to be completely separate (by using independent weight matrices/tensors). In contrast, our approach benefits from: (i) automatically learning this sharing structure from data rather than requiring user trial and error, and (ii) smoothly interpolating between fully shared and fully segregated layers, rather than hard switching between these states. An illustration of the proposed framework for different problem settings can be found in Fig. 1.

4 EXPERIMENTS

Implementation Details  Our method is implemented with TensorFlow (Abadi et al., 2015). The code is released on GitHub.^4 For DMTRL-Tucker, DMTRL-TT, and DMTRL-LAF, we need to assign the rank of each weight tensor. The DNN architecture itself may be complicated and so can benefit from different ranks at different layers, but grid search is impractical. However, since both Tucker and TT decomposition methods have SVD-based solutions, and vanilla SVD is directly applicable to DMTRL-LAF, we can initialise the model and set the ranks as follows: first, train the DNNs independently in single-task learning mode; then, pack the layer-wise parameters as the input for tensor decomposition. When SVD is applied, set a threshold for the relative error so that SVD picks the appropriate rank. Thus our method needs only a single hyper-parameter, the maximum reconstruction error (we set it to 10% throughout), that indirectly specifies the ranks of every layer. Note that training from random initialisation also works, but the STL-based initialisation makes rank selection easy and transparent. Nevertheless, like Kumar & Daumé III (2012), the framework is not sensitive to the rank choices so long as they are big enough. If random initialisation is desired, to eliminate the pre-training requirement, good practice is to initialise the parameter tensors by a suitable random weight distribution first, then do the decomposition and use the decomposed values for initialising the factors (the real learnable parameters in our framework). In this way, the resulting re-composed tensors will have approximately the intended distribution. Our sharing is applied to weight parameters only; bias terms are not shared. Apart from initialisation, decomposition is not used anywhere.

^4 https://github.com/wOOL/DMTRL
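A minimal numpy sketch of this thresholding rule (our own simplification of the procedure described above; the released code may differ):

import numpy as np

def rank_for_error(M, eps=0.10):
    # smallest rank whose truncated SVD has relative Frobenius error <= eps
    s = np.linalg.svd(M, compute_uv=False)
    err = np.sqrt(np.maximum(0.0, 1.0 - np.cumsum(s ** 2) / np.sum(s ** 2)))
    return int(np.argmax(err <= eps)) + 1

# stack T independently trained weight matrices and flatten along the task axis;
# the same rule is applied per mode for Tucker and recursively for TT
W = np.random.randn(64, 32, 10)
print(rank_for_error(W.reshape(64 * 32, 10).T))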
4.1 HOMOGENEOUS MTL

Dataset, Settings and Baselines  We use MNIST handwritten digits. The task is to recognise digit images zero to nine. When this dataset is used for the evaluation of MTL methods, ten 1-vs-all binary classification problems usually define ten tasks (Kumar & Daumé III, 2012). The dataset has a given train (60,000 images) and test (10,000 images) split. Each instance is a monochrome image of size $28 \times 28 \times 1$.

We use a modified LeNet (LeCun et al., 1998) as the CNN architecture. The first convolutional layer has 32 filters of size $5 \times 5$, followed by $2 \times 2$ max pooling. The second convolutional layer has 64 filters of size $4 \times 4$, again followed by $2 \times 2$ max pooling. After these two convolutional layers, two fully connected layers with 512 and 1 output(s) are placed sequentially. The convolutional and first FC layers use the ReLU activation function $f(x) = \max(x, 0)$. We use the hinge loss $\ell(y) = \max(0, 1 - \hat{y} \cdot y)$, where $y \in \{-1, +1\}$ is the true label and $\hat{y}$ is the output of each task's neural network.

Conventional matrix-based MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Romera-Paredes et al., 2013; Wimalawarne et al., 2014) are linear models taking vector input only, so they need a preprocessing step that flattens the image into a vector, and they typically reduce the dimension by PCA. As per our motivation for studying deep MTL, our methods decisively outperform such shallow linear baselines. Thus, to find a stronger MTL competitor, we instead search user-defined architectures for deep-MTL parameter sharing (cf. Zhang et al., 2014; Liu et al., 2015; Caruana, 1997). Among the four parametrised layers (pooling has no parameters), we set the first N ($1 \le N \le 3$) to be hard shared.^5 We then use cross-validation to select among the three user-defined MTL architectures; the best option is N = 3, i.e., the first three layers fully shared (we denote this model UD-MTL). For our methods, all four parametrised layers are softly shared with the different factorisation approaches. To evaluate the different MTL methods and a baseline of single-task learning (STL), we take ten different fractions of the given 60K training split, train the model, and test on the 10K testing split. For each fraction, we repeat the experiment 5 times with randomly sampled training data. We report two performance metrics: (1) the mean error rate of the ten binary classification problems, and (2) the error rate of recognising a digit by ranking each task's 1-vs-all output (multi-class classification error).
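A minimal numpy sketch of these two metrics (function names ours):

import numpy as np

def hinge_loss(y_pm1, y_hat):
    # per-task hinge loss with labels in {-1, +1}
    return float(np.maximum(0.0, 1.0 - y_hat * y_pm1).mean())

def multiclass_error(scores, labels):
    # scores: N x 10 outputs of the ten 1-vs-all tasks, ranked to pick a digit
    return float(np.mean(np.argmax(scores, axis=1) != labels))

scores = np.random.randn(5, 10)
labels = np.random.randint(0, 10, size=5)
y3 = np.where(labels == 3, 1.0, -1.0)       # 1-vs-all labels for the digit-3 task
print(hinge_loss(y3, scores[:, 3]), multiclass_error(scores, labels))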
Results  As we can see in Fig. 2, all MTL approaches outperform STL, and the advantage is more significant when the training data is small. The proposed methods DMTRL-TT and DMTRL-Tucker outperform the best user-defined MTL when the training data is very small, and their performance is comparable when the training data is large.

Further Discussion  For a slightly unfair comparison: in the case of binary classification with 1000 training examples, shallow matrix-based MTL methods with PCA features (Kang et al., 2011; Kumar & Daumé III, 2012) reported 14.0%/13.4% error rates. With the same amount of data, our methods have error rates below 6%. This shows the importance of our deep end-to-end multi-task representation learning contribution versus conventional shallow MTL. Since the error rates in (Kang et al., 2011; Kumar & Daumé III, 2012) were produced on a private subset of the MNIST dataset with PCA representations only, to ensure a direct comparison we implement several classic MTL methods and compare against them in Appendix A. For readers interested in the connection to model capacity (number of parameters), we present further analysis in Appendix B.

^5 This is not strictly all possible user-defined sharing options. For example, another possibility is that the first convolutional layer and the first FC layer could be fully shared, with the second convolutional layer being independent (task specific). However, this is against the intuition that lower/earlier layers are more task-agnostic and later layers more task-specific. Note that sharing the last layer is technically possible but not intuitive, and in any case not meaningful unless at least one earlier layer is unshared, as the tasks are different.

[Figure 2: Homogeneous MTL: digit recognition on MNIST. Two panels plot error rate against fraction of training data, for binary classification (left) and multi-class classification (right); curves compare STL, DMTRL-LAF, DMTRL-Tucker, DMTRL-TT, and UD-MTL. Each digit provides a task.]

4.2 HETEROGENEOUS MTL: FACE ANALYSIS

Dataset, Settings and Baselines  AdienceFaces (Eidinger et al., 2014) is a large-scale face image dataset with labels for each person's gender and age group. We use this dataset for the evaluation of heterogeneous MTL with two tasks: (i) gender classification (two classes) and (ii) age group classification (eight classes). Two independent CNN models for this benchmark are introduced in (Levi & Hassner, 2015). The two CNNs have the same architecture except for the last fully-connected layer, since the heterogeneous tasks have different numbers of outputs (two / eight). We take these CNNs from (Levi & Hassner, 2015) as the STL baseline. We again search for the best possible user-defined MTL architecture as a strong competitor: the proposed CNN has six layers, three convolutional and three fully-connected. The last fully-connected layer has non-shareable parameters because they are of different sizes. To search the MTL design space, we try setting the first N ($1 \le N \le 5$) layers to be hard shared between the tasks. Running 5-fold cross-validation on the train set to evaluate the architectures, we find the best choice is N = 5 (i.e., all layers fully shared before the final heterogeneous outputs). For our proposed methods, all the layers before the last heterogeneous-dimensionality FC layers are softly shared.

We select increasing fractions of the AdienceFaces train split at random, train the model, and evaluate on the same test set. For reference, there are 12,245 images with gender labelled for training and 4,007 for testing, and 11,823 images with age group labelled for training and 4,316 for testing.

Results  Fig. 3 shows the error rate for each task. For the gender recognition task, we find that: (i) user-defined MTL is not consistently better than STL, but (ii) our methods, especially DMTRL-Tucker, consistently outperform both STL and the best user-defined MTL. For the harder age group classification task, our methods generally improve on STL. However, UD-MTL does not consistently improve on STL, and even reduces performance when the training set is bigger. This is the negative transfer phenomenon (Rosenstein et al., 2005), where using a transfer learning algorithm is worse than not using it. This difference in outcomes is attributed to sufficient data eventually providing some effective task-specific representation.
Our methods can discover and exploit this, but UD-MTL's hard switch between sharing and not sharing cannot represent or exploit such increasing task-specificity of representation.

[Figure 3: Heterogeneous MTL: age and gender recognition on the AdienceFaces dataset. Two panels plot error rate against fraction of training data, for gender classification (left) and age group classification (right); curves compare STL, DMTRL-LAF, DMTRL-Tucker, DMTRL-TT, and UD-MTL.]

4.3 HETEROGENEOUS MTL: MULTI-ALPHABET RECOGNITION

Dataset, Settings and Baselines  We next consider the task of learning to recognise handwritten letters in multiple languages using the Omniglot (Lake et al., 2015) dataset. Omniglot contains handwritten characters in 50 different alphabets (e.g., Cyrillic, Korean, Tengwar), each with its own number of unique characters (14 to 55). In total, there are 1,623 unique characters, and each has exactly 20 instances. Here each task corresponds to an alphabet, and the goal is to recognise its characters. MTL has a clear motivation here, as cross-alphabet knowledge sharing is likely to be useful: one is unlikely to have extensive training data for a wide variety of less common alphabets.

The images are monochrome, of size $105 \times 105$. We design a CNN with 3 convolutional and 2 FC layers. The first conv layer has 8 filters of size $5 \times 5$; the second conv layer has 12 filters of size $3 \times 3$; and the third convolutional layer has 16 filters of size $3 \times 3$. Each convolutional layer is followed by $2 \times 2$ max-pooling. The first FC layer has 64 neurons, and the second FC layer has size corresponding to the number of unique classes in the alphabet. The activation function is tanh. We use a similar strategy to find the best user-defined MTL model: the CNN has 5 parametrised layers, of which 4 are potentially shareable, so we tried hard-sharing the first N ($1 \le N \le 4$) layers. Evaluating these options by 5-fold cross-validation, the best option turned out to be N = 3, i.e., the first three layers hard shared. For our methods, all four shareable layers are softly shared. Since there is no standard train/test split for this dataset, we use the following setting: we repeatedly pick at random 5%, ..., 90% of the images per class for training. Note that 5% is the minimum, corresponding to one-shot learning. The remaining data are used for evaluation.

Results  Fig. 4 reports the average error rate across all 50 tasks (alphabets). Our proposed MTL methods surpass the STL baseline in all cases. User-defined MTL does not work well when the training data is very small, but does help when the training fraction is larger than 50%.

Measuring the Learned Sharing  Compared to conventional user-defined sharing architectures, our method learns how to share from data. We next try to quantify the amount of sharing estimated by our model on the Omniglot data. Returning to the key factorisation $W = LS$, we can find that an S-like matrix appears in all variants of the proposed method: it is S in DMTRL-LAF, the transposed $U^{(N)}$ in DMTRL-Tucker, and $U^{(N)}$ in DMTRL-TT (N is the last axis of W). S is a $K \times T$ matrix, where T is the number of tasks and K is the number of latent tasks (Kumar & Daumé III, 2012) or the dimension of the task coding (Yang & Hospedales, 2015). Each column of S is a set of coefficients that produce the final weight matrix/tensor by linear combination. If we put STL and user-defined MTL (for a certain shared layer) in this framework, we see that STL amounts to assigning (rather than learning) S to be an identity matrix $I_T$. Similarly, user-defined MTL (for a certain shared layer) amounts to assigning S to be a matrix that is all zeros except for one row of all ones, e.g., $S = [\mathbf{1}_{1,T}; 0]$. Between these two extremes, our method learns the sharing structure in S. We propose the following measure of the learned sharing strength:

$\rho = \frac{1}{\binom{T}{2}} \sum_{i<j} \kappa(S_{\cdot,i}, S_{\cdot,j}) = \frac{2}{T(T-1)} \sum_{i<j} \kappa(S_{\cdot,i}, S_{\cdot,j})$    (9)

Here $\kappa(a, b)$ is a similarity measure for two vectors a and b, and we use cosine similarity. $\rho$ is the average over all pairs of column-wise similarities, so it measures how much sharing is encoded by S, between $\rho = 0$ for STL (nothing shared) and $\rho = 1$ for user-defined MTL (completely shared). Since S is a real-valued matrix in our scenario, we normalise it before applying Eq. 9: first, we take absolute values, because a large value, either positive or negative, suggests a significant coefficient; second, we normalise each column of S by applying a softmax function, so the sum of every column is 1. The motivation behind the second step is to match the range of our S with $S = I_T$ or $S = [\mathbf{1}_{1,T}; 0]$, as for those two cases the sum of each column is 1 and the range is [0, 1].
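A minimal numpy sketch of this measure (our own reading of Eq. 9; names ours; the softmax normalisation is for learned real-valued S, while the two reference matrices are already column-normalised):

import numpy as np
from itertools import combinations

def sharing_strength(S, normalise=True):
    # Eq. 9: mean pairwise cosine similarity between the columns of S;
    # for a learned real-valued S, first take |S| and a column-wise softmax
    if normalise:
        A = np.abs(S)
        A = np.exp(A) / np.exp(A).sum(axis=0, keepdims=True)
    else:
        A = S.astype(float)
    cols = [A[:, i] / np.linalg.norm(A[:, i]) for i in range(A.shape[1])]
    return float(np.mean([u @ v for u, v in combinations(cols, 2)]))

T = 4
print(sharing_strength(np.eye(T), normalise=False))        # STL-like S: 0.0
print(sharing_strength(np.ones((1, T)), normalise=False))  # fully shared: 1.0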
For the Omniglot experiment, we plot the measured sharing amount for training fraction 10%. Fig. 4 reveals that the three proposed methods tend to share more in the bottom layers ('Conv1', 'Conv2', and 'Conv3') and share less in the top layer ('FC1'). This is qualitatively similar to the best user-defined MTL, where the first three layers are fully shared ($\rho = 1$) and the fourth layer is completely unshared ($\rho = 0$). However, our methods: (i) learn this structure in a purely data-driven way, and (ii) benefit from the ability to smoothly interpolate between high and low degrees of sharing as depth increases. As an illustration, Fig. 4 also shows example text from the most and least similar language pairs as estimated at our multilingual character recogniser's FC1 layer (the result can vary across layers).

[Figure 4: Results of multi-task learning of multilingual character recognition (Omniglot dataset). Top left: sharing strength at each layer (Conv1, Conv2, Conv3, FC1, FC2) for DMTRL-LAF, DMTRL-Tucker, DMTRL-TT, and UD-MTL. Top right: alphabet classification error rate against fraction of training data for STL, DMTRL-LAF, DMTRL-Tucker, DMTRL-TT, and UD-MTL. Below: illustration of the language pairs estimated to be the most related (left: Georgian Mkhedruli and Inuktitut) and most unrelated (right: Balinese and ULOG) character recognition tasks.]

5 CONCLUSION

In this paper, we propose a novel framework for end-to-end multi-task representation learning in contemporary deep neural networks. The key idea is to generalise matrix factorisation-based multi-task ideas to tensor factorisation, in order to flexibly share knowledge in fully connected and convolutional DNN layers. Our method provides consistently better performance than single-task learning, and comparable or better performance than the best results from an exhaustive search of user-defined MTL architectures.
It reduces the design choices and architectural search space that must be explored in the workflow of deep MTL architecture design (Caruana, 1997; Zhang et al., 2014; Liu et al., 2015), relieving researchers of the need to decide how to structure layer sharing/segregation. Instead, the sharing structure is determined in a data-driven way on a layer-by-layer basis that moreover allows a smooth interpolation between sharing and not sharing in progressively deeper layers.

Acknowledgements  This work was supported by EPSRC (EP/L023385/1) and the European Union's Horizon 2020 research and innovation programme under grant agreement No 640891. | Bk7WO8XSl | Comparison with other standard MTL methods is missing | 5: Marginally below acceptance threshold | The paper proposed a nice framework leveraging Tucker and Tensor Train low-rank tensor factorization to induce parameter sharing for multi-task learning.
The framework is nice and appealing.
However, MTL is a very well studied problem, the paper considers only simple classification tasks, and it is not clear if we really need "deep learning" for these simple datasets. A comparison with existing shallow MTL methods is necessary to show the benefits of the proposed methods (and in particular of being deep) on these datasets. The authors ignore them on the basis of speculation, and it is not clear if the proposed framework is really superior to simple regularizers like the nuclear norm. The idea of nuclear norm regularization can also be extended to deep learning, as gradient descent is used in all these methods. | 3: The reviewer is fairly confident that the evaluation is correct |
SkhU2fcll | ICLR.cc/2017/conference | 2017 | Deep Multi-task Representation Learning: A Tensor Factorisation Approach | ["Yongxin Yang", "Timothy M. Hospedales"] | Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices. | ["deep", "representation learning", "tensor factorisation", "representation", "contemporary", "methods", "linear models", "setting", "shallow", "era"] |
5 CONCLUSION
In this paper, we propose a novel framework for end-to-end multi-task representation learning in contemporary deep neural networks. The key idea is to generalise matrix factorisation-based multi-task ideas to tensor factorisation, in order to flexibly share knowledge in fully connected and convolutional DNN layers. Our method provides consistently better performance than single-task learning and comparable or better performance than the best results from exhaustive search of user-defined MTL architectures. It reduces the design choices and architectural search space that must be explored in the workflow of deep MTL architecture design (Caruana, 1997; Zhang et al., 2014; Liu et al., 2015), relieving researchers of the need to decide how to structure layer sharing/segregation. Instead, sharing structure is determined in a data-driven way on a layer-by-layer basis that, moreover, allows a smooth interpolation between sharing and not sharing in progressively deeper layers.
Acknowledgements. This work was supported by EPSRC (EP/L023385/1) and the European Union's Horizon 2020 research and innovation program under grant agreement No 640891. | B17a61-Vg | 8: Top 50% of accepted papers, clear accept | This paper proposed a deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network via tensor factorization and end-to-end knowledge sharing. This approach removes the requirement of a user-defined multi-task sharing strategy found in conventional approaches. Their experimental results indicate that their approach can achieve higher accuracy with fewer design choices.
Although factorization ideas have been exploited in the past for other tasks, I think applying them to MTL is interesting. The only thing I want to point out is that the parameter savings come from the low-rank factorization. In conventional MTL, each layer's weight size can also be reduced if SVD is used.
BTW, recent neural-network MTL was first explored (earlier than the 2014 and 2015 works cited) in the speech recognition community; see, e.g.,
Huang, J.T., Li, J., Yu, D., Deng, L. and Gong, Y., 2013, May. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 7304-7308). IEEE. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
BJKYvt5lg | ICLR.cc/2017/conference | 2017 | PixelVAE: A Latent Variable Model for Natural Images | ["Ishaan Gulrajani", "Kundan Kumar", "Faruk Ahmed", "Adrien Ali Taiga", "Francesco Visin", "David Vazquez", "Aaron Courville"] | Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64 × 64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
| ["Deep learning", "Unsupervised Learning"] | ABSTRACTNatural image modeling is a landmark challenge of unsupervised learning. Varia-tional Autoencoders (V AEs) learn a useful latent representation and model globalstructure well but have difficulty capturing small details. PixelCNN models de-tails very well, but lacks a latent code and is difficult to scale for capturing largestructures. We present PixelV AE, a V AE model with an autoregressive decoderbased on PixelCNN. Our model requires very few expensive autoregressive lay-ers compared to PixelCNN and learns latent codes that are more compressed thana standard V AE while still capturing most non-trivial structure. Finally, we ex-tend our model to a hierarchy of latent variables at different scales. Our modelachieves state-of-the-art performance on binarized MNIST, competitive perfor-mance on 6464ImageNet, and high-quality samples on the LSUN bedroomsdataset.1 I NTRODUCTIONBuilding high-quality generative models of natural images has been a long standing challenge. Al-though recent work has made significant progress (Kingma & Welling, 2014; van den Oord et al.,2016a;b), we are still far from generating convincing, high-resolution natural images.Many recent approaches to this problem are based on an efficient method for performing amor-tized, approximate inference in continuous stochastic latent variables: the variational autoencoder(V AE) (Kingma & Welling, 2014) jointly trains a top-down decoder generative neural network witha bottom-up encoder inference network. V AEs for images typically use rigid decoders that modelthe output pixels as conditionally independent given the latent variables. The resulting model learnsa useful latent representation of the data and effectively models global structure in images, but hasdifficulty capturing small-scale features such as textures and sharp edges due to the conditional inde-pendence of the output pixels, which significantly hurts both log-likelihood and quality of generatedsamples compared to other models.PixelCNNs (van den Oord et al., 2016a;b) are another state-of-the-art image model. Unlike V AEs,PixelCNNs model image densities autoregressively, pixel-by-pixel. This allows it to capture finedetails in images, as features such as edges can be precisely aligned. By leveraging carefully con-structed masked convolutions (van den Oord et al., 2016b), PixelCNNs can be trained efficiently inparallel on GPUs. Nonetheless, PixelCNN models are still very computationally expensive. Unliketypical convolutional architectures they do not apply downsampling between layers, which meansthat each layer is computationally expensive and that the depth of a PixelCNN must grow linearlywith the size of the images in order for it to capture dependencies between far-away pixels. Pix-elCNNs also do not explicitly learn a latent representation of the data, which can be useful fordownstream tasks such as semi-supervised learning.Corresponding author; igul222@gmail.com1Published as a conference paper at ICLR 2017Figure 1: Samples from hierarchical PixelV AE on the LSUN bedrooms dataset.Our contributions are as follows:We present PixelV AE, a latent variable model which combines the largely complementaryadvantages of V AEs and PixelCNNs by using PixelCNN-based masked convolutions in theconditional output distribution of a V AE.We extend PixelV AE to a hierarchical model with multiple stochastic layers and autore-gressive decoders at each layer. 
This lets us autoregressively model not only the output pixels but also higher-level latent feature maps.
- On MNIST, we show that PixelVAE: (1) establishes a new state-of-the-art likelihood, (2) performs comparably to PixelCNN using far fewer computationally expensive autoregressive layers, (3) learns more compressed latent codes than a standard VAE while still accounting for most non-trivial structure, and (4) learns a latent code which separates digits better than a standard VAE.
- We evaluate hierarchical PixelVAE on two challenging natural image datasets (64×64 ImageNet and LSUN bedrooms). On 64×64 ImageNet, we report likelihood competitive with the state of the art at significantly less computational cost. On LSUN bedrooms, we generate high-quality samples and show that hierarchical PixelVAE learns to model different properties of the scene with each of its multiple layers.
2 RELATED WORK
There have been many recent advancements in generative modeling of images. We briefly discuss some of these below, especially those that are related to our approach.
The Variational Autoencoder (VAE) (Kingma & Welling, 2014) is a framework to train neural networks for generation and approximate inference jointly by optimizing a variational bound on the data log-likelihood. The use of normalizing flows (Rezende & Mohamed, 2015) improves the flexibility of the VAE approximate posterior. Based on this, Kingma et al. (2016) develop an efficient formulation of an autoregressive approximate posterior model using MADE (Germain et al., 2015). In our work, we avoid the need for such flexible inference models by using autoregressive priors. The idea of using autoregressive conditional likelihoods in VAEs has been explored in the context of language modeling in (Bowman et al., 2016); however, in that work the use of latent variables fails to improve likelihood over a purely autoregressive model.
Figure 2: Our proposed model, PixelVAE, makes use of PixelCNN to model an autoregressive decoder for a VAE. VAEs, which assume (conditional) independence among pixels, are known to suffer from blurry samples, while PixelCNN, modeling the joint distribution, produces sharp samples but lacks a latent representation that might be more useful for downstream tasks. PixelVAE combines the best of both worlds, providing a meaningful latent representation while producing sharp samples.
Simultaneously to our work, Chen et al. (2016) present a VAE model for images with an autoregressive output distribution. In contrast to Chen et al. (2016), who focus on models with a single layer of latent variables, we also investigate models with a hierarchy of latent variables (and corresponding autoregressive priors) and show that they enable us to scale our model to challenging natural image datasets.
Another promising recent approach is Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which pit a generator network and a discriminator network against each other. Recent work has improved training stability (Radford et al., 2015; Salimans et al., 2016) and incorporated inference networks into the GAN framework (Dumoulin et al., 2016; Donahue et al., 2016). GANs generate compelling samples compared to our work, but still exhibit unstable training dynamics and are known to underfit by ignoring modes of the data distribution (Dumoulin et al., 2016).
Further, it is difficult to accurately estimate the data likelihood in GANs.
3 PIXELVAE MODEL
Like a VAE, our model jointly trains an "encoder" inference network, which maps an image x to a posterior distribution over latent variables z, and a "decoder" generative network, which models a distribution over x conditioned on z. The encoder and decoder networks are composed of a series of convolutional layers, respectively with strided convolutions for downsampling in the encoder and transposed convolutions for upsampling in the decoder.
As opposed to most VAE decoders that model each dimension of the output independently (for example, by modeling the output as a Gaussian with diagonal covariance), we use a conditional PixelCNN in the decoder. Our decoder models x as the product of each dimension x_i conditioned on all previous dimensions and the latent variable z:

p(x \mid z) = \prod_i p(x_i \mid x_1, \ldots, x_{i-1}, z)

We first transform z through a series of convolutional layers into feature maps with the same spatial resolution as the output image and then concatenate the resulting feature maps with the image. The resulting concatenated feature maps are then further processed by several PixelCNN masked convolutional layers and a final PixelCNN 256-way softmax output.
Unlike typical PixelCNN implementations, we use very few PixelCNN layers in our decoder, relying on the latent variables to model the structure of the input at scales larger than the combined receptive field of our PixelCNN layers. As a result of this, our architecture captures global structure at a much lower computational cost than a standard PixelCNN implementation.
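The decoder construction above can be made concrete with a short sketch. We write it in PyTorch purely as an illustration (the acknowledgements indicate the authors used Theano); the class names, layer sizes, and the choice to apply the autoregressive mask to the conditioning channels as well are our simplifications, not the paper's.

```python
# Minimal sketch of a masked convolution and the conditional PixelCNN decoder
# described above. Assumptions: PyTorch; grayscale images; illustrative sizes.
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """2D convolution with an autoregressive mask (van den Oord et al., 2016).
    Mask type 'A' also hides the center pixel (first layer); type 'B' keeps it."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        mask = torch.ones_like(self.weight)
        _, _, kh, kw = self.weight.shape
        mask[:, :, kh // 2, kw // 2 + (mask_type == "B"):] = 0  # center row, right of center
        mask[:, :, kh // 2 + 1:, :] = 0                          # all rows below center
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # re-apply the mask before every call
        return super().forward(x)

class ConditionalPixelCNNDecoder(nn.Module):
    """Concatenates z-derived feature maps with the image, then applies a few
    masked conv layers and a 256-way per-pixel softmax, as in Section 3.
    For simplicity the mask also covers the conditioning channels; real
    implementations often leave them unmasked, which is still autoregressively
    valid because z is fixed before sampling begins."""
    def __init__(self, z_channels=16, hidden=64, n_layers=3):
        super().__init__()
        layers = [MaskedConv2d("A", 1 + z_channels, hidden, 5, padding=2), nn.ReLU()]
        for _ in range(n_layers - 1):
            layers += [MaskedConv2d("B", hidden, hidden, 5, padding=2), nn.ReLU()]
        layers += [nn.Conv2d(hidden, 256, 1)]  # logits over 256 pixel intensities
        self.net = nn.Sequential(*layers)

    def forward(self, x, z_feats):
        # x: (B, 1, H, W) image; z_feats: (B, z_channels, H, W) upsampled latents
        return self.net(torch.cat([x, z_feats], dim=1))
```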
3.1 HIERARCHICAL ARCHITECTURE
Figure 3: We generate top-down through a hierarchical latent space decomposition. The inference network generates latent variables by composing successive deterministic functions to compute parameters of the stochastic random variables. Dotted lines denote contributions to the cost.
The performance of VAEs can be improved by stacking them to form a hierarchy of stochastic latent variables: in the simplest configuration, the VAE at each level models a distribution over the latent variables at the level below, with generation proceeding downward and inference upward through each level (i.e. as in Fig. 3). In convolutional architectures, the intermediate latent variables are typically organized into feature maps whose spatial resolution decreases toward higher levels.
Our model can be extended in the same way. At each level, the generator is a conditional PixelCNN over the latent features in the level below. This lets us autoregressively model not only the output distribution over pixels but also the prior over each set of latent feature maps. The higher-level PixelCNN decoders use diagonal Gaussian output layers instead of 256-way softmax, and model the dimensions within each spatial location (i.e. across feature maps) independently. This is done for simplicity, but is not a limitation of our model.
The output distributions over the latent variables for the generative and inference networks decompose as follows (see Fig. 3):

p(z_1, \ldots, z_L) = p(z_L)\, p(z_{L-1} \mid z_L) \cdots p(z_1 \mid z_2)
q(z_1, \ldots, z_L \mid x) = q(z_1 \mid x) \cdots q(z_L \mid x)

We optimize the negative of the evidence lower bound (the sum of the data negative log-likelihood and the KL divergence of the posterior over latents with the prior), with the convention p(z_L \mid z_{L+1}) \equiv p(z_L):

\mathcal{L}(x; q, p) = -\mathbb{E}_{z_1 \sim q(z_1 \mid x)}[\log p(x \mid z_1)] + D_{KL}\big(q(z_1, \ldots, z_L \mid x) \,\|\, p(z_1, \ldots, z_L)\big)
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)}[\log p(x \mid z_1)] + \int_{z_1, \ldots, z_L} \prod_{j=1}^{L} q(z_j \mid x) \sum_{i=1}^{L} \log \frac{q(z_i \mid x)}{p(z_i \mid z_{i+1})} \, dz_1 \cdots dz_L
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)}[\log p(x \mid z_1)] + \sum_{i=1}^{L} \int_{z_i, z_{i+1}} q(z_{i+1} \mid x)\, q(z_i \mid x) \log \frac{q(z_i \mid x)}{p(z_i \mid z_{i+1})} \, dz_i \, dz_{i+1}
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)}[\log p(x \mid z_1)] + \sum_{i=1}^{L} \mathbb{E}_{z_{i+1} \sim q(z_{i+1} \mid x)}\big[ D_{KL}(q(z_i \mid x) \,\|\, p(z_i \mid z_{i+1})) \big]

Note that when specifying an autoregressive prior over each latent level z_i, we can leverage masked convolutions (van den Oord et al., 2016b) and samples drawn independently from the approximate posterior q(z_i \mid x) (i.e. from the inference network) to train efficiently in parallel on GPUs.
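As an illustration of how the final expression is used in training, here is a minimal sketch assuming diagonal-Gaussian latents at every level and a single posterior sample to approximate each outer expectation; all function and argument names are hypothetical, not the authors' API.

```python
# Sketch of the final hierarchical ELBO expression above.
# Assumptions: PyTorch; q(z_i|x) = N(mu_q, e^{lv_q}) comes from the inference
# network; p(z_i|z_{i+1}) = N(mu_p, e^{lv_p}) comes from the level-(i+1)
# autoregressive prior evaluated at a single posterior sample of z_{i+1}
# (a one-sample Monte Carlo estimate); the top-level prior p(z_L) is N(0, I).
import torch

def gaussian_kl(mu_q, lv_q, mu_p, lv_p):
    """Closed-form KL(N(mu_q, e^lv_q) || N(mu_p, e^lv_p)), summed over latent dims."""
    kl = 0.5 * (lv_p - lv_q + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp() - 1.0)
    return kl.flatten(1).sum(dim=1)  # shape: (batch,)

def hierarchical_elbo_loss(recon_log_lik, q_params, p_params):
    """recon_log_lik: (batch,) values of log p(x|z_1) from the PixelCNN decoder.
    q_params, p_params: per-level lists of (mu, logvar) tensors, bottom to top."""
    loss = -recon_log_lik
    for (mu_q, lv_q), (mu_p, lv_p) in zip(q_params, p_params):
        loss = loss + gaussian_kl(mu_q, lv_q, mu_p, lv_p)  # one KL term per level
    return loss.mean()  # minimize the negative ELBO, averaged over the minibatch
```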
4 EXPERIMENTS
4.1 MNIST
Table 1: We compare performance of different models on binarized MNIST. "PixelCNN" is the model described in van den Oord et al. (2016a). Our corresponding latent variable model is "PixelVAE". "Gated PixelCNN" and "Gated PixelVAE" use the gated activation function in van den Oord et al. (2016b). In "Gated PixelVAE without upsampling", a linear transformation of the latent variable conditions the (gated) activation in every PixelCNN layer instead of using upsampling layers.

Model                                    NLL Test
DRAW (Gregor et al., 2016)               ≤ 80.97
Discrete VAE (Rolfe, 2016)               = 81.01
IAF VAE (Kingma et al., 2016)            ≤ 79.88
PixelCNN (van den Oord et al., 2016a)    = 81.30
PixelRNN (van den Oord et al., 2016a)    = 79.20
VLAE (Chen et al., 2016)                 = 79.03
Convolutional VAE                        ≤ 87.41
PixelVAE                                 ≤ 80.64
Gated PixelCNN (our implementation)      = 80.10
Gated PixelVAE                           79.48 (80.02)
Gated PixelVAE without upsampling        78.96 (79.58)

We evaluate our model on the binarized MNIST dataset (Salakhutdinov & Murray, 2008; Lecun et al., 1998) and report results in Table 1. We also experiment with a variant of our model in which each PixelCNN layer is directly conditioned on a linear transformation of the latent variable z (rather than transforming z first through several upsampling convolutional layers), as in van den Oord et al. (2016b), and find that this further improves performance, achieving an NLL upper bound comparable with the current state of the art. We estimate the marginal likelihood of our MNIST model using the importance sampling technique in Burda et al. (2015), which computes a lower bound on the likelihood whose tightness increases with the number of importance samples per datapoint. We use N = 5000 samples per datapoint (higher values don't appear to significantly affect the likelihood estimate) and achieve state-of-the-art likelihood.
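A sketch of this estimator follows, assuming for simplicity a single latent level; `model` and its three methods are hypothetical stand-ins for an encoder/decoder pair, not the paper's code.

```python
# Sketch of the Burda et al. (2015) importance-sampling likelihood estimate
# for a single-latent-level model, for one datapoint x. Assumptions: PyTorch;
# the `model` methods shown in the comments are hypothetical.
import math
import torch

def estimate_log_likelihood(x, model, n_samples=5000, chunk=100):
    """log p(x) >= logsumexp_k [log p(x|z_k) + log p(z_k) - log q(z_k|x)] - log N,
    where z_k ~ q(z|x); the bound tightens as n_samples grows."""
    log_w = []
    for _ in range(n_samples // chunk):
        z, log_qz = model.sample_posterior(x, chunk)  # z ~ q(z|x) and its log-density
        log_px_z = model.decoder_log_lik(x, z)        # log p(x|z) from the decoder
        log_pz = model.prior_log_density(z)           # log p(z)
        log_w.append(log_px_z + log_pz - log_qz)      # log importance weights
    log_w = torch.cat(log_w)                          # (n_samples,) for this datapoint
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```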
4.1.1 USING FEW PIXELCNN LAYERS
The masked convolutional layers in PixelCNN are computationally expensive because they operate at the full resolution of the image, and in order to cover the full receptive field of the image, PixelCNN typically needs a large number of them. One advantage of our architecture is that we can achieve strong performance with very few PixelCNN layers, which makes training and sampling from our model significantly faster than PixelCNN. To demonstrate this, we compare the performance of our model to PixelCNN as a function of the number of PixelCNN layers (Fig. 4a). We find that with fewer than 10 autoregressive layers, our PixelVAE model performs much better than PixelCNN. This is expected since with few layers, the effective receptive field of the PixelCNN output units is too small to capture long-range dependencies in the data.
We also observe that adding even a single PixelCNN layer has a dramatic impact on the NLL bound of PixelVAE. This is not surprising since the PixelCNN layer helps model local characteristics which are complementary to the global characteristics which a VAE with a factorized output distribution models.
Figure 4: (a) Comparison of the negative log-likelihood upper bound of PixelVAE and the NLL of PixelCNN as a function of the number of PixelCNN layers used. (b) Breakdown of the cost into KL divergence and reconstruction cost.
4.1.2 LATENT VARIABLE INFORMATION CONTENT
Because the autoregressive conditional likelihood function of PixelVAE is expressive enough to model some properties of the image distribution, it isn't forced to account for those properties through its latent variables as a standard VAE is. As a result, we can expect PixelVAE to learn latent representations which are invariant to textures, precise positions, and other attributes which are more efficiently modeled by the autoregressive decoder. To empirically validate this, we train PixelVAE models with different numbers of autoregressive layers (and hence, different PixelCNN receptive field sizes) and plot the breakdown of the NLL bound for each of these models into the reconstruction term log p(x|z) and the KL divergence term D_KL(q(z|x) || p(z)) (Fig. 4b). The KL divergence term can be interpreted as a measure of the information content in the posterior distribution q(z|x) (in the sense that in expectation, samples from q(z|x) require KL(q||p) fewer bits to code under a code optimized for q than under one optimized for p (Burnham & Anderson, 2003)), and hence models with smaller KL terms encode less information in their latent variables.
We observe a sharp drop in the KL divergence term when we use a single autoregressive layer compared to no autoregressive layers, indicating that the latent variables have been freed from having to encode small-scale details in the images. Since the addition of a single PixelCNN layer allows the decoder to model interactions between pixels which are at most 2 pixels away from each other (since our masked convolution filter size is 5×5), we can also say that most of the non-trivial (long-range) structure in the images is still encoded in the latent variables.
4.1.3 LATENT REPRESENTATIONS
On MNIST, given a sufficiently high-dimensional latent space, VAEs have already been shown to learn representations in which digits are well-separated (Sønderby et al., 2016). However, this task becomes more challenging as the capacity of the latent space is decreased. PixelVAE's flexible output distribution should allow it to learn a latent representation which is invariant to small details and thus better model global factors of variation given limited capacity.
To test this, we train a PixelVAE with a two-dimensional latent space, and an equivalent VAE. We visualize the distribution of test set images in latent space and observe that PixelVAE's latent representation separates digits significantly better than VAE (Figure 5).
Figure 5: Visualization of the MNIST test set in the latent space of (a) a convolutional VAE and (b) PixelVAE with two latent dimensions. PixelVAE separates classes more completely than VAE.
To quantify this difference, we train a K-nearest neighbors classifier in the latent space of each model and find that PixelVAE significantly outperforms VAE, achieving a test error of 7.2% compared to VAE's 22.9%. We also note that unlike VAE, PixelVAE learns a representation in which digit identity is largely disentangled from other generative factors.
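A minimal sketch of this evaluation protocol, using scikit-learn; `encode` is a hypothetical wrapper returning the posterior mean of q(z|x), and the value of K is illustrative (the text does not state it).

```python
# Sketch of the latent-space K-nearest-neighbors comparison above.
from sklearn.neighbors import KNeighborsClassifier

def latent_knn_error(encode, x_train, y_train, x_test, y_test, k=5):
    """Fit KNN on the 2-D latent codes of training digits; return test error rate."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(encode(x_train), y_train)
    return 1.0 - clf.score(encode(x_test), y_test)  # fraction misclassified
```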
4.2 LSUN BEDROOMS
To evaluate our model's performance with more data and complicated image distributions, we perform experiments on the LSUN bedrooms dataset (Yu et al., 2015). We use the same preprocessing as in Radford et al. (2015) to remove duplicate images in the dataset. For quantitative experiments we use a 32×32 downsampled version of the dataset, and we present samples from a model trained on the 64×64 version.
We train a two-level PixelVAE with latent variables at 1×1 and 8×8 spatial resolutions. We find that this outperforms both a two-level convolutional VAE with diagonal Gaussian output and a single-level PixelVAE in terms of log-likelihood and sample quality. We also try replacing the PixelCNN layers at the higher level with a diagonal Gaussian decoder and find that this hurts log-likelihood, which suggests that multi-scale PixelVAE uses those layers effectively to autoregressively model latent features.
4.2.1 FEATURES MODELED AT EACH LAYER
To see which features are modeled by each of the multiple layers, we draw multiple samples while varying the sampling noise at only a specific layer (either at the pixel-wise output or one of the latent layers) and visually inspect the resulting images (Fig. 6). When we vary only the pixel-level sampling (holding z1 and z2 fixed), samples are almost indistinguishable and differ only in precise positioning and shading details, suggesting that the model uses the pixel-level autoregressive distribution to model only these features. Samples where only the noise in the middle-level (8×8) latent variables is varied have different objects and colors, but appear to have similar basic room geometry and composition. Finally, samples with varied top-level latent variables have diverse room geometry.
Figure 6: We visually inspect the variation in image features captured by the different levels of stochasticity in our model. For the two-level latent variable model trained on 64×64 LSUN bedrooms, we vary only the top-level sampling noise (top) while holding the other levels constant, vary only the middle-level noise (middle), and vary only the bottom (pixel-level) noise (bottom). It appears that the top-level latent variables learn to model room structure and overall geometry, the middle-level latents model color and texture features, and the pixel-level distribution models low-level image characteristics such as texture, alignment, and shading.
4.3 64×64 IMAGENET
The 64×64 ImageNet generative modeling task was introduced in (van den Oord et al., 2016a) and involves density estimation of a difficult, highly varied image distribution. We trained a hierarchical PixelVAE model (with a similar architecture to the model in Section 4.2) on 64×64 ImageNet and report validation set likelihood in Table 2. Our model achieves a likelihood competitive with van den Oord et al. (2016a;b), despite being substantially less computationally complex. A visual inspection of ImageNet samples from our model (Fig. 7) also reveals them to be significantly more globally coherent than samples from PixelRNN.
Figure 7: Samples from hierarchical PixelVAE on the 64×64 ImageNet dataset.
Table 2: Model performance on 64×64 ImageNet. We achieve competitive NLL at a fraction of the computational complexity of other leading models.

Model                                       NLL Validation (Train)   FLOPs
Convolutional DRAW (Gregor et al., 2016)    ≤ 4.10 (4.04)            —
Real NVP (Dinh et al., 2016)                = 4.01 (3.93)            —
PixelRNN (van den Oord et al., 2016a)       = 3.63 (3.57)            154 × 10^9
Gated PixelCNN (van den Oord et al., 2016b) = 3.57 (3.48)            134 × 10^9
Hierarchical PixelVAE                       ≤ 3.62 (3.55)            63 × 10^9

5 CONCLUSIONS
In this paper, we introduced a VAE model for natural images with an autoregressive decoder that achieves strong performance across a number of datasets. We explored properties of our model, showing that it can generate more compressed latent representations than a standard VAE and that it can use fewer autoregressive layers than PixelCNN. We established a new state of the art on the binarized MNIST dataset, achieved competitive likelihood on 64×64 ImageNet, and demonstrated that our model generates high-quality samples on LSUN bedrooms.
The ability of PixelVAE to learn compressed representations in its latent variables by ignoring the small-scale structure in images is potentially very useful for downstream tasks. It would be interesting to further explore our model's capabilities for semi-supervised classification and representation learning in future work.
ACKNOWLEDGMENTS
The authors would like to thank the developers of Theano (Theano Development Team, 2016) and Blocks and Fuel (van Merriënboer et al., 2015). We acknowledge the support of the following agencies for research funding and computing support: Ubisoft, Nuance Foundation, NSERC, Calcul Quebec, Compute Canada, CIFAR, MEC Project TRA2014-57088-C2-1-R, SGR project 2014-SGR-1506 and TECNIOspring-FP7-ACCI grant. | BJq3MnrEg | Review | 7: Good paper, accept | UPDATE: The authors addressed all my concerns in the new version of the paper, so I raised my score and now recommend acceptance.
--------------
This paper combines the recent progress in variational autoencoders and autoregressive density modeling in the proposed PixelVAE model. The paper shows that it can match the NLL performance of a PixelCNN with a PixelVAE that has a much shallower PixelCNN decoder.
I think the idea of capturing the global structure with a VAE and modeling the local structure with a PixelCNN decoder makes a lot of sense and can prevent the blurry reconstructions/samples of VAEs. I especially like the hierarchical image generation experiments.
I have the following suggestions/concerns about the paper:
1) Is there any experiment showing that using the PixelCNN as the decoder of a VAE will result in better disentangling of high-level factors of variation in the hidden code? For example, the authors can train a PixelVAE and a VAE on MNIST with a 2D hidden code, visualize the 2D hidden codes for test images, color-coding each hidden code by digit, and show that the digits have better separation in the PixelVAE representation. A semi-supervised classification comparison between VAE and PixelVAE would also significantly improve the quality of the paper.
2) A similar idea is also presented in a concurrent ICLR submission, "Variational Lossy Autoencoder". It would be interesting to include a discussion in the paper and compare these works.
3) The answer to the pre-review questions made the architecture details of the paper much clearer, but I still ask the authors to include the exact architecture details of all the experiments in the paper and/or open-source the code. The clarity of the presentation is not satisfactory, and the experiments are difficult to reproduce.
4) As pointed out in my pre-review question, it would be great to include two sets of MNIST samples, perhaps in an appendix section: one from PixelCNN and the other from PixelVAE with the same PixelCNN depth, to illustrate that the hidden code in PixelVAE actually captures the global structure.
I will gladly raise the score if the authors address my concerns. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJKYvt5lg | ICLR.cc/2017/conference | 2017 | PixelVAE: A Latent Variable Model for Natural Images | ["Ishaan Gulrajani", "Kundan Kumar", "Faruk Ahmed", "Adrien Ali Taiga", "Francesco Visin", "David Vazquez", "Aaron Courville"] | Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64 × 64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
| ["Deep learning", "Unsupervised Learning"] | ABSTRACTNatural image modeling is a landmark challenge of unsupervised learning. Varia-tional Autoencoders (V AEs) learn a useful latent representation and model globalstructure well but have difficulty capturing small details. PixelCNN models de-tails very well, but lacks a latent code and is difficult to scale for capturing largestructures. We present PixelV AE, a V AE model with an autoregressive decoderbased on PixelCNN. Our model requires very few expensive autoregressive lay-ers compared to PixelCNN and learns latent codes that are more compressed thana standard V AE while still capturing most non-trivial structure. Finally, we ex-tend our model to a hierarchy of latent variables at different scales. Our modelachieves state-of-the-art performance on binarized MNIST, competitive perfor-mance on 6464ImageNet, and high-quality samples on the LSUN bedroomsdataset.1 I NTRODUCTIONBuilding high-quality generative models of natural images has been a long standing challenge. Al-though recent work has made significant progress (Kingma & Welling, 2014; van den Oord et al.,2016a;b), we are still far from generating convincing, high-resolution natural images.Many recent approaches to this problem are based on an efficient method for performing amor-tized, approximate inference in continuous stochastic latent variables: the variational autoencoder(V AE) (Kingma & Welling, 2014) jointly trains a top-down decoder generative neural network witha bottom-up encoder inference network. V AEs for images typically use rigid decoders that modelthe output pixels as conditionally independent given the latent variables. The resulting model learnsa useful latent representation of the data and effectively models global structure in images, but hasdifficulty capturing small-scale features such as textures and sharp edges due to the conditional inde-pendence of the output pixels, which significantly hurts both log-likelihood and quality of generatedsamples compared to other models.PixelCNNs (van den Oord et al., 2016a;b) are another state-of-the-art image model. Unlike V AEs,PixelCNNs model image densities autoregressively, pixel-by-pixel. This allows it to capture finedetails in images, as features such as edges can be precisely aligned. By leveraging carefully con-structed masked convolutions (van den Oord et al., 2016b), PixelCNNs can be trained efficiently inparallel on GPUs. Nonetheless, PixelCNN models are still very computationally expensive. Unliketypical convolutional architectures they do not apply downsampling between layers, which meansthat each layer is computationally expensive and that the depth of a PixelCNN must grow linearlywith the size of the images in order for it to capture dependencies between far-away pixels. Pix-elCNNs also do not explicitly learn a latent representation of the data, which can be useful fordownstream tasks such as semi-supervised learning.Corresponding author; igul222@gmail.com1Published as a conference paper at ICLR 2017Figure 1: Samples from hierarchical PixelV AE on the LSUN bedrooms dataset.Our contributions are as follows:We present PixelV AE, a latent variable model which combines the largely complementaryadvantages of V AEs and PixelCNNs by using PixelCNN-based masked convolutions in theconditional output distribution of a V AE.We extend PixelV AE to a hierarchical model with multiple stochastic layers and autore-gressive decoders at each layer. 
This lets us autoregressively model not only the outputpixels but also higher-level latent feature maps.On MNIST, we show that PixelV AE: (1) establishes a new state-of-the-art likelihood, (2)performs comparably to PixelCNN using far fewer computationally expensive autoregres-sive layers, (3) learns more compressed latent codes than a standard V AE while still ac-counting for most non-trivial structure, and (4) learns a latent code which separates digitsbetter than a standard V AE.We evaluate hierarchical PixelV AE on two challenging natural image datasets ( 6464ImageNet and LSUN bedrooms). On 6464ImageNet, we report likelihood competitivewith the state of the art at significantly less computational cost. On LSUN bedrooms,we generate high-quality samples and show that hierarchical PixelV AE learns to modeldifferent properties of the scene with each of its multiple layers.2 R ELATED WORKThere have been many recent advancements in generative modeling of images. We briefly discusssome of these below, especially those that are related to our approach.The Variational Autoencoder (V AE) (Kingma & Welling, 2014) is a framework to train neural net-works for generation and approximate inference jointly by optimizing a variational bound on thedata log-likelihood. The use of normalizing flows (Rezende & Mohamed, 2015) improves the flex-ibility of the V AE approximate posterior. Based on this, Kingma et al. (2016) develop an efficientformulation of an autoregressive approximate posterior model using MADE (Germain et al., 2015).In our work, we avoid the need for such flexible inference models by using autoregressive priors.The idea of using autoregressive conditional likelihoods in V AEs has been explored in the context oflanguage modeling in (Bowman et al., 2016), however in that work the use of latent variables failsto improve likelihood over a purely autoregressive model.2Published as a conference paper at ICLR 2017. concatImageEncoderLatentVariablesDecoder PixelCNN layersReconstructionORSampleGeneration: Autoregressive samplingTraining: Teacher forcingORORFigure 2: Our proposed model, PixelV AE, makes use of PixelCNN to model an autoregressive de-coder for a V AE. V AEs, which assume (conditional) independence among pixels, are known to sufferfrom blurry samples, while PixelCNN, modeling the joint distribution, produces sharp samples, butlack a latent representation that might be more useful for downstream tasks. PixelV AE combines thebest of both worlds, providing a meaningful latent representation, while producing sharp samples.Simultaneously to our work, Chen et al. (2016) present a V AE model for images with an an autore-gressive output distribution. In constrast to Chen et al. (2016), who focus on models with a singlelayer of latent variables, we also investigate models with a hierarchy of latent variables (and cor-responding autoregressive priors) and show that they enable us to scale our model to challengingnatural image datasets.Another promising recent approach is Generative Adversarial Networks (GANs) (Goodfellow et al.,2014), which pit a generator network and a discriminator network against each other. Recent workhas improved training stability (Radford et al., 2015; Salimans et al., 2016) and incorporated in-ference networks into the GAN framework (Dumoulin et al., 2016; Donahue et al., 2016). GANsgenerate compelling samples compared to our work, but still exhibit unstable training dynamics andare known to underfit by ignoring modes of the data distribution (Dumoulin et al., 2016). 
Further, itis difficult to accurately estimate the data likelihood in GANs.3 P IXEL VAE M ODELLike a V AE, our model jointly trains an “encoder” inference network, which maps an image xto aposterior distribution over latent variables z, and a “decoder” generative network, which models adistribution over xconditioned on z. The encoder and decoder networks are composed of a seriesof convolutional layers, respectively with strided convolutions for downsampling in the encoder andtransposed convolutions for upsampling in the decoder.As opposed to most V AE decoders that model each dimension of the output independently (forexample, by modeling the output as a Gaussian with diagonal covariance), we use a conditionalPixelCNN in the decoder. Our decoder models xas the product of each dimension xiconditionedon all previous dimensions and the latent variable z:p(xjz) =Yip(xijx1;:::;x i1;z)We first transform zthrough a series of convolutional layers into feature maps with the same spatialresolution as the output image and then concatenate the resulting feature maps with the image.The resulting concatenated feature maps are then further processed by several PixelCNN maskedconvolutional layers and a final PixelCNN 256-way softmax output.Unlike typical PixelCNN implementations, we use very few PixelCNN layers in our decoder, relyingon the latent variables to model the structure of the input at scales larger than the combined receptive3Published as a conference paper at ICLR 2017Figure 3: We generate top-down through a hierarchical latent space decomposition. The inferencenetwork generates latent variables by composing successive deterministic functions to compute pa-rameters of the stochastic random variables. Dotted lines denote contributions to the cost.field of our PixelCNN layers. As a result of this, our architecture captures global structure at a muchlower computational cost than a standard PixelCNN implementation.3.1 H IERARCHICAL ARCHITECTUREThe performance of V AEs can be improved by stacking them to form a hierarchy of stochastic latentvariables: in the simplest configuration, the V AE at each level models a distribution over the latentvariables at the level below, with generation proceeding downward and inference upward througheach level (i.e. as in Fig. 3). In convolutional architectures, the intermediate latent variables aretypically organized into feature maps whose spatial resolution decreases toward higher levels.Our model can be extended in the same way. At each level, the generator is a conditional PixelCNNover the latent features in the level below. This lets us autoregressively model not only the outputdistribution over pixels but also the prior over each set of latent feature maps. The higher-levelPixelCNN decoders use diagonal Gaussian output layers instead of 256-way softmax, and modelthe dimensions within each spatial location (i.e. across feature maps) independently. This is donefor simplicity, but is not a limitation of our model.The output distributions over the latent variables for the generative and inference networks decom-pose as follows (see Fig. 
3).p(z1;;zL) =p(zL)p(zL1jzL)p(z1jz2)q(z1;;zLjx) =q(z1jx)q(zLjx)We optimize the negative of the evidence lower bound (sum of data negative log-likelihood andKL-divergence of the posterior over latents with the prior).L(x;q;p ) =Ez1q(z1jx)logp(xjz1) +DKL(q(z1;zLjx)jjp(z1;;zL))=Ez1q(z1jx)logp(xjz1) +Zz1;;zLLYj=1q(zjjx)LXi=1logq(zijx)p(zijzi+1)dz1:::dzL=Ez1q(z1jx)logp(xjz1) +LXi=1Zz1;;zLLYj=1q(zjjx) logq(zijx)p(zijzi+1)dz1:::dzL=Ez1q(z1jx)logp(xjz1) +LXi=1Zzi;zi+1q(zi+1jx)q(zijx) logq(zijx)p(zijzi+1)dzidzi+14Published as a conference paper at ICLR 2017=Ez1q(z1jx)logp(xjz1) +LXi=1Ezi+1q(zi+1jx)DKL(q(zijx)jjp(zijzi+1))Note that when specifying an autoregressive prior over each latent level zi, we can leverage maskedconvolutions (van den Oord et al., 2016b) and samples drawn independently from the approximateposteriorq(zijx)(i.e. from the inference network) to train efficiently in parallel on GPUs.4 E XPERIMENTS4.1 MNISTModel NLL TestDRAW (Gregor et al., 2016) 80.97Discrete V AE (Rolfe, 2016) =81.01IAF V AE (Kingma et al., 2016) 79.88PixelCNN (van den Oord et al., 2016a) =81.30PixelRNN (van den Oord et al., 2016a) =79.20VLAE (Chen et al., 2016) =79.03Convolutional V AE 87.41PixelV AE 80.64Gated PixelCNN (our implementation) =80.10Gated PixelV AE 79.48 (80.02)Gated PixelV AE without upsampling 78.96 (79.58)Table 1: We compare performance of different models on binarized MNIST. “PixelCNN” is themodel described in van den Oord et al. (2016a). Our corresponding latent variable model is “Pixel-V AE”. “Gated PixelCNN” and “Gated PixelV AE” use the gated activation function in van den Oordet al. (2016b). In “Gated PixelV AE without upsampling”, a linear transformation of latent variableconditions the (gated) activation in every PixelCNN layer instead of using upsampling layers.We evaluate our model on the binarized MNIST dataset (Salakhutdinov & Murray, 2008; Lecunet al., 1998) and report results in Table 1. We also experiment with a variant of our model in whicheach PixelCNN layer is directly conditioned on a linear transformation of latent variable, z(ratherthan transforming zfirst through several upsampling convolutional layers) (as in (van den Oord et al.,2016b) and find that this further improves performance, achieving an NLL upper bound comparablewith the current state of the art. We estimate the marginal likelihood of our MNIST model usingthe importance sampling technique in Burda et al. (2015), which computes a lower bound on thelikelihood whose tightness increases with the number of importance samples per datapoint. We useN= 5000 samples per datapoint (higher values don’t appear to significantly affect the likelihoodestimate) and achieve state-of-the-art likelihood.4.1.1 U SING FEWPIXEL CNN L AYERSThe masked convolutional layers in PixelCNN are computationally expensive because they operateat the full resolution of the image and in order to cover the full receptive field of the image, PixelCNNtypically needs a large number of them. One advantage of our architecture is that we can achievestrong performance with very few PixelCNN layers, which makes training and sampling from ourmodel significantly faster than PixelCNN. To demonstrate this, we compare the performance of ourmodel to PixelCNN as a function of the number of PixelCNN layers (Fig. 4a). 
We find that withfewer than 10 autoregressive layers, our PixelV AE model performs much better than PixelCNN.This is expected since with few layers, the effective receptive field of the PixelCNN output units istoo small to capture long-range dependencies in the data.We also observe that adding even a single PixelCNN layer has a dramatic impact on the NLL boundof PixelV AE. This is not surprising since the PixelCNN layer helps model local characteristics which5Published as a conference paper at ICLR 20170 2 4 6 8 10 12 14#PixelCNN layers80828486889092949698Negative Log-likelihoodGated PixelVAE NLL boundGated PixelCNN NLL(a)NLL Upper Bound (b)Figure 4: (a) Comparison of Negative log-likelihood upper bound of PixelV AE and NLL for Pixel-CNN as a function of the number of PixelCNN layers used. (b) Cost break down into KL divergenceand reconstruction cost.are complementary to the global characteristics which a V AE with a factorized output distributionmodels.4.1.2 L ATENT VARIABLE INFORMATION CONTENTBecause the autoregressive conditional likelihood function of PixelV AE is expressive enough tomodel some properties of the image distribution, it isn’t forced to account for those propertiesthrough its latent variables as a standard V AE is. As a result, we can expect PixelV AE to learnlatent representations which are invariant to textures, precise positions, and other attributes whichare more efficiently modeled by the autoregressive decoder. To empirically validate this, we trainPixelV AE models with different numbers of autoregressive layers (and hence, different PixelCNNreceptive field sizes) and plot the breakdown of the NLL bound for each of these models into thereconstruction term logp(xjz)and the KL divergence term DKL(q(zjx)jjp(z))(Fig. 4b). The KLdivergence term can be interpreted as a measure of the information content in the posterior distri-butionq(zjx)(in the sense that in expectation, samples from q(zjx)requireKL(qjjp)fewer bits tocode under a code optimized for qthan under one optimized for p(Burnham & Anderson, 2003))and hence, models with smaller KL terms encode less information in their latent variables.We observe a sharp drop in the KL divergence term when we use a single autoregressive layercompared to no autoregressive layers, indicating that the latent variables have been freed from havingto encode small-scale details in the images. Since the addition of a single PixelCNN layer allows thedecoder to model interactions between pixels which are at most 2 pixels away from each other (sinceour masked convolution filter size is 55), we can also say that most of the non-trivial (long-range)structure in the images is still encoded in the latent variables.4.1.3 L ATENT REPRESENTATIONSOn MNIST, given a sufficiently high-dimensional latent space, V AEs have already been shown tolearn representations in which digits are well-separated (Sønderby et al., 2016). However, this taskbecomes more challenging as the capacity of the latent space is decreased. PixelV AE’s flexibleoutput distribution should allow it to learn a latent representation which is invariant to small detailsand thus better models global factors of variation given limited capacity.To test this, we train a PixelV AE with a two-dimensional latent space, and an equivalent V AE.We visualize the distribution of test set images in latent space and observe that PixelV AE’s latentrepresentation separates digits significantly better than V AE (Figure 5). 
To quantify this difference,we train a K-nearest neighbors classifier in the latent space of each model and find that PixelV AE6Published as a conference paper at ICLR 2017(a) (b)Figure 5: Visualization of the MNIST test set in the latent space of (a) convolutional V AE and (b)PixelV AE with two latent dimensions. PixelV AE separates classes more completely than V AE.Figure 6: We visually inspect the variation in image features captured by the different levels ofstochasticity in our model. For the two-level latent variable model trained on 6464LSUN bed-rooms, we vary only the top-level sampling noise (top) while holding the other levels constant,vary only the middle-level noise (middle) , and vary only the bottom (pixel-level) noise (bottom) .It appears that the top-level latent variables learn to model room structure and overall geometry,the middle-level latents model color and texture features, and the pixel-level distribution modelslow-level image characteristics such as texture, alignment, shading.significantly outperforms V AE, achieving a test error of 7.2% compared to V AE’s 22.9%. We alsonote that unlike V AE, PixelV AE learns a representation in which digit identity is largely disentangledfrom other generative factors.4.2 LSUN B EDROOMSTo evaluate our model’s performance with more data and complicated image distributions, we per-form experiments on the LSUN bedrooms dataset (Yu et al., 2015). We use the same preprocessingas in Radford et al. (2015) to remove duplicate images in the dataset. For quantitative experimentswe use a 3232downsampled version of the dataset, and we present samples from a model trainedon the 6464version.We train a two-level PixelV AE with latent variables at 11and88spatial resolutions. We find thatthis outperforms both a two-level convolutional V AE with diagonal Gaussian output and a single-level PixelV AE in terms of log-likelihood and sample quality. We also try replacing the PixelCNNlayers at the higher level with a diagonal Gaussian decoder and find that this hurts log-likelihood,which suggests that multi-scale PixelV AE uses those layers effectively to autoregressively modellatent features.7Published as a conference paper at ICLR 2017Figure 7: Samples from hierarchical PixelV AE on the 64x64 ImageNet dataset.4.2.1 F EATURES MODELED AT EACH LAYERTo see which features are modeled by each of the multiple layers, we draw multiple samples whilevarying the sampling noise at only a specific layer (either at the pixel-wise output or one of thelatent layers) and visually inspect the resulting images (Fig. 6). When we vary only the pixel-level sampling (holding z1andz2fixed), samples are almost indistinguishable and differ only inprecise positioning and shading details, suggesting that the model uses the pixel-level autoregressivedistribution to model only these features. Samples where only the noise in the middle-level (8 8) latent variables is varied have different objects and colors, but appear to have similar basic roomgeometry and composition. Finally, samples with varied top-level latent variables have diverse roomgeometry.4.3 6464IMAGE NETThe6464ImageNet generative modeling task was introduced in (van den Oord et al., 2016a) andinvolves density estimation of a difficult, highly varied image distribution. We trained a heirarchicalPixelV AE model (with a similar architecture to the model in section 4.2) on 6464ImageNet andreport validation set likelihood in Table 2. Our model achieves a likelihood competitive with van denOord et al. 
(2016a;b), despite being substantially less computationally complex. A visual inspectionof ImageNet samples from our model (Fig. 7) also reveals them to be significantly more globallycoherent than samples from PixelRNN.Model NLL Validation (Train) FLOPsConvolutional DRAW (Gregor et al., 2016) 4.10 (4.04) —Real NVP (Dinh et al., 2016) =4.01 (3.93) —PixelRNN (van den Oord et al., 2016a) =3.63 (3.57) 154109Gated PixelCNN (van den Oord et al., 2016b) =3.57 (3.48) 134109Hierarchical PixelV AE 3.62 (3.55) 63109Table 2: Model performance on 6464ImageNet. We achieve competitive NLL at a fraction of thecomputational complexity of other leading models.8Published as a conference paper at ICLR 20175 C ONCLUSIONSIn this paper, we introduced a V AE model for natural images with an autoregressive decoder thatachieves strong performance across a number of datasets. We explored properties of our model,showing that it can generate more compressed latent representations than a standard V AE and that itcan use fewer autoregressive layers than PixelCNN. We established a new state-of-the-art on bina-rized MNIST dataset in terms of likelihood on 6464ImageNet and demonstrated that our modelgenerates high-quality samples on LSUN bedrooms.The ability of PixelV AE to learn compressed representations in its latent variables by ignoring thesmall-scale structure in images is potentially very useful for downstream tasks. It would be interest-ing to further explore our model’s capabilities for semi-supervised classification and representationlearning in future work.ACKNOWLEDGMENTSThe authors would like to thank the developers of Theano (Theano Development Team, 2016) andBlocks and Fuel (van Merri ̈enboer et al., 2015). We acknowledge the support of the followingagencies for research funding and computing support: Ubisoft, Nuance Foundation, NSERC, Cal-cul Quebec, Compute Canada, CIFAR, MEC Project TRA2014-57088-C2-1-R, SGR project 2014-SGR-1506 and TECNIOspring-FP7-ACCI grant. | BJMF-WxNl | Nice paper | 7: Good paper, accept | All in all this is a nice paper.
I think the model is quite clever, attempting to get the best of latent variable models and auto-regressive models. The implementation and specific architecture choices (as discussed in the pre-review) also seem reasonable.
On the experimental side, I would have liked to see something more than NLL measurements and samples; for example, showing that this is useful for other tasks such as classification.
Though I don't think this is a huge leap forward, this is certainly a nice paper and I recommend acceptance. | 3: The reviewer is fairly confident that the evaluation is correct
BJKYvt5lg | ICLR.cc/2017/conference | 2017 | PixelVAE: A Latent Variable Model for Natural Images | ["Ishaan Gulrajani", "Kundan Kumar", "Faruk Ahmed", "Adrien Ali Taiga", "Francesco Visin", "David Vazquez", "Aaron Courville"] | Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64 × 64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
| ["Deep learning", "Unsupervised Learning"] | ABSTRACTNatural image modeling is a landmark challenge of unsupervised learning. Varia-tional Autoencoders (V AEs) learn a useful latent representation and model globalstructure well but have difficulty capturing small details. PixelCNN models de-tails very well, but lacks a latent code and is difficult to scale for capturing largestructures. We present PixelV AE, a V AE model with an autoregressive decoderbased on PixelCNN. Our model requires very few expensive autoregressive lay-ers compared to PixelCNN and learns latent codes that are more compressed thana standard V AE while still capturing most non-trivial structure. Finally, we ex-tend our model to a hierarchy of latent variables at different scales. Our modelachieves state-of-the-art performance on binarized MNIST, competitive perfor-mance on 6464ImageNet, and high-quality samples on the LSUN bedroomsdataset.1 I NTRODUCTIONBuilding high-quality generative models of natural images has been a long standing challenge. Al-though recent work has made significant progress (Kingma & Welling, 2014; van den Oord et al.,2016a;b), we are still far from generating convincing, high-resolution natural images.Many recent approaches to this problem are based on an efficient method for performing amor-tized, approximate inference in continuous stochastic latent variables: the variational autoencoder(V AE) (Kingma & Welling, 2014) jointly trains a top-down decoder generative neural network witha bottom-up encoder inference network. V AEs for images typically use rigid decoders that modelthe output pixels as conditionally independent given the latent variables. The resulting model learnsa useful latent representation of the data and effectively models global structure in images, but hasdifficulty capturing small-scale features such as textures and sharp edges due to the conditional inde-pendence of the output pixels, which significantly hurts both log-likelihood and quality of generatedsamples compared to other models.PixelCNNs (van den Oord et al., 2016a;b) are another state-of-the-art image model. Unlike V AEs,PixelCNNs model image densities autoregressively, pixel-by-pixel. This allows it to capture finedetails in images, as features such as edges can be precisely aligned. By leveraging carefully con-structed masked convolutions (van den Oord et al., 2016b), PixelCNNs can be trained efficiently inparallel on GPUs. Nonetheless, PixelCNN models are still very computationally expensive. Unliketypical convolutional architectures they do not apply downsampling between layers, which meansthat each layer is computationally expensive and that the depth of a PixelCNN must grow linearlywith the size of the images in order for it to capture dependencies between far-away pixels. Pix-elCNNs also do not explicitly learn a latent representation of the data, which can be useful fordownstream tasks such as semi-supervised learning.Corresponding author; igul222@gmail.com1Published as a conference paper at ICLR 2017Figure 1: Samples from hierarchical PixelV AE on the LSUN bedrooms dataset.Our contributions are as follows:We present PixelV AE, a latent variable model which combines the largely complementaryadvantages of V AEs and PixelCNNs by using PixelCNN-based masked convolutions in theconditional output distribution of a V AE.We extend PixelV AE to a hierarchical model with multiple stochastic layers and autore-gressive decoders at each layer. 
This lets us autoregressively model not only the outputpixels but also higher-level latent feature maps.On MNIST, we show that PixelV AE: (1) establishes a new state-of-the-art likelihood, (2)performs comparably to PixelCNN using far fewer computationally expensive autoregres-sive layers, (3) learns more compressed latent codes than a standard V AE while still ac-counting for most non-trivial structure, and (4) learns a latent code which separates digitsbetter than a standard V AE.We evaluate hierarchical PixelV AE on two challenging natural image datasets ( 6464ImageNet and LSUN bedrooms). On 6464ImageNet, we report likelihood competitivewith the state of the art at significantly less computational cost. On LSUN bedrooms,we generate high-quality samples and show that hierarchical PixelV AE learns to modeldifferent properties of the scene with each of its multiple layers.2 R ELATED WORKThere have been many recent advancements in generative modeling of images. We briefly discusssome of these below, especially those that are related to our approach.The Variational Autoencoder (V AE) (Kingma & Welling, 2014) is a framework to train neural net-works for generation and approximate inference jointly by optimizing a variational bound on thedata log-likelihood. The use of normalizing flows (Rezende & Mohamed, 2015) improves the flex-ibility of the V AE approximate posterior. Based on this, Kingma et al. (2016) develop an efficientformulation of an autoregressive approximate posterior model using MADE (Germain et al., 2015).In our work, we avoid the need for such flexible inference models by using autoregressive priors.The idea of using autoregressive conditional likelihoods in V AEs has been explored in the context oflanguage modeling in (Bowman et al., 2016), however in that work the use of latent variables failsto improve likelihood over a purely autoregressive model.2Published as a conference paper at ICLR 2017. concatImageEncoderLatentVariablesDecoder PixelCNN layersReconstructionORSampleGeneration: Autoregressive samplingTraining: Teacher forcingORORFigure 2: Our proposed model, PixelV AE, makes use of PixelCNN to model an autoregressive de-coder for a V AE. V AEs, which assume (conditional) independence among pixels, are known to sufferfrom blurry samples, while PixelCNN, modeling the joint distribution, produces sharp samples, butlack a latent representation that might be more useful for downstream tasks. PixelV AE combines thebest of both worlds, providing a meaningful latent representation, while producing sharp samples.Simultaneously to our work, Chen et al. (2016) present a V AE model for images with an an autore-gressive output distribution. In constrast to Chen et al. (2016), who focus on models with a singlelayer of latent variables, we also investigate models with a hierarchy of latent variables (and cor-responding autoregressive priors) and show that they enable us to scale our model to challengingnatural image datasets.Another promising recent approach is Generative Adversarial Networks (GANs) (Goodfellow et al.,2014), which pit a generator network and a discriminator network against each other. Recent workhas improved training stability (Radford et al., 2015; Salimans et al., 2016) and incorporated in-ference networks into the GAN framework (Dumoulin et al., 2016; Donahue et al., 2016). GANsgenerate compelling samples compared to our work, but still exhibit unstable training dynamics andare known to underfit by ignoring modes of the data distribution (Dumoulin et al., 2016). 
Further, it is difficult to accurately estimate the data likelihood in GANs.

3 PIXELVAE MODEL

Like a VAE, our model jointly trains an "encoder" inference network, which maps an image x to a posterior distribution over latent variables z, and a "decoder" generative network, which models a distribution over x conditioned on z. The encoder and decoder networks are composed of a series of convolutional layers, respectively with strided convolutions for downsampling in the encoder and transposed convolutions for upsampling in the decoder.

As opposed to most VAE decoders that model each dimension of the output independently (for example, by modeling the output as a Gaussian with diagonal covariance), we use a conditional PixelCNN in the decoder. Our decoder models x as the product of each dimension x_i conditioned on all previous dimensions and the latent variable z:

p(x \mid z) = \prod_i p(x_i \mid x_1, \ldots, x_{i-1}, z)

We first transform z through a series of convolutional layers into feature maps with the same spatial resolution as the output image and then concatenate the resulting feature maps with the image. The resulting concatenated feature maps are then further processed by several PixelCNN masked convolutional layers and a final PixelCNN 256-way softmax output.

Unlike typical PixelCNN implementations, we use very few PixelCNN layers in our decoder, relying on the latent variables to model the structure of the input at scales larger than the combined receptive field of our PixelCNN layers. As a result of this, our architecture captures global structure at a much lower computational cost than a standard PixelCNN implementation.

Figure 3: We generate top-down through a hierarchical latent space decomposition. The inference network generates latent variables by composing successive deterministic functions to compute parameters of the stochastic random variables. Dotted lines denote contributions to the cost.

3.1 HIERARCHICAL ARCHITECTURE

The performance of VAEs can be improved by stacking them to form a hierarchy of stochastic latent variables: in the simplest configuration, the VAE at each level models a distribution over the latent variables at the level below, with generation proceeding downward and inference upward through each level (i.e. as in Fig. 3). In convolutional architectures, the intermediate latent variables are typically organized into feature maps whose spatial resolution decreases toward higher levels.

Our model can be extended in the same way. At each level, the generator is a conditional PixelCNN over the latent features in the level below. This lets us autoregressively model not only the output distribution over pixels but also the prior over each set of latent feature maps. The higher-level PixelCNN decoders use diagonal Gaussian output layers instead of the 256-way softmax, and model the dimensions within each spatial location (i.e. across feature maps) independently. This is done for simplicity, but is not a limitation of our model.

The output distributions over the latent variables for the generative and inference networks decompose as follows (see Fig. 3):
p(z_1, \ldots, z_L) = p(z_L)\, p(z_{L-1} \mid z_L) \cdots p(z_1 \mid z_2)

q(z_1, \ldots, z_L \mid x) = q(z_1 \mid x) \cdots q(z_L \mid x)

We optimize the negative of the evidence lower bound (sum of data negative log-likelihood and KL-divergence of the posterior over latents with the prior):

\mathcal{L}(x, q, p) = -\mathbb{E}_{z_1 \sim q(z_1 \mid x)} \log p(x \mid z_1) + D_{\mathrm{KL}}\big(q(z_1, \ldots, z_L \mid x) \,\|\, p(z_1, \ldots, z_L)\big)
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)} \log p(x \mid z_1) + \int_{z_1, \ldots, z_L} \prod_{j=1}^{L} q(z_j \mid x) \sum_{i=1}^{L} \log \frac{q(z_i \mid x)}{p(z_i \mid z_{i+1})} \, dz_1 \cdots dz_L
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)} \log p(x \mid z_1) + \sum_{i=1}^{L} \int_{z_1, \ldots, z_L} \prod_{j=1}^{L} q(z_j \mid x) \log \frac{q(z_i \mid x)}{p(z_i \mid z_{i+1})} \, dz_1 \cdots dz_L
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)} \log p(x \mid z_1) + \sum_{i=1}^{L} \int_{z_i, z_{i+1}} q(z_{i+1} \mid x)\, q(z_i \mid x) \log \frac{q(z_i \mid x)}{p(z_i \mid z_{i+1})} \, dz_i \, dz_{i+1}
= -\mathbb{E}_{z_1 \sim q(z_1 \mid x)} \log p(x \mid z_1) + \sum_{i=1}^{L} \mathbb{E}_{z_{i+1} \sim q(z_{i+1} \mid x)} D_{\mathrm{KL}}\big(q(z_i \mid x) \,\|\, p(z_i \mid z_{i+1})\big)

Note that when specifying an autoregressive prior over each latent level z_i, we can leverage masked convolutions (van den Oord et al., 2016b) and samples drawn independently from the approximate posterior q(z_i | x) (i.e. from the inference network) to train efficiently in parallel on GPUs.

4 EXPERIMENTS

4.1 MNIST

Model                                          NLL Test
DRAW (Gregor et al., 2016)                     80.97
Discrete VAE (Rolfe, 2016)                     = 81.01
IAF VAE (Kingma et al., 2016)                  79.88
PixelCNN (van den Oord et al., 2016a)          = 81.30
PixelRNN (van den Oord et al., 2016a)          = 79.20
VLAE (Chen et al., 2016)                       = 79.03
Convolutional VAE                              87.41
PixelVAE                                       80.64
Gated PixelCNN (our implementation)            = 80.10
Gated PixelVAE                                 79.48 (80.02)
Gated PixelVAE without upsampling              78.96 (79.58)

Table 1: We compare performance of different models on binarized MNIST. "PixelCNN" is the model described in van den Oord et al. (2016a). Our corresponding latent variable model is "PixelVAE". "Gated PixelCNN" and "Gated PixelVAE" use the gated activation function in van den Oord et al. (2016b). In "Gated PixelVAE without upsampling", a linear transformation of the latent variable conditions the (gated) activation in every PixelCNN layer instead of using upsampling layers.

We evaluate our model on the binarized MNIST dataset (Salakhutdinov & Murray, 2008; Lecun et al., 1998) and report results in Table 1. We also experiment with a variant of our model in which each PixelCNN layer is directly conditioned on a linear transformation of the latent variable z (rather than transforming z first through several upsampling convolutional layers), as in van den Oord et al. (2016b), and find that this further improves performance, achieving an NLL upper bound comparable with the current state of the art. We estimate the marginal likelihood of our MNIST model using the importance sampling technique in Burda et al. (2015), which computes a lower bound on the likelihood whose tightness increases with the number of importance samples per datapoint. We use N = 5000 samples per datapoint (higher values don't appear to significantly affect the likelihood estimate) and achieve state-of-the-art likelihood.

4.1.1 USING FEW PIXELCNN LAYERS

The masked convolutional layers in PixelCNN are computationally expensive because they operate at the full resolution of the image, and in order to cover the full receptive field of the image, PixelCNN typically needs a large number of them. One advantage of our architecture is that we can achieve strong performance with very few PixelCNN layers, which makes training and sampling from our model significantly faster than PixelCNN. To demonstrate this, we compare the performance of our model to PixelCNN as a function of the number of PixelCNN layers (Fig. 4a).
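To make the shallow-decoder idea of Section 3 concrete, the following is a minimal PyTorch sketch of a PixelVAE-style decoder: a handful of masked convolutions over the image, conditioned on latent feature maps upsampled from z. This is an illustration under stated assumptions (28x28 single-channel inputs, the layer widths, and the unmasked 1x1 conditioning path are our choices, not taken from the paper), not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    # PixelCNN masked convolution: type 'A' also hides the current pixel,
    # type 'B' (used in deeper layers) may see it.
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        mask = torch.ones_like(self.weight)                      # (out, in, kH, kW)
        _, _, kH, kW = self.weight.shape
        mask[:, :, kH // 2, kW // 2 + (mask_type == "B"):] = 0   # center row, at/after center
        mask[:, :, kH // 2 + 1:, :] = 0                          # all rows below center
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding)

class ShallowPixelVAEDecoder(nn.Module):
    # Very few masked layers, so global structure must come from z.
    def __init__(self, z_dim=64, hidden=64, n_layers=3):
        super().__init__()
        # upsample z into feature maps at image resolution (here 28x28)
        self.expand = nn.Sequential(
            nn.ConvTranspose2d(z_dim, hidden, 4, stride=4),    # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(hidden, hidden, 7, stride=7))   # 4x4 -> 28x28
        self.first = MaskedConv2d("A", 1, hidden, 5, padding=2)
        self.cond = nn.Conv2d(hidden, hidden, 1)   # unmasked 1x1 conditioning path
        self.rest = nn.ModuleList(
            [MaskedConv2d("B", hidden, hidden, 5, padding=2)
             for _ in range(n_layers - 1)])
        self.out = nn.Conv2d(hidden, 256, 1)        # 256-way logits per pixel

    def forward(self, x, z):
        z_maps = self.expand(z.view(z.size(0), -1, 1, 1))
        h = F.relu(self.first(x) + self.cond(z_maps))
        for conv in self.rest:
            h = F.relu(conv(h) + self.cond(z_maps))   # re-inject conditioning
        return self.out(h)                            # (B, 256, 28, 28) logits

Training would minimize the usual negative ELBO, with the reconstruction term a 256-way cross-entropy over these logits; since the receptive field of three 5x5 masked layers is small, long-range structure must be carried by z, which is exactly the effect the paper measures in Fig. 4b.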
We find that with fewer than 10 autoregressive layers, our PixelVAE model performs much better than PixelCNN. This is expected since, with few layers, the effective receptive field of the PixelCNN output units is too small to capture long-range dependencies in the data.

We also observe that adding even a single PixelCNN layer has a dramatic impact on the NLL bound of PixelVAE. This is not surprising since the PixelCNN layer helps model local characteristics which are complementary to the global characteristics which a VAE with a factorized output distribution models.

Figure 4: (a) Comparison of negative log-likelihood upper bound of PixelVAE and NLL for PixelCNN as a function of the number of PixelCNN layers used. (b) Cost breakdown into KL divergence and reconstruction cost.

4.1.2 LATENT VARIABLE INFORMATION CONTENT

Because the autoregressive conditional likelihood function of PixelVAE is expressive enough to model some properties of the image distribution, it isn't forced to account for those properties through its latent variables as a standard VAE is. As a result, we can expect PixelVAE to learn latent representations which are invariant to textures, precise positions, and other attributes which are more efficiently modeled by the autoregressive decoder. To empirically validate this, we train PixelVAE models with different numbers of autoregressive layers (and hence, different PixelCNN receptive field sizes) and plot the breakdown of the NLL bound for each of these models into the reconstruction term \log p(x \mid z) and the KL divergence term D_{\mathrm{KL}}(q(z \mid x) \,\|\, p(z)) (Fig. 4b). The KL divergence term can be interpreted as a measure of the information content in the posterior distribution q(z \mid x) (in the sense that in expectation, samples from q(z \mid x) require KL(q \| p) fewer bits to code under a code optimized for q than under one optimized for p (Burnham & Anderson, 2003)) and hence, models with smaller KL terms encode less information in their latent variables.

We observe a sharp drop in the KL divergence term when we use a single autoregressive layer compared to no autoregressive layers, indicating that the latent variables have been freed from having to encode small-scale details in the images. Since the addition of a single PixelCNN layer allows the decoder to model interactions between pixels which are at most 2 pixels away from each other (since our masked convolution filter size is 5x5), we can also say that most of the non-trivial (long-range) structure in the images is still encoded in the latent variables.

4.1.3 LATENT REPRESENTATIONS

On MNIST, given a sufficiently high-dimensional latent space, VAEs have already been shown to learn representations in which digits are well-separated (Sønderby et al., 2016). However, this task becomes more challenging as the capacity of the latent space is decreased. PixelVAE's flexible output distribution should allow it to learn a latent representation which is invariant to small details and thus better models global factors of variation given limited capacity.

To test this, we train a PixelVAE with a two-dimensional latent space, and an equivalent VAE. We visualize the distribution of test set images in latent space and observe that PixelVAE's latent representation separates digits significantly better than VAE (Figure 5).
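For reference, the KL term plotted in Fig. 4b has a simple closed form in the standard setting where the posterior is a diagonal Gaussian and the (top-level) prior is a standard normal; this is a textbook identity rather than anything specific to PixelVAE, and at intermediate levels the conditional prior p(z_i \mid z_{i+1}) replaces \mathcal{N}(0, I):

D_{\mathrm{KL}}\big(\mathcal{N}(\mu, \operatorname{diag}(\sigma^2)) \,\|\, \mathcal{N}(0, I)\big) = \frac{1}{2} \sum_{d} \left( \mu_d^2 + \sigma_d^2 - \log \sigma_d^2 - 1 \right)

Dividing by \ln 2 converts this quantity from nats to bits, matching the coding-length interpretation given above.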
To quantify this difference, we train a K-nearest neighbors classifier in the latent space of each model and find that PixelVAE significantly outperforms VAE, achieving a test error of 7.2% compared to VAE's 22.9%. We also note that unlike VAE, PixelVAE learns a representation in which digit identity is largely disentangled from other generative factors.

Figure 5: Visualization of the MNIST test set in the latent space of (a) convolutional VAE and (b) PixelVAE with two latent dimensions. PixelVAE separates classes more completely than VAE.

Figure 6: We visually inspect the variation in image features captured by the different levels of stochasticity in our model. For the two-level latent variable model trained on 64x64 LSUN bedrooms, we vary only the top-level sampling noise (top) while holding the other levels constant, vary only the middle-level noise (middle), and vary only the bottom (pixel-level) noise (bottom). It appears that the top-level latent variables learn to model room structure and overall geometry, the middle-level latents model color and texture features, and the pixel-level distribution models low-level image characteristics such as texture, alignment, and shading.

4.2 LSUN BEDROOMS

To evaluate our model's performance with more data and complicated image distributions, we perform experiments on the LSUN bedrooms dataset (Yu et al., 2015). We use the same preprocessing as in Radford et al. (2015) to remove duplicate images in the dataset. For quantitative experiments we use a 32x32 downsampled version of the dataset, and we present samples from a model trained on the 64x64 version.

We train a two-level PixelVAE with latent variables at 1x1 and 8x8 spatial resolutions. We find that this outperforms both a two-level convolutional VAE with diagonal Gaussian output and a single-level PixelVAE in terms of log-likelihood and sample quality. We also try replacing the PixelCNN layers at the higher level with a diagonal Gaussian decoder and find that this hurts log-likelihood, which suggests that multi-scale PixelVAE uses those layers effectively to autoregressively model latent features.

Figure 7: Samples from hierarchical PixelVAE on the 64x64 ImageNet dataset.

4.2.1 FEATURES MODELED AT EACH LAYER

To see which features are modeled by each of the multiple layers, we draw multiple samples while varying the sampling noise at only a specific layer (either at the pixel-wise output or one of the latent layers) and visually inspect the resulting images (Fig. 6). When we vary only the pixel-level sampling (holding z_1 and z_2 fixed), samples are almost indistinguishable and differ only in precise positioning and shading details, suggesting that the model uses the pixel-level autoregressive distribution to model only these features. Samples where only the noise in the middle-level (8x8) latent variables is varied have different objects and colors, but appear to have similar basic room geometry and composition. Finally, samples with varied top-level latent variables have diverse room geometry.

4.3 64x64 IMAGENET

The 64x64 ImageNet generative modeling task was introduced in van den Oord et al. (2016a) and involves density estimation of a difficult, highly varied image distribution. We trained a hierarchical PixelVAE model (with a similar architecture to the model in Section 4.2) on 64x64 ImageNet and report validation set likelihood in Table 2. Our model achieves a likelihood competitive with van den Oord et al.
(2016a;b), despite being substantially less computationally complex. A visual inspection of ImageNet samples from our model (Fig. 7) also reveals them to be significantly more globally coherent than samples from PixelRNN.

Model                                          NLL Validation (Train)    FLOPs
Convolutional DRAW (Gregor et al., 2016)       4.10 (4.04)               —
Real NVP (Dinh et al., 2016)                   = 4.01 (3.93)             —
PixelRNN (van den Oord et al., 2016a)          = 3.63 (3.57)             154 x 10^9
Gated PixelCNN (van den Oord et al., 2016b)    = 3.57 (3.48)             134 x 10^9
Hierarchical PixelVAE                          3.62 (3.55)               63 x 10^9

Table 2: Model performance on 64x64 ImageNet. We achieve competitive NLL at a fraction of the computational complexity of other leading models.

5 CONCLUSIONS

In this paper, we introduced a VAE model for natural images with an autoregressive decoder that achieves strong performance across a number of datasets. We explored properties of our model, showing that it can generate more compressed latent representations than a standard VAE and that it can use fewer autoregressive layers than PixelCNN. We established a new state-of-the-art likelihood on the binarized MNIST dataset, reported competitive likelihood on 64x64 ImageNet, and demonstrated that our model generates high-quality samples on LSUN bedrooms.

The ability of PixelVAE to learn compressed representations in its latent variables by ignoring the small-scale structure in images is potentially very useful for downstream tasks. It would be interesting to further explore our model's capabilities for semi-supervised classification and representation learning in future work.

ACKNOWLEDGMENTS

The authors would like to thank the developers of Theano (Theano Development Team, 2016) and Blocks and Fuel (van Merriënboer et al., 2015). We acknowledge the support of the following agencies for research funding and computing support: Ubisoft, Nuance Foundation, NSERC, Calcul Quebec, Compute Canada, CIFAR, MEC Project TRA2014-57088-C2-1-R, SGR project 2014-SGR-1506 and TECNIOspring-FP7-ACCI grant. | HJf5zCW4g | 6: Marginally above acceptance threshold | The paper combines a hierarchical Variational Autoencoder with PixelCNNs to model the distribution of natural images.
They report good (although not state of the art) likelihoods on natural images and briefly start to explore what information is encoded by the latent representations in the hierarchical VAE.
I believe that combining the PixelCNN with a VAE, as was already suggested in the PixelCNN paper, is an important and interesting contribution.
The encoding of high-, mid- and low-level variations at the different latent stages is interesting but does not seem terribly surprising, since the size of the image regions that the latent variables model is also at the corresponding scale. Showing that the PixelCNN improves the latent representation of the VAE with regard to some interesting task would be a much stronger result.
Also, while the paper claims that combining the PixelCNN with the VAE reduces the number of computationally expensive autoregressive layers, it remains unclear how much more efficient their whole model is than a PixelCNN with comparable likelihood.
In general, I find the clarity of the presentation wanting. For example, I agree with reviewer1 that the exact structure of their model remains unclear from the paper and would be difficult to reproduce.
| 3: The reviewer is fairly confident that the evaluation is correct |
|
ryCcJaqgl | ICLR.cc/2017/conference | 2017 | TreNet: Hybrid Neural Networks for Learning the Local Trend in Time Series | ["Tao Lin", "Tian Guo", "Karl Aberer"] | Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market, resource allocation in data centers and load schedule in smart grid. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series. Meanwhile, considering long-range dependencies existing in the sequence of historical local trends, TreNet uses a long-short term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn joint representation from the features captured by CNN and LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM, HMM method and various kernel based baselines on real datasets. | ["trenet", "local trend", "time series", "lstm", "hybrid neural networks", "time series trenet", "intermediate upward", "downward patterns", "time series data"] | ABSTRACTLocal trends of time series characterize the intermediate upward and downwardpatterns of time series. Learning and forecasting the local trend in time series dataplay an important role in many real applications, ranging from investing in thestock market, resource allocation in data centers and load schedule in smart grid.Inspired by the recent successes of neural networks, in this paper we proposeTreNet, a novel end-to-end hybrid neural network that predicts the local trendof time series based on local and global contextual features. TreNet leveragesconvolutional neural networks (CNNs) to extract salient features from local rawdata of time series. Meanwhile, considering long-range dependencies existing inthe sequence of historical local trends, TreNet uses a long-short term memoryrecurrent neural network (LSTM) to capture such dependency. Furthermore, forpredicting the local trend, a feature fusion layer is designed in TreNet to learnjoint representation from the features captured by CNN and LSTM. Our pro-posed TreNet demonstrates its effectiveness by outperforming conventional CNN,LSTM, HMM method and various kernel based baselines on real datasets.1 I NTRODUCTIONTime series, which is a sequence of data points in time order, is being generated in a wide spectrum ofdomains, such as daily fluctuation of the stock market, power consumption records of households,performance monitoring data of clusters in data centres, and so on. In many applications, usersare interested in understanding the evolving trend in time series and forecasting the trend, sincethe conventional prediction on specific data points could deliver very little information about thesemantics and dynamics of the underlying process generating the time series. For instance, timeseries in Figure 1 are from the household power consumption dataset1. Figure 1(a) shows some rawdata points of time series. 
Though point AandBhave approximately the same value, the underlyingsystem is likely to be in two different states when it outputs AandB, becauseAis in an upwardtrend whileBis in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand,even when two points with the similar value are both in the upward trend, e.g, point AandC, thedifferent slopes and durations of the trends where point AandClocate, could also indicate differentstates of the underlying process.Particularly, in this paper we are interested in the local trend of time series which measures the in-termediate local behaviour, i.e., upward or downward pattern of time series that characterized by theslope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over raw datapoints of time series represent the local trends extracted from a real household power consumptiontime series. For the ease of presentation, we will use the term trend and local trend interchangeablyin the rest of the paper. Learning and forecasting local trends are quite useful in a wide range ofapplications. For instance, in the stock market, due to its high volatility and noisy environment,in reality predicting stock price trends is preferred over the prediction of the stock market absolutevalues (Atsalakis & Valavanis, 2009). Predicting the local trend of stock price time series empowersThese two authors contributed equally.1https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption1Under review as a conference paper at ICLR 2017traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009).In the smart energy domain, knowing the predictive local trend of power consumption time se-ries enables energy providers to schedule power supply and maximize energy utilization (Zhao &Magoul `es, 2012).Meanwhile, in recent years neural networks have shown the dramatical power in a wide spectrum ofdomains, e.g., natural language processing, computer vision, speech recognition, time series anal-ysis, etc. (Wang et al., 2016b; Sutskever et al., 2014; Yang et al., 2015; Lipton et al., 2015). Fortime series data, two mainstream architectures, convolutional neural network (CNN) and recurrentneural network (RNN) have been exploited in different time series related tasks, e.g., RNN in timeseries classification (Lipton et al., 2015) and CNN in activity recognition and snippet learning (Liuet al., 2015; Yang et al., 2015). RNN is powerful in discovering the dependency in sequence data(Jain et al., 2014; Graves, 2012) and particularly the Long Short-Term Memory (LSTM) RNN workswell on sequence data with long-term dependencies (Chung et al., 2014; Hochreiter & Schmidhuber,1997) due to the internal memory mechanism. CNN excels in exacting effective representation oflocal salience from raw data of time series by enforcing a local connectivity between neurons. (Yanget al., 2015; Hammerla et al., 2016).Figure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c)Effect of local raw data on the trend forecasting.In this paper, we focus on learning and forecasting the local trends in time series via neural networks.This involves learning different aspects of the data. On one hand, the sequence of historical localtrends describes the long-term contextual information of time series and thus naturally affects theevolution of the following local trend. 
On the other hand, the recent raw data points of time series(Wang et al., 2011; Batal et al., 2012), which represent the local variation and behaviour of timeseries, affect the evolving of the following trend as well and have particular predictive power forabruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trend 1,2and3present a continuous upward pattern. Then when we aim at predicting the subsequent trend oftime series at the end of the third local trend, the previous three successive upward trends outline aprobable increasing trend afterwards. However, the local data around the end of the third trend, e.g.,data points in the red circle, indicate that time series could stabilize and even decrease. The datapoints after the third trend indeed present a decreasing trend indicated by the red dotted segment. Inthis case, the subsequent trend has more dependency on the local data points. Therefore, it is highlydesired to develop a systematic way to model such various hidden and complementary dependenciesin time series for the local trend forecasting problem.To this end, we propose a end-to-end hybrid neural network, referred to as TreNet. In particular,it consists of a LSTM recurrent neural network to capture the long dependency in historical localtrends, a convolutional neural network to extract local features from local raw data of time series,and a feature fusion layer to learn joint representation to take advantage of both features drawn fromCNN and LSTM. Such joint representation is used for the local trend forecasting. The experimentalanalysis on real datasets demonstrates that TreNet outperforms individual recurrent neural network,convolutional neural network and a variety of baselines in term of local trend prediction accuracy.The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 definesthe problem to be solved and introduces the notations. In Section 4, we present the proposed TreNet.Section 5 demonstrates the performance of our method and baselines on real datasets. Finally, thepaper is concluded in Section 6. Refer to Section 7 and Section 8 for more experiment results anddiscussion.2Under review as a conference paper at ICLR 20172 R ELATED WORKTraditional learning approaches over local trends of time series mainly make use of Hidden MarkovModels (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain short-term state de-pendences, i.e., the memoryless Markov property and predefined number of states, which requiressignificant task specific knowledge. RNNs instead use high dimensional, distributed hidden statesthat could take into account long-term dependencies in sequence data. Previous time series seg-mentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achievinga meaningful segmentation and finding patterns, rather than modeling the relation in segments andtherefore are not suitable for forecasting local trends. Multi-step ahead prediction is another wayto realize local trend prediction by fitting the predicted values to estimate the local trend. However,multi-step ahead prediction is a non-trivial problem itself (Chang et al., 2012a). 
In this paper, we concentrate on directly learning local trends through neural networks.

RNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, is particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. Lipton et al. (2015) use LSTMs to recognize patterns in multivariate time series, especially for multi-label classification of diagnoses. Chauhan & Vig (2015) and Malhotra et al. (2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTM (Graves & Schmidhuber, 2005) is usually intended for speech processing rather than time series forecasting problems. Our paper focuses on using LSTM to capture the dependency in the sequence of historical local trends; meanwhile, the hidden states of the LSTM are further used to learn joint feature representations for local trend forecasting.

CNN is often used to learn effective representations of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). Hammerla et al. (2016), Yang et al. (2015) and Lea et al. (2016) make use of CNNs to extract features from raw time series data for activity/action recognition. Liu et al. (2015) focus on the prediction of periodical time series values by using CNN and embedding time series with the potential neighbors in the temporal domain. Our proposed TreNet will combine the strengths of both LSTM and CNN and form a novel and unified neural network architecture for local trend forecasting.

Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, for tasks such as image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016a), protein structure prediction (Li & Yu, 2016), action recognition (Ballas et al., 2015; Donahue et al., 2015) and so on. But efficient exploitation of such hybrid architectures has not been well studied for time series data, especially for the trend forecasting problem. Li & Yu (2016) and Ballas et al. (2015) utilize CNNs over images in cascade with RNNs in order to capture temporal features for classification. Bashivan et al. (2015) transform EEG data into a sequence of topology-preserving multi-spectral images and then train a cascaded convolutional-recurrent network over such images for EEG classification. Wang et al. (2016a) and Mao et al. (2014) propose CNN-RNN frameworks to learn a shared representation for image captioning and classification problems. In our proposed TreNet, the LSTM and CNN first respectively learn the trend evolution and the local raw data of the time series, and then TreNet fuses the features captured by LSTM and CNN to predict the trend.

3 PROBLEM FORMULATION

In this section, we provide the formal definition of the trend learning and forecasting problem in this paper.

We define a time series as a sequence of data points X = \{x_1, \ldots, x_T\}, where each data point x_t is real-valued and the subscript t represents the time instant. The corresponding local trend sequence of X is a series of piecewise linear representations of X, denoted by \mathcal{T} = \{\langle \ell_k, s_k \rangle\}.
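To illustrate how X is turned into the pairs \langle \ell_k, s_k \rangle, here is a small sketch that assumes the segment boundaries have already been produced by a segmentation routine such as the one in Keogh et al. (2001); the fixed boundaries in the toy example are purely illustrative, and the slope is reported as an angle, matching the convention the paper adopts in Section 5.

import numpy as np

def trend_sequence(x, breakpoints):
    # x: 1-D array of raw values; breakpoints: sorted segment boundaries,
    # e.g. produced by a sliding-window / bottom-up segmentation
    # (Keogh et al., 2001).  Returns T = [(duration_k, slope_k), ...],
    # with the slope expressed as an angle in degrees.
    trends = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        t = np.arange(lo, hi)
        slope, _ = np.polyfit(t, x[lo:hi], deg=1)   # least-squares line fit
        trends.append((hi - lo, np.degrees(np.arctan(slope))))
    return trends

# toy usage: two fixed-width segments over a noisy up-then-down signal
x = np.concatenate([np.linspace(0, 5, 50), np.linspace(5, 2, 30)])
x += 0.1 * np.random.randn(len(x))
print(trend_sequence(x, [0, 50, 80]))   # ~[(50, positive angle), (30, negative angle)]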
Each element of \mathcal{T}, e.g., \langle \ell_k, s_k \rangle, describes a linear function over a certain subsequence (or segment) of X and corresponds to a local trend in X. Such local trends in \mathcal{T} are extracted from X by time series segmentation and fitting a linear function w.r.t. time t over each segment (Keogh et al., 2001; Wang et al., 2011). \ell_k and s_k respectively represent the duration and slope of trend k. \ell_k is measured in terms of the time range covered by trend k. Local trends in \mathcal{T} are time ordered and non-overlapping. The durations of all the local trends in \mathcal{T} satisfy \sum_k \ell_k = T. In addition, a local trend sequence ending by time t is denoted by \mathcal{T}(t) = \{\langle \ell_k, s_k \rangle \mid \sum_k \ell_k \le t\}.

Meanwhile, as we discussed in Section 1, the local raw data of a time series affects the varying of the trend as well, and thus we define the local data w.r.t. a certain time instant t as a sequence of data points in a window of size w, denoted by \mathcal{L}(t) = \{x_{t-w}, \ldots, x_t\}.

At a certain time t, trend forecasting is meant to predict the duration and slope of the following trend based on a given sequence of historical trends \mathcal{T}(t) and local data set \mathcal{L}(t). The predicted duration and slope at time t are denoted by \hat{\ell}_t and \hat{s}_t. Our proposed TreNet can be trained for predicting either \hat{\ell}_t or \hat{s}_t. For simplicity, we use \hat{y}_t to represent the predicted value of TreNet throughout the paper.

Therefore, given the training dataset D = X \cup \mathcal{T}, we aim to propose a neural network based approach to learn a function \hat{y}_t = f(\mathcal{T}(t), \mathcal{L}(t)) for the trend forecasting. In this paper, we focus on univariate time series. The proposed method can be naturally generalized to multivariate time series as well by augmenting the input to the neural network. Refer to Section 8 for more discussion.

4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING

In this section, we first present an overview of the proposed TreNet for trend forecasting. Then we detail the components of TreNet.

Overview.
The idea of TreNet is to combine CNN with LSTM to utilize their representation abilities on different aspects of the training data D (D = X \cup \mathcal{T}) and then to learn a joint feature for the trend prediction. Technically, TreNet is designed to learn a predictive function \hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))). R(\mathcal{T}(t)) is derived by training the LSTM over the sequence \mathcal{T} to capture the dependency in the trend evolution, while C(\mathcal{L}(t)) corresponds to local features extracted by the CNN from \mathcal{L}(t). The long-term and local features captured by LSTM and CNN, i.e., R(\mathcal{T}(t)) and C(\mathcal{L}(t)), convey complementary information pertaining to the trend variation. Therefore, the feature fusion layer is supposed to take advantage of both features to produce a fused representation for improved performance. Finally, the trend prediction is realized by the function f(\cdot, \cdot), which corresponds to the feature fusion and output layers in Figure 2.

Figure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour)

Learning the dependency in the trend sequence.
During the training phase, the duration \ell_k and slope s_k of each local trend k in the sequence \mathcal{T} are fed into the LSTM layer of TreNet. Each j-th neuron in the LSTM layer maintains a memory c_k^j at step k. The output h_k^j, or the activation of this neuron, is then expressed as (Hochreiter & Schmidhuber, 1997; Chung et al., 2014):

h_k^j = o_k^j \tanh(c_k^j)    (1)

where o_k^j is an output gate calculated as:

o_k^j = \sigma(W_o [\ell_k\, s_k] + U_o h_{k-1} + V_o c_k)^j    (2)

where [\ell_k\, s_k] is the concatenation of the duration and slope of trend k, h_{k-1} and c_k are the vectorizations of the activations \{h_{k-1}^j\} and \{c_k^j\}, and \sigma is a logistic sigmoid function. Then, the memory cell c_k^j is updated through partially forgetting the existing memory and adding a new memory content \tilde{c}_k^j:

c_k^j = f_k^j c_{k-1}^j + i_k^j \tilde{c}_k^j, \quad \tilde{c}_k^j = \tanh(W_c [\ell_k\, s_k] + U_c h_{k-1})^j    (3)

The extent to which the existing memory is forgotten is modulated by a forget gate f_k^j, and the degree to which the new memory content is added to the memory cell is modulated by an input gate i_k^j. These gates are computed by:

f_k^j = \sigma(W_f [\ell_k\, s_k] + U_f h_{k-1} + V_f c_{k-1})^j    (4)

i_k^j = \sigma(W_i [\ell_k\, s_k] + U_i h_{k-1} + V_i c_{k-1})^j    (5)

At each step k, the hidden activation h_k is the output to the feature fusion layer. Specifically, given a \mathcal{T}(t) containing n local trends (i.e., |\mathcal{T}(t)| = n), the output of R(\mathcal{T}(t)) is R(\mathcal{T}(t)) = h_n.

Learning features from the local raw data of time series.
When the k-th trend in \mathcal{T} is fed to the LSTM, the corresponding local raw time series data input to the CNN part of TreNet is \mathcal{L}(t), where t = \sum_{i=1}^{k} \ell_i. The CNN consists of H stacked layers of 1-d convolutional, activation and pooling operations. Denote by a^i the input signal of layer i; thus at the first layer a^1 = \mathcal{L}(t). Each layer has a specified number of filters n_i of a specified filter size d_i. Each filter on a layer sweeps through the entire input signal to extract local features as follows:

v_m^{i,j} = \phi\Big(b^{i,j} + \sum_{z=m-d_i/2}^{m+d_i/2} W_z^{i,j} a_z^i\Big), \quad \forall m = 1, \ldots, |a^i|    (6)

where v_m^{i,j} is the activation of the j-th filter of layer i at position m of the input signal. Here \phi is the Leaky Rectified Linear Unit, which is shown to perform better (Xu et al., 2015). Then max-pooling is performed over the v_m^{i,j} of each filter. Finally, the output of the CNN in TreNet is the concatenation of the max-pooling of each filter on the last layer H, namely:

C(\mathcal{L}(t)) = [p^1, \ldots, p^{n_H}], \quad p^j = \Big[\max_{1 \le z \le q}(\{v_{m+z}^{H,j}\})\Big], \quad \forall j = 1, \ldots, n_H    (7)

where q is the pooling size.

Feature fusion and output layers.
The feature fusion layer combines the representations R(\mathcal{T}(t)) and C(\mathcal{L}(t)) to form a joint feature. Then, this joint feature is fed to the output layer to provide the trend prediction. Particularly, we first map R(\mathcal{T}(t)) and C(\mathcal{L}(t)) to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully-connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:

\hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) = W^o \underbrace{\phi(W^r R(\mathcal{T}(t)) + W^c C(\mathcal{L}(t)))}_{\text{feature fusion}} + b^o    (8)

where \phi(\cdot) is the element-wise leaky ReLU activation function and + denotes element-wise addition. W^o and b^o are the weights and bias of the output layer.

To train TreNet, we adopt the squared error function plus a regularization term:

J(W, b; \mathcal{T}, X) = \frac{1}{|\mathcal{T}|} \sum_{k=1}^{|\mathcal{T}|} (\hat{y}_k - y_k)^2 + \lambda \lVert W \rVert_2    (9)

where W and b represent the weight and bias parameters in TreNet, \lambda is a hyperparameter for the regularization term, and y_k is the true value of the trend slope or duration. The cost function is differentiable, and the architecture of TreNet allows the gradients from the loss function (9) to be backpropagated to both the LSTM and CNN parts. TreNet can be trained respectively for the slope and duration of local trends using \mathcal{T} and X.
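A compact PyTorch sketch of the architecture in Eqs. (1)-(8) and the objective of Eq. (9) is given below. It follows the layer sizes reported in Section 5 (two 1-d convolutional layers with 32 filters of sizes 2 and 4, 600 LSTM cells, leaky-ReLU fusion, dropout of 0.5 and an L2 weight of 5 x 10^-4), but the intermediate pooling size and the fusion width are our assumptions; this is an illustrative sketch, not the authors' code.

import torch
import torch.nn as nn

class TreNet(nn.Module):
    # LSTM over <duration, slope> pairs, 1-d CNN over the local window L(t),
    # additive feature fusion, and a linear output layer.
    def __init__(self, fusion=600):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=600, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=2), nn.LeakyReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 32, kernel_size=4), nn.LeakyReLU(),
            nn.AdaptiveMaxPool1d(1))          # max-pool each filter -> C(L(t))
        self.Wr = nn.Linear(600, fusion)      # project R(T(t)) into fusion space
        self.Wc = nn.Linear(32, fusion)       # project C(L(t)) into fusion space
        self.act = nn.LeakyReLU()
        self.drop = nn.Dropout(0.5)
        self.out = nn.Linear(fusion, 1)       # predicts the slope OR the duration

    def forward(self, trends, local):
        # trends: (B, n, 2) historical <l_k, s_k>; local: (B, 1, w) raw window
        _, (h_n, _) = self.lstm(trends)
        r = h_n[-1]                           # R(T(t)) = h_n, shape (B, 600)
        c = self.cnn(local).squeeze(-1)       # shape (B, 32)
        fused = self.act(self.Wr(r) + self.Wc(c))   # Eq. (8), element-wise addition
        return self.out(self.drop(fused))

# Squared-error objective of Eq. (9); Adam's weight_decay plays the role of
# the lambda * ||W|| regularizer here.
model = TreNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
loss_fn = nn.MSELoss()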
When performing forecasting, \mathcal{T}(t) and \mathcal{L}(t) are fed to TreNet, and the prediction value \hat{y}_k can be either the slope or the duration depending on the training target.

5 EXPERIMENTAL ANALYSIS

In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet by comparing it to a variety of baselines. Due to the page limit, refer to Section 7 for more experiment results.

5.1 EXPERIMENT SETUP

Dataset: We test our method and baselines on three real time series datasets.

Daily Household Power Consumption (HousePC). This dataset [2] contains measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments.

Gas Sensor (GasSensor). This dataset [3] contains the recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurement was constructed by the continuous acquisition of the sensor array signals for a duration of about 12 hours without interruption. We mainly use the gas mixture time series regarding Ethylene and Methane in air.

Stock Transaction (Stock): This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in the New York Stock Exchange from 1950-10 to 2016-4.

[2] https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
[3] https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures

All datasets are preprocessed by (Keogh et al., 2001) to extract local trends. Alternative time series segmentation and local trend extraction approaches could be used as well; we choose (Keogh et al., 2001) here due to its high efficiency. In total, we obtain 42591, 4720 and 1316 local trends respectively from the above datasets. For the ease of experimental result interpretation, the slope of each extracted local trend is represented by the angle of the corresponding linear function and is thus in a bounded value range [-90, 90]. The duration of a local trend is measured by the number of data points within it. The obtained trend sequences and the sets of local data are then split into training (80%), validation (10%) and test (10%) datasets.

Baselines: We compare TreNet with the following six baselines:

CNN. This baseline method predicts the trend by only using a CNN over the set of local raw data of the time series to learn features for the forecasting. The size of the local data is set at w, as defined in Section 3.

LSTM. This method uses an LSTM to learn dependencies in the trend sequence \mathcal{T} and predicts the trend only using the trained LSTM.

Support Vector Regression (SVR). A family of support vector regression based approaches with different kernel methods is used for the trend forecasting. We consider three commonly used kernels (Liu et al., 2015), i.e., Radial Basis kernel (SVRBF), Polynomial kernel (SVPOLY), and Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to such SVR approaches.

Pattern-based Hidden Markov Model (pHMM). (Wang et al., 2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependency in segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend based on the state.

Naive. This is the naive approach which takes the duration and slope of the last trend as the prediction for the next one.

ConvNet+LSTM (CLSTM). It is based on the cascade structure of ConvNet and LSTM in (Bashivan et al., 2015), which feeds the features learnt by a ConvNet over the time series to an LSTM and obtains the prediction from the LSTM.

Dataset     Model     RMSE @ Duration    RMSE @ Slope
HousePC     CNN       27.51              13.56
            LSTM      27.27              13.27
            SVRBF     31.81              12.94
            SVPOLY    31.81              12.93
            SVSIG     31.80              12.93
            pHMM      34.06              26.00
            Naive     39.68              21.17
            CLSTM     25.97              13.77
            TreNet    25.89              12.89
Stock       CNN       18.87              12.78
            LSTM      11.07              8.40
            SVRBF     11.38              7.40
            SVPOLY    11.40              7.42
            SVSIG     11.49              7.41
            pHMM      36.37              8.70
            Naive     11.36              8.58
            CLSTM     9.26               7.31
            TreNet    8.86               6.84
GasSensor   CNN       53.99              11.51
            LSTM      55.77              11.22
            SVRBF     62.81              10.21
            SVPOLY    70.91              10.95
            SVSIG     85.69              11.92
            pHMM      111.62             13.07
            Naive     53.76              10.57
            CLSTM     54.20              14.86
            TreNet    52.28              9.57

Table 1: RMSE of the prediction of local trend duration and slope on each dataset.

Evaluation metric: We evaluate the predictive performance of TreNet and the baselines in terms of Root Mean Square Error (RMSE). The lower the RMSE, the more accurate the predictions.

Training: The training procedure of TreNet and the baselines in our paper follows the schema below. The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers, neurons in each layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers, which have 32 filters of sizes 2 and 4. The number of memory cells in the LSTM is 600. For the baseline CNN and LSTM, we tune the learning rate for each approach from {10^-1, 10^-2, 10^-3, 10^-4, 10^-5} (Sutskever et al., 2013) in order to achieve the least prediction errors, and then fix the learning rate. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the range {300, 600, 900, 1200} to achieve the best performance. We use dropout and L2 regularization to control the capacity of the neural networks and prevent overfitting, and set the values to 0.5 and 5 x 10^-4 respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is chosen to learn the weights in the neural networks.

Regarding the SVR based approaches, we carefully tune the parameters c (error penalty), d (degree of kernel function), and γ (kernel coefficient) for the kernels. Each parameter is selected from the sets c ∈ {10^-5, 10^-4, ..., 1, ..., 10^4, 10^5}, d ∈ {1, 2, 3}, and γ ∈ {10^-5, 10^-4, ..., 1, ..., 10^5} respectively. We iterate through the candidate values of each combination of c, d and γ to train our model, keep the parameters that generate the lowest RMSE on the validation set, and then use them to predict on the test set.

The training datasets of the SVR and pHMM baselines are consistent with that of TreNet.
Likewise, the CNN and LSTM baselines are respectively fed the set of local data and the trend sequence of the same size as TreNet. In addition, since the window size of the local data is tunable, we vary the window size of the local data, i.e. w, over the range {100, 300, 500, 700, 900}, so as to investigate how the size of the local data influences the prediction performance. The results will be presented in Section 5.2. The model's performance on the validation set is evaluated after each epoch of training. Each model is trained for at least 50 epochs. Meanwhile, the training process adopts early stopping if no further improvement in the validation performance shows up after 50 epochs.

5.2 EXPERIMENT RESULTS

Table 1 studies the prediction performance of TreNet and the baselines. For each dataset, the window size of the local data is constant for the approaches (i.e., CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) that take local data as input. The results of each approach are then obtained by tuning the corresponding parameters as described in Section 5.1.

In Table 1, we observe that TreNet consistently outperforms the baselines on the duration and slope prediction, achieving up to around 30% lower errors. This verifies that the hybrid architecture of TreNet can improve the performance by utilizing the information captured by both CNN and LSTM. Specifically, the pHMM method performs worse due to the limited representation capability of HMM. On the slope prediction, SVR based approaches can obtain results comparable to TreNet.

In the following group of experiments, we investigate the effect of the local data size (i.e., w) on the prediction. In particular, we tune the value of the local data size for the approaches whose input features contain local data and observe the prediction errors. Such approaches include CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet. LSTM only consumes the trend sequence and is thus not included. Due to the page limit, we report the results on the HousePC dataset in Table 2 and Table 3. The results on the Stock and GasSensor datasets can be found in Section 7. Baseline Naive has no original time series data as input; CLSTM works on the whole time series and has no local data. Thus they are excluded from this set of experiments.

In Table 2, we observe that, compared to the baselines, TreNet has the lowest errors on the duration prediction across different window sizes. pHMM requires sufficient data points to model the relations of segments and fails to work at window size 100. As the window size increases and more local data points are fed to the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This could be because only a certain amount of the local data has predictive power. The filtering and pooling mechanism enables CNN to focus on the local data having strong predictive power, and thus giving more local data only yields marginal improvements. A similar phenomenon is observed on the slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7.

Window Size    CNN      SVRBF    SVPOLY    SVSIG     pHMM     TreNet
100            29.37    31.48    31.96     31.88     -        25.93
300            27.33    31.17    31.61     31.66     30.03    25.94
500            27.51    31.81    31.81     31.80     34.06    25.89
700            27.41    31.10    31.09     31.11     27.37    25.72
900            27.42    31.28    31.27     31.27     28.45    25.62

Table 2: RMSE of the duration predictions w.r.t. different sizes of local data in the HousePC dataset.

Window Size    CNN      SVRBF    SVPOLY    SVSIG     pHMM     TreNet
100            13.68    12.93    12.9352   12.9346   -        13.14
300            13.60    12.93    12.9346   12.9345   27.75    13.15
500            13.56    12.94    12.9342   12.9346   26.00    12.89
700            13.52    12.93    12.9345   12.9345   35.32    12.86
900            13.60    12.94    12.9350   12.9346   37.60    12.96

Table 3: RMSE of the slope predictions w.r.t. different sizes of local data in the HousePC dataset.

6 CONCLUSION

In this paper we propose TreNet, a novel hybrid neural network to learn and predict the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed utilize the complementary information extracted by CNN and LSTM to enhance the prediction performance. Moreover, the architecture is generic and extensible in that additional exogenous time series can be fed to TreNet, so as to boost the performance and investigate the effect of different data sources on the trend evolution. | Hkz91UeVe | Interesting idea of trend prediction but incomplete baselines and experiments. | 5: Marginally below acceptance threshold | Revision of the review:
The authors did a commendable job of including additional references and baseline experiments.
---
This paper presents a hybrid architecture for time series prediction, focusing on the slope and duration of linear trends. The architecture consists of combining a 1D convnet for local time series and an LSTM for time series of trend descriptors. The convnet and LSTM features are combined into an MLP for predicting either the slope or the duration of the next trend in a 1D time series. The method is evaluated on 3 small datasets.
Summary:
This paper, while relatively well written and presenting an interesting approach, has several methodology flaws that should be addressed with new experiments.
Pros:
The idea of extracting upward or downward trends from time series - although these should ideally be learned rather than extracted by an ad-hoc technique, given that this is a submission for ICLR.
Methodology:
* In section 3, what do you mean by predicting “either [the duration] $\hat l_t$ or [slope] $\hat s_t$” of the trend? Predictions are valid only if those two predictions are done jointly. The two losses should be combined during training.
* In the entire paper, the trend slope and duration need to be predicted jointly. Predicting a time series without specifying the horizon of the prediction is meaningless. If the duration of the trends is short, the time series could go up or down alternatively; if the duration of the trend is long, the slope might be close to zero. Predictions at specific horizons are needed.
* In general, time series prediction for such applications as trading and load forecasting is pointless if no decision is made. A trading strategy would be radically different for short-term, noisy oscillations than for a long-term, stable upward or downward trend. An actual evaluation in terms of trading profit/loss should be added for each of the baselines, including the naïve baselines.
* As mentioned earlier in the pre-review questions, an important baseline is missing: feeding the local time series to the convnet and connecting the convnet directly to the LSTM, without ad-hoc trend extraction.
* The convnet -> LSTM architecture would need an analysis of the convnet filters and trend prediction representation.
* Trend prediction/segmentation by the convnet could be an extra supervised loss.
* The detailed analysis of the trend extraction technique is missing.
* In section 5, the SVM baselines have the local trend and local time series vectors concatenated. Why isn't the same approach used for the LSTM baseline (as a multivariate input), and why does the convnet operate only on the local time series?
* An important, “naïve” baseline is missing: next local trend slope and duration = previous local trend slope and duration.
Missing references:
The related work section is very partial and omits important work in hybrid convnet + LSTM architectures, in particular:
Vinyals, Oriol, Toshev, Alexander, Bengio, Samy, and Erhan, Dumitru. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014.
Donahue, Jeff, Hendricks, Lisa Anne, Guadarrama, Sergio, Rohrbach, Marcus, Venugopalan, Subhashini, Saenko, Kate, and Darrell, Trevor. Long-term recurrent convolutional networks for visual recognition and description. CoRR, abs/1411.4389, 2014.
Karpathy, Andrej, Toderici, George, Shetty, Sanketh, Leung, Thomas, Sukthankar, Rahul, and Fei-Fei, Li. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
The organization of the paper needs improvement:
* Section 3 does not explain the selection of the maximal tolerable variance in each trend segment. The appendix should be moved to the core part of the paper.
* Section 4 is unnecessarily long and gives well known details and equations about convnets and LSTMs. The only variation from standard algorithm descriptions is that $l_k$ and $s_k$ are concatenated. The feature fusion layer can be expressed by a simple MLP on the concatenation of R(T(t)) and C(L(t)). Details could be moved to the appendix.
Additional questions:
* In section 5, how many datapoints are there in each dataset? Listing only the number of local trends is uninformative.
Typos:
* p. 5, top “duration and slop”
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
ryCcJaqgl | ICLR.cc/2017/conference | 2017 | TreNet: Hybrid Neural Networks for Learning the Local Trend in Time Series | ["Tao Lin", "Tian Guo", "Karl Aberer"] | Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market, resource allocation in data centers and load schedule in smart grid. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series. Meanwhile, considering long-range dependencies existing in the sequence of historical local trends, TreNet uses a long-short term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn joint representation from the features captured by CNN and LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM, HMM method and various kernel based baselines on real datasets. | ["trenet", "local trend", "time series", "lstm", "hybrid neural networks", "time series trenet", "intermediate upward", "downward patterns", "time series data"] | ABSTRACTLocal trends of time series characterize the intermediate upward and downwardpatterns of time series. Learning and forecasting the local trend in time series dataplay an important role in many real applications, ranging from investing in thestock market, resource allocation in data centers and load schedule in smart grid.Inspired by the recent successes of neural networks, in this paper we proposeTreNet, a novel end-to-end hybrid neural network that predicts the local trendof time series based on local and global contextual features. TreNet leveragesconvolutional neural networks (CNNs) to extract salient features from local rawdata of time series. Meanwhile, considering long-range dependencies existing inthe sequence of historical local trends, TreNet uses a long-short term memoryrecurrent neural network (LSTM) to capture such dependency. Furthermore, forpredicting the local trend, a feature fusion layer is designed in TreNet to learnjoint representation from the features captured by CNN and LSTM. Our pro-posed TreNet demonstrates its effectiveness by outperforming conventional CNN,LSTM, HMM method and various kernel based baselines on real datasets.1 I NTRODUCTIONTime series, which is a sequence of data points in time order, is being generated in a wide spectrum ofdomains, such as daily fluctuation of the stock market, power consumption records of households,performance monitoring data of clusters in data centres, and so on. In many applications, usersare interested in understanding the evolving trend in time series and forecasting the trend, sincethe conventional prediction on specific data points could deliver very little information about thesemantics and dynamics of the underlying process generating the time series. For instance, timeseries in Figure 1 are from the household power consumption dataset1. Figure 1(a) shows some rawdata points of time series. 
Though point AandBhave approximately the same value, the underlyingsystem is likely to be in two different states when it outputs AandB, becauseAis in an upwardtrend whileBis in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand,even when two points with the similar value are both in the upward trend, e.g, point AandC, thedifferent slopes and durations of the trends where point AandClocate, could also indicate differentstates of the underlying process.Particularly, in this paper we are interested in the local trend of time series which measures the in-termediate local behaviour, i.e., upward or downward pattern of time series that characterized by theslope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over raw datapoints of time series represent the local trends extracted from a real household power consumptiontime series. For the ease of presentation, we will use the term trend and local trend interchangeablyin the rest of the paper. Learning and forecasting local trends are quite useful in a wide range ofapplications. For instance, in the stock market, due to its high volatility and noisy environment,in reality predicting stock price trends is preferred over the prediction of the stock market absolutevalues (Atsalakis & Valavanis, 2009). Predicting the local trend of stock price time series empowersThese two authors contributed equally.1https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption1Under review as a conference paper at ICLR 2017traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009).In the smart energy domain, knowing the predictive local trend of power consumption time se-ries enables energy providers to schedule power supply and maximize energy utilization (Zhao &Magoul `es, 2012).Meanwhile, in recent years neural networks have shown the dramatical power in a wide spectrum ofdomains, e.g., natural language processing, computer vision, speech recognition, time series anal-ysis, etc. (Wang et al., 2016b; Sutskever et al., 2014; Yang et al., 2015; Lipton et al., 2015). Fortime series data, two mainstream architectures, convolutional neural network (CNN) and recurrentneural network (RNN) have been exploited in different time series related tasks, e.g., RNN in timeseries classification (Lipton et al., 2015) and CNN in activity recognition and snippet learning (Liuet al., 2015; Yang et al., 2015). RNN is powerful in discovering the dependency in sequence data(Jain et al., 2014; Graves, 2012) and particularly the Long Short-Term Memory (LSTM) RNN workswell on sequence data with long-term dependencies (Chung et al., 2014; Hochreiter & Schmidhuber,1997) due to the internal memory mechanism. CNN excels in exacting effective representation oflocal salience from raw data of time series by enforcing a local connectivity between neurons. (Yanget al., 2015; Hammerla et al., 2016).Figure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c)Effect of local raw data on the trend forecasting.In this paper, we focus on learning and forecasting the local trends in time series via neural networks.This involves learning different aspects of the data. On one hand, the sequence of historical localtrends describes the long-term contextual information of time series and thus naturally affects theevolution of the following local trend. 
On the other hand, the recent raw data points of the time series (Wang et al., 2011; Batal et al., 2012), which represent the local variation and behaviour of the time series, affect the evolution of the following trend as well, and have particular predictive power for abruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trends 1, 2 and 3 present a continuous upward pattern. When we aim at predicting the subsequent trend at the end of the third local trend, the previous three successive upward trends suggest a probable increasing trend afterwards. However, the local data around the end of the third trend, e.g., the data points in the red circle, indicate that the time series could stabilize and even decrease. The data points after the third trend indeed present a decreasing trend, indicated by the red dotted segment. In this case, the subsequent trend depends more on the local data points. Therefore, it is highly desirable to develop a systematic way to model such hidden and complementary dependencies in time series for the local trend forecasting problem.

To this end, we propose an end-to-end hybrid neural network, referred to as TreNet. In particular, it consists of an LSTM recurrent neural network to capture the long-term dependency in historical local trends, a convolutional neural network to extract local features from local raw data of the time series, and a feature fusion layer that learns a joint representation taking advantage of both features drawn from the CNN and the LSTM. This joint representation is used for the local trend forecasting. The experimental analysis on real datasets demonstrates that TreNet outperforms an individual recurrent neural network, a convolutional neural network and a variety of baselines in terms of local trend prediction accuracy.

The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 defines the problem to be solved and introduces the notations. In Section 4, we present the proposed TreNet. Section 5 demonstrates the performance of our method and the baselines on real datasets. Finally, the paper is concluded in Section 6. Refer to Section 7 and Section 8 for more experiment results and discussion.

2 RELATED WORK

Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain short-term state dependences, i.e., the memoryless Markov property and a predefined number of states, which requires significant task-specific knowledge. RNNs instead use high-dimensional, distributed hidden states that can take into account long-term dependencies in sequence data. Previous time series segmentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achieving a meaningful segmentation and finding patterns, rather than modeling the relations among segments, and are therefore not suitable for forecasting local trends. Multi-step-ahead prediction is another way to realize local trend prediction, by fitting the predicted values to estimate the local trend. However, multi-step-ahead prediction is a non-trivial problem itself (Chang et al., 2012a).
In this paper, we concentrate on directly learning local trends through neural networks.

RNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in the data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, is particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. (Lipton et al., 2015) uses LSTMs to recognize patterns in multivariate time series, especially for multi-label classification of diagnoses. (Chauhan & Vig, 2015; Malhotra et al., 2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTM (Graves & Schmidhuber, 2005) is usually intended for speech processing rather than time series forecasting problems. Our paper focuses on using LSTM to capture the dependency in the sequence of historical local trends; meanwhile, the hidden states of the LSTM are further used to learn joint feature representations for the local trend forecasting.

CNN is often used to learn effective representations of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). (Hammerla et al., 2016; Yang et al., 2015; Lea et al., 2016) make use of CNNs to extract features from raw time series data for activity/action recognition. (Liu et al., 2015) focuses on the prediction of periodical time series values by using CNN and embedding time series with the potential neighbors in the temporal domain. Our proposed TreNet combines the strengths of both LSTM and CNN to form a novel and unified neural network architecture for local trend forecasting.

Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, such as in image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016a), protein structure prediction (Li & Yu, 2016), action recognition (Ballas et al., 2015; Donahue et al., 2015) and so on. But efficient exploitation of such hybrid architectures has not been well studied for time series data, especially for the trend forecasting problem. (Li & Yu, 2016; Ballas et al., 2015) utilize CNNs over images in cascade with RNNs in order to capture the temporal features for classification. (Bashivan et al., 2015) transforms EEG data into a sequence of topology-preserving multi-spectral images and then trains a cascaded convolutional-recurrent network over such images for EEG classification. (Wang et al., 2016a; Mao et al., 2014) propose the CNN-RNN framework to learn a shared representation for image captioning and classification problems. In our proposed TreNet, the LSTM and the CNN first respectively learn from the trend evolution and the local raw data of the time series, and then TreNet fuses the features captured by the LSTM and the CNN to predict the trend.

3 PROBLEM FORMULATION

In this section, we provide the formal definition of the trend learning and forecasting problem in this paper.

We define a time series as a sequence of data points $X = \{x_1, \ldots, x_T\}$, where each data point $x_t$ is real-valued and the subscript $t$ represents the time instant. The corresponding local trend sequence of $X$ is a series of piecewise linear representations of $X$, denoted by $\mathcal{T} = \{\langle \ell_k, s_k \rangle\}$.
Each element of $\mathcal{T}$, e.g., $\langle \ell_k, s_k \rangle$, describes a linear function over a certain subsequence (or segment) of $X$ and corresponds to a local trend in $X$. Such local trends in $\mathcal{T}$ are extracted from $X$ by time series segmentation and fitting a linear function w.r.t. time $t$ over each segment (Keogh et al., 2001; Wang et al., 2011). $\ell_k$ and $s_k$ respectively represent the duration and slope of trend $k$; $\ell_k$ is measured in terms of the time range covered by trend $k$. Local trends in $\mathcal{T}$ are time-ordered and non-overlapping. The durations of all the local trends in $\mathcal{T}$ satisfy $\sum_k \ell_k = T$. In addition, a local trend sequence ending by time $t$ is denoted by $\mathcal{T}(t) = \{\langle \ell_k, s_k \rangle \mid \sum_k \ell_k \le t\}$.

Meanwhile, as discussed in Section 1, the local raw data of a time series affects the variation of the trend as well, and thus we define the local data w.r.t. a certain time instant $t$ as the sequence of data points in a window of size $w$, denoted by $\mathcal{L}(t) = \{x_{t-w}, \ldots, x_t\}$.

At a certain time $t$, trend forecasting means predicting the duration and slope of the following trend based on a given sequence of historical trends $\mathcal{T}(t)$ and the local data set $\mathcal{L}(t)$. The predicted duration and slope at time $t$ are denoted by $\hat{\ell}_t$ and $\hat{s}_t$. Our proposed TreNet can be trained for predicting either $\hat{\ell}_t$ or $\hat{s}_t$. For simplicity, we use $\hat{y}_t$ to represent the predicted value of TreNet throughout the paper.

Therefore, given the training dataset $\mathcal{D} = X \cup \mathcal{T}$, we aim to propose a neural network based approach to learn a function $\hat{y}_t = f(\mathcal{T}(t), \mathcal{L}(t))$ for the trend forecasting. In this paper, we focus on univariate time series. The proposed method can be naturally generalized to multivariate time series as well by augmenting the input to the neural network. Refer to Section 8 for more discussion.

4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING

In this section, we first present an overview of the proposed TreNet for the trend forecasting. Then we detail the components of TreNet.

Overview.

The idea of TreNet is to combine a CNN with an LSTM so as to utilize their representation abilities on different aspects of the training data $\mathcal{D}$ ($\mathcal{D} = X \cup \mathcal{T}$), and then to learn a joint feature for the trend prediction. Technically, TreNet is designed to learn a predictive function $\hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t)))$. $R(\mathcal{T}(t))$ is derived by training the LSTM over the sequence $\mathcal{T}$ to capture the dependency in the trend evolution, while $C(\mathcal{L}(t))$ corresponds to the local features extracted by the CNN from $\mathcal{L}(t)$. The long-term and local features captured by the LSTM and the CNN, i.e., $R(\mathcal{T}(t))$ and $C(\mathcal{L}(t))$, convey complementary information pertaining to the trend variation. Therefore, the feature fusion layer is supposed to take advantage of both features to produce a fused representation for improved performance. Finally, the trend prediction is realized by the function $f(\cdot, \cdot)$, which corresponds to the feature fusion and output layers in Figure 2.

Figure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour)

Learning the dependency in the trend sequence.

During the training phase, the duration $\ell_k$ and slope $s_k$ of each local trend $k$ in the sequence $\mathcal{T}$ are fed into the LSTM layer of TreNet. Each $j$-th neuron in the LSTM layer maintains a memory $c_k^j$ at step $k$.
The output $h_k^j$, i.e., the activation of this neuron, is then expressed as (Hochreiter & Schmidhuber, 1997; Chung et al., 2014):

$h_k^j = o_k^j \tanh(c_k^j) \quad (1)$

where $o_k^j$ is an output gate computed as:

$o_k^j = \sigma(W_o [\ell_k\, s_k] + U_o h_{k-1} + V_o c_k)^j \quad (2)$

where $[\ell_k\, s_k]$ is the concatenation of the duration and slope of trend $k$, $h_{k-1}$ and $c_k$ are the vectorizations of the activations $\{h_{k-1}^j\}$ and $\{c_k^j\}$, and $\sigma$ is the logistic sigmoid function. Then, the memory cell $c_k^j$ is updated through partially forgetting the existing memory and adding a new memory content $\tilde{c}_k^j$:

$c_k^j = f_k^j c_{k-1}^j + i_k^j \tilde{c}_k^j, \quad \tilde{c}_k^j = \tanh(W_c [\ell_k\, s_k] + U_c h_{k-1})^j \quad (3)$

The extent to which the existing memory is forgotten is modulated by a forget gate $f_k^j$, and the degree to which the new memory content is added to the memory cell is modulated by an input gate $i_k^j$. These gates are computed by:

$f_k^j = \sigma(W_f [\ell_k\, s_k] + U_f h_{k-1} + V_f c_{k-1})^j \quad (4)$

$i_k^j = \sigma(W_i [\ell_k\, s_k] + U_i h_{k-1} + V_i c_{k-1})^j \quad (5)$

At each step $k$, the hidden activation $h_k$ is the output to the feature fusion layer. Specifically, given a $\mathcal{T}(t)$ containing $n$ local trends (i.e., $|\mathcal{T}(t)| = n$), the output of $R(\mathcal{T}(t))$ is $R(\mathcal{T}(t)) = h_n$.

Learning features from the local raw data of time series.

When the $k$-th trend in $\mathcal{T}$ is fed to the LSTM, the corresponding local raw time series data input to the CNN part of TreNet is $\mathcal{L}(t)$, where $t = \sum_{i=1}^{k} \ell_i$. The CNN consists of $H$ stacked layers of 1-d convolutional, activation and pooling operations. Denote by $a^i$ the input signal of layer $i$; thus at the first layer $a^1 = \mathcal{L}(t)$. Each layer has a specified number of filters $n_i$ of a specified filter size $d_i$. Each filter on a layer sweeps through the entire input signal to extract local features as follows:

$v_m^{i,j} = \phi\big(b^{i,j} + \sum_{z=m-d_i/2}^{m+d_i/2} W_z^{i,j} a_z^i\big), \quad \forall m = 1, \ldots, |a^i| \quad (6)$

where $v_m^{i,j}$ is the activation of the $j$-th filter of layer $i$ at position $m$ of the input signal. Here $\phi$ is the Leaky Rectified Linear Unit, which is shown to perform better (Xu et al., 2015). Then max-pooling is performed over the $v_m^{i,j}$ of each filter.

Finally, the output of the CNN in TreNet is the concatenation of the max-pooling of each filter on the last layer $H$, namely:

$C(\mathcal{L}(t)) = [p^1, \ldots, p^{n_H}], \quad p^j = [\max_{1 \le z \le q}(\{v_{m+z}^{H,j}\})], \quad \forall j = 1, \ldots, n_H \quad (7)$

where $q$ is the pooling size.

Feature fusion and output layers.

The feature fusion layer combines the representations $R(\mathcal{T}(t))$ and $C(\mathcal{L}(t))$ to form a joint feature. Then, this joint feature is fed to the output layer to provide the trend prediction. In particular, we first map $R(\mathcal{T}(t))$ and $C(\mathcal{L}(t))$ to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully-connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:

$\hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) = W_o \, \phi\big(\underbrace{W_r R(\mathcal{T}(t)) + W_c C(\mathcal{L}(t))}_{\text{feature fusion}}\big) + b_o \quad (8)$

where $\phi(\cdot)$ is the element-wise leaky ReLU activation function and $+$ denotes element-wise addition. $W_o$ and $b_o$ are the weights and bias of the output layer.

To train TreNet, we adopt the squared error function plus a regularization term:

$J(W, b; \mathcal{T}, X) = \frac{1}{|\mathcal{T}|} \sum_{k=1}^{|\mathcal{T}|} (\hat{y}_k - y_k)^2 + \lambda \|W\|_2 \quad (9)$

where $W$ and $b$ represent the weight and bias parameters in TreNet, $\lambda$ is a hyperparameter for the regularization term, and $y_k$ is the true value of the trend slope or duration.

The cost function is differentiable, and the architecture of TreNet allows the gradients from the loss function (9) to be backpropagated to both the LSTM and CNN parts. TreNet can be trained respectively for the slope and the duration of local trends using $\mathcal{T}$ and $X$.
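To make the architecture above concrete, the following is a minimal, illustrative sketch of a TreNet-style model in PyTorch. It is a reconstruction from Equations (1)-(9), not the authors' code: the class layout, the fusion size of 600, the learning rate, and the toy tensors are assumptions, dropout is omitted, and the standard `nn.LSTM` is used in place of the peephole gates ($V_o$, $V_f$, $V_i$) written out in Equations (2)-(5).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreNet(nn.Module):
    """Sketch of TreNet: an LSTM over <duration, slope> trend tuples and a
    1-d CNN over the local raw window, fused by addition as in Eq. (8)."""
    def __init__(self, hidden=600, n_filters=32, fusion=600):
        super().__init__()
        # R(T(t)): LSTM over the trend sequence (input size 2 per step)
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # C(L(t)): two stacked 1-d conv layers (32 filters of sizes 2 and 4,
        # as in Section 5.1), then global max-pooling per filter, Eq. (7)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel_size=2), nn.LeakyReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=4), nn.LeakyReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.w_r = nn.Linear(hidden, fusion)     # maps R(T(t)) into the fusion space
        self.w_c = nn.Linear(n_filters, fusion)  # maps C(L(t)) into the fusion space
        self.out = nn.Linear(fusion, 1)          # predicts slope OR duration

    def forward(self, trends, window):
        # trends: (batch, n, 2) historical trends; window: (batch, 1, w) local data
        _, (h_n, _) = self.lstm(trends)
        r = h_n[-1]                               # R(T(t)) = h_n, Eqs. (1)-(5)
        c = self.cnn(window).squeeze(-1)          # C(L(t)), Eqs. (6)-(7)
        fused = F.leaky_relu(self.w_r(r) + self.w_c(c))   # feature fusion
        return self.out(fused).squeeze(-1)        # prediction, Eq. (8)

# One training step with the squared error of Eq. (9); Adam's weight_decay
# argument roughly plays the role of the lambda * ||W|| penalty.
model = TreNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
trends = torch.randn(8, 10, 2)    # toy batch: 10 historical <l_k, s_k> pairs
window = torch.randn(8, 1, 100)   # toy batch: local window of size w = 100
y = torch.randn(8)                # true slope (or duration) of the next trend
loss = ((model(trends, window) - y) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Note the addition-based fusion, which follows Eq. (8) rather than concatenation; as described above, the same network is trained twice, once with slope targets and once with duration targets.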
When performing forecasting, $\mathcal{T}(t)$ and $\mathcal{L}(t)$ are fed to TreNet, and the prediction value $\hat{y}_k$ can be either the slope or the duration, depending on the training target.

5 EXPERIMENTAL ANALYSIS

In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet by comparing it to a variety of baselines. Due to the page limit, refer to Section 7 for more experiment results.

5.1 EXPERIMENT SETUP

Dataset: We test our method and the baselines on three real time series datasets.

Daily Household Power Consumption (HousePC). This dataset (footnote 2) contains measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments.

Gas Sensor (GasSensor). This dataset (footnote 3) contains the recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurement was constructed by the continuous acquisition of the sensor array signals for a duration of about 12 hours without interruption. We mainly use the gas mixture time series regarding Ethylene and Methane in air.

Stock Transaction (Stock): This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in the New York Stock Exchange from 1950-10 to 2016-4.

All datasets are preprocessed by (Keogh et al., 2001) to extract local trends. Alternative time series segmentation and local trend extraction approaches can be used as well. We choose (Keogh et al., 2001) here due to its high efficiency. In total, we obtain 42591, 4720 and 1316 local trends respectively from the above datasets. For ease of interpretation of the experimental results, the slope of an extracted local trend is represented by the angle of the corresponding linear function and is thus in a bounded value range $[-90, 90]$. The duration of a local trend is measured by the number of data points within it. Then, the obtained trend sequences and the set of local data are split into training (80%), validation (10%) and test (10%) datasets.

Baselines: We compare TreNet with the following six baselines:

CNN. This baseline method predicts the trend by only using a CNN over the set of local raw data of the time series to learn features for the forecasting. The size of the local data is set at $w$ as defined in Section 3.

LSTM. This method uses an LSTM to learn dependencies in the trend sequence $\mathcal{T}$ and predicts the trend only using the trained LSTM.

Support Vector Regression (SVR). A family of support vector regression based approaches with different kernel methods is used for the trend forecasting.
We consider three commonly used kernels (Liu et al., 2015), i.e., the Radial Basis kernel (SVRBF), the Polynomial kernel (SVPOLY) and the Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to these SVR approaches.

2 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
3 https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures

Pattern-based Hidden Markov Model (pHMM). (Wang et al., 2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependency among segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend based on that state.

Naive. This is the naive approach which takes the duration and slope of the last trend as the prediction for the next one.

ConvNet+LSTM (CLSTM). It is based on the cascade structure of ConvNet and LSTM in (Bashivan et al., 2015), which feeds the features learnt by a ConvNet over the time series to an LSTM and obtains the prediction from the LSTM.

Evaluation metric: We evaluate the predictive performance of TreNet and the baselines in terms of Root Mean Square Error (RMSE). The lower the RMSE, the more accurate the predictions.

Dataset    Model   RMSE @ Duration   RMSE @ Slope
HousePC    CNN     27.51             13.56
           LSTM    27.27             13.27
           SVRBF   31.81             12.94
           SVPOLY  31.81             12.93
           SVSIG   31.80             12.93
           pHMM    34.06             26.00
           Naive   39.68             21.17
           CLSTM   25.97             13.77
           TreNet  25.89             12.89
Stock      CNN     18.87             12.78
           LSTM    11.07             8.40
           SVRBF   11.38             7.40
           SVPOLY  11.40             7.42
           SVSIG   11.49             7.41
           pHMM    36.37             8.70
           Naive   11.36             8.58
           CLSTM   9.26              7.31
           TreNet  8.86              6.84
GasSensor  CNN     53.99             11.51
           LSTM    55.77             11.22
           SVRBF   62.81             10.21
           SVPOLY  70.91             10.95
           SVSIG   85.69             11.92
           pHMM    111.62            13.07
           Naive   53.76             10.57
           CLSTM   54.20             14.86
           TreNet  52.28             9.57

Table 1: RMSE of the prediction of local trend duration and slope on each dataset.

Training: The training procedure of TreNet and the baselines follows the schema below. The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers, neurons in each layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers, which have 32 filters of sizes 2 and 4. The number of memory cells in the LSTM is 600. For the baseline CNN and LSTM, we tune the learning rate for each approach over $\{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$ (Sutskever et al., 2013) in order to achieve the lowest prediction errors, and then fix the learning rate. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the range $\{300, 600, 900, 1200\}$ to achieve the best performance. We use dropout and L2 regularization to control the capacity of the neural networks and prevent overfitting, and set their values to $0.5$ and $5 \cdot 10^{-4}$ respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is chosen to learn the weights of the neural networks.

Regarding the SVR based approaches, we carefully tune the parameters $c$ (error penalty), $d$ (degree of the kernel function) and $\gamma$ (kernel coefficient) for the kernels. Each parameter is selected from the sets $c \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^{4}, 10^{5}\}$, $d \in \{1, 2, 3\}$ and $\gamma \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^{5}\}$ respectively. We iterate through the candidate values of each combination of $c$, $d$ and $\gamma$ to train the model, keep the parameters that yield the lowest RMSE on the validation set, and then use them to predict on the test set.

The training datasets of the SVR and pHMM baselines are consistent with that of TreNet.
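As an illustration of the kernel-parameter tuning described above, here is a minimal sketch with scikit-learn. The grids are abbreviated (the paper sweeps $c$ and $\gamma$ over $10^{-5}$ to $10^{5}$), the feature arrays are toy stand-ins for the concatenated trend-sequence and local-window inputs, and `GridSearchCV` with cross-validation replaces the paper's fixed validation split for brevity.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Toy stand-ins for the concatenated trend-sequence + local-window features
# and the slope (or duration) targets.
X_train = np.random.randn(200, 110)
y_train = np.random.randn(200)

param_grid = {
    "C": [1e-2, 1.0, 1e2],       # error penalty c (abbreviated grid)
    "gamma": [1e-3, 1e-1, 1.0],  # kernel coefficient
    "degree": [1, 2, 3],         # only used by the polynomial kernel
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_root_mean_squared_error", cv=3)
search.fit(X_train, y_train)
print(search.best_params_)       # parameters with the lowest RMSE
```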
The CNN and LSTM baselines are likewise fed the set of local data and the trend sequence, respectively, of the same size as for TreNet. In addition, since the window size of the local data is tunable, we vary the window size of the local data, i.e., $w$, over the range $\{100, 300, 500, 700, 900\}$, so as to investigate how the size of the local data influences the prediction performance. The results will be presented in Section 5.2. The model's performance on the validation set will be evaluated after each epoch of training. Each model is trained for at least 50 epochs. Meanwhile, the training process adopts early stopping if no further improvement in the validation performance shows up after 50 epochs.

5.2 EXPERIMENT RESULTS

Table 1 studies the prediction performances of TreNet and the baselines. For each dataset, the window size of the local data is constant for the approaches (i.e., CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) that take local data as input. The results of each approach are obtained by tuning the corresponding parameters as described in Section 5.1.

In Table 1, we observe that TreNet consistently outperforms the baselines on the duration and slope prediction, achieving up to around 30% lower errors. This verifies that the hybrid architecture of TreNet can improve the performance by utilizing the information captured by both the CNN and the LSTM. Specifically, the pHMM method performs worse due to the limited representation capability of HMMs. On the slope prediction, the SVR based approaches can obtain results comparable to TreNet.

In the following group of experiments, we investigate the effect of the local data size (i.e., $w$) on the prediction. In particular, we tune the value of the local data size for the approaches whose input features contain local data and observe the prediction errors. Such approaches include CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet. LSTM only consumes the trend sequence and is thus not included. Due to the page limit, we report the results on the HousePC dataset in Table 2 and Table 3. The results on the Stock and GasSensor datasets can be found in Section 7.

The baseline Naive has no original time series data as input; CLSTM works on the whole time series and has no local data. Thus they are excluded from this set of experiments.

In Table 2, we observe that, compared to the baselines, TreNet has the lowest errors on the duration prediction across different window sizes. pHMM requires sufficient data points to model the relations of segments and fails to work at size 100. As the window size increases and more local data points are fed to the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This could be because only a certain amount of local data has predictive power. The filtering and pooling mechanism enables the CNN to focus on the local data having strong predictive power, and thus providing more local data only gives rise to marginal improvements. A similar phenomenon is observed for the slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7.

Window Size  CNN    SVRBF  SVPOLY  SVSIG  pHMM   TreNet
100          29.37  31.48  31.96   31.88  -      25.93
300          27.33  31.17  31.61   31.66  30.03  25.94
500          27.51  31.81  31.81   31.80  34.06  25.89
700          27.41  31.10  31.09   31.11  27.37  25.72
900          27.42  31.28  31.27   31.27  28.45  25.62

Table 2: RMSE of the duration predictions w.r.t. different sizes of local data in the HousePC dataset
Window Size  CNN    SVRBF  SVPOLY   SVSIG    pHMM   TreNet
100          13.68  12.93  12.9352  12.9346  -      13.14
300          13.60  12.93  12.9346  12.9345  27.75  13.15
500          13.56  12.94  12.9342  12.9346  26.00  12.89
700          13.52  12.93  12.9345  12.9345  35.32  12.86
900          13.60  12.94  12.9350  12.9346  37.60  12.96

Table 3: RMSE of the slope predictions w.r.t. different sizes of local data in the HousePC dataset

6 CONCLUSION

In this paper we propose TreNet, a novel hybrid neural network to learn and predict the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed utilize the complementary information extracted by the CNN and the LSTM to enhance the prediction performance. Moreover, the architecture is generic and extensible in that additional exogenous time series can be fed to TreNet, so as to boost the performance and to investigate the effect of different data sources on the trend evolution. | BkMc3FWEe | Promising architecture but insufficient experiments | 4: Ok but not good enough - rejection | 1) Summary
This paper proposes an end-to-end hybrid architecture to predict the local linear trends of time series. A temporal convnet on raw data extracts short-term features. In parallel, long-term representations are learned via an LSTM on piecewise linear approximations of the time series. Both representations are combined using an MLP with one hidden layer (in two parts, one for each stream), and the entire architecture is trained end-to-end by minimizing (using Adam) the (l2-regularized) Euclidean loss w.r.t. ground truth local trend durations and slopes.
2) Contributions
+ Interesting end-to-end architecture decoupling short-term and long-term representation learning in two separate streams in the first part of the architecture.
+ Comparison to deep and shallow baselines.
3) Suggestions for improvement
Add an LRCN baseline and discussion:
The benefits of decoupling short-term and long-term representation learning need to be assessed by comparing to the popular "long-term recurrent convolutional network" (LRCN) of Donahue et al (https://arxiv.org/abs/1411.4389). This approach stacks an LSTM on top of CNN features and is typically used on time series of video frames for tasks that are more general than local linear trend prediction. Furthermore, LRCN does not require the hand-crafted preprocessing of time series to extract the piecewise linear approximations needed by the LSTM of the TreNet architecture proposed here. Finally, LRCN might be more parameter-efficient, as it does not have the fully connected fusion layers of TreNet (eq. 8).
Add more complex multivariate datasets:
The currently used 3 datasets are limited, especially compared to modern research in representation learning for time series forecasting. For instance, and of particular interest to ICLR, I would suggest investigating future frame prediction on natural video datasets like UCF101 where CNN+LSTM are typically used albeit with a more complex loss (cf. for instance the popular adversarial one of Mathieu et al). Although different from the task of local linear trend prediction, it would be interesting to see how TreNet could be applied to the encoder stage of existing encoder-decoder architectures for frame prediction. It seems that decoupling short term and long term motion representation learning (for instance) could be beneficial in natural videos, as they often contain fast object motions together with slower camera ones.
Clarification about the target variables:
The authors need to clarify whether they handle separately or jointly the duration and slope. The text is ambiguous and seems to suggest training two separate models, one for slope, one for duration, which is particularly puzzling considering that predicting them jointly is in fact much easier (just two output variables instead of one), makes more sense, and is entirely feasible with the current method.
Other parts of the text can be improved too. For instance, the authors can vastly compress the generic description of standard convnet and LSTM equations in section 4, while the preprocessing of the time series needs to appear much earlier.
4) Conclusion
Although the architecture seems promising, the current experiments are too preliminary to validate its usefulness, in particular to existing alternatives like LRCN, which are not compared to. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ryCcJaqgl | ICLR.cc/2017/conference | 2017 | TreNet: Hybrid Neural Networks for Learning the Local Trend in Time Series | ["Tao Lin", "Tian Guo", "Karl Aberer"] | Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market, resource allocation in data centers and load schedule in smart grid. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series. Meanwhile, considering long-range dependencies existing in the sequence of historical local trends, TreNet uses a long-short term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn joint representation from the features captured by CNN and LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM, HMM method and various kernel based baselines on real datasets. | ["trenet", "local trend", "time series", "lstm", "hybrid neural networks", "time series trenet", "intermediate upward", "downward patterns", "time series data"] | ABSTRACT

Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market, resource allocation in data centers and load scheduling in the smart grid. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from the local raw data of time series. Meanwhile, considering the long-range dependencies existing in the sequence of historical local trends, TreNet uses a long short-term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn a joint representation from the features captured by the CNN and the LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM, HMM methods and various kernel based baselines on real datasets.

1 INTRODUCTION

Time series, which is a sequence of data points in time order, is being generated in a wide spectrum of domains, such as the daily fluctuation of the stock market, power consumption records of households, performance monitoring data of clusters in data centres, and so on. In many applications, users are interested in understanding the evolving trend in time series and forecasting the trend, since conventional prediction of specific data points delivers very little information about the semantics and dynamics of the underlying process generating the time series. For instance, the time series in Figure 1 are from the household power consumption dataset (footnote 1). Figure 1(a) shows some raw data points of the time series.
Though points A and B have approximately the same value, the underlying system is likely to be in two different states when it outputs A and B, because A is in an upward trend while B is in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand, even when two points with similar values are both in an upward trend, e.g., points A and C, the different slopes and durations of the trends in which A and C are located can also indicate different states of the underlying process.

Particularly, in this paper we are interested in the local trend of time series, which measures the intermediate local behaviour, i.e., the upward or downward pattern of time series characterized by its slope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over the raw data points of time series represent the local trends extracted from a real household power consumption time series. For ease of presentation, we will use the terms trend and local trend interchangeably in the rest of the paper. Learning and forecasting local trends are quite useful in a wide range of applications. For instance, in the stock market, due to its high volatility and noisy environment, predicting stock price trends is in practice preferred over predicting absolute stock market values (Atsalakis & Valavanis, 2009). Predicting the local trend of stock price time series empowers traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009). In the smart energy domain, knowing the predictive local trend of power consumption time series enables energy providers to schedule power supply and maximize energy utilization (Zhao & Magoulès, 2012).

These two authors contributed equally.
1 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption

Meanwhile, in recent years neural networks have shown dramatic power in a wide spectrum of domains, e.g., natural language processing, computer vision, speech recognition, time series analysis, etc. (Wang et al., 2016b; Sutskever et al., 2014; Yang et al., 2015; Lipton et al., 2015). For time series data, two mainstream architectures, the convolutional neural network (CNN) and the recurrent neural network (RNN), have been exploited in different time series related tasks, e.g., RNN in time series classification (Lipton et al., 2015) and CNN in activity recognition and snippet learning (Liu et al., 2015; Yang et al., 2015). RNN is powerful in discovering the dependency in sequence data (Jain et al., 2014; Graves, 2012), and particularly the Long Short-Term Memory (LSTM) RNN works well on sequence data with long-term dependencies (Chung et al., 2014; Hochreiter & Schmidhuber, 1997) due to its internal memory mechanism. CNN excels at extracting effective representations of local salience from the raw data of time series by enforcing local connectivity between neurons (Yang et al., 2015; Hammerla et al., 2016).

Figure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c) Effect of local raw data on the trend forecasting.

In this paper, we focus on learning and forecasting the local trends in time series via neural networks. This involves learning different aspects of the data. On one hand, the sequence of historical local trends describes the long-term contextual information of the time series and thus naturally affects the evolution of the following local trend.
On the other hand, the recent raw data points of the time series (Wang et al., 2011; Batal et al., 2012), which represent the local variation and behaviour of the time series, affect the evolution of the following trend as well, and have particular predictive power for abruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trends 1, 2 and 3 present a continuous upward pattern. When we aim at predicting the subsequent trend at the end of the third local trend, the previous three successive upward trends suggest a probable increasing trend afterwards. However, the local data around the end of the third trend, e.g., the data points in the red circle, indicate that the time series could stabilize and even decrease. The data points after the third trend indeed present a decreasing trend, indicated by the red dotted segment. In this case, the subsequent trend depends more on the local data points. Therefore, it is highly desirable to develop a systematic way to model such hidden and complementary dependencies in time series for the local trend forecasting problem.

To this end, we propose an end-to-end hybrid neural network, referred to as TreNet. In particular, it consists of an LSTM recurrent neural network to capture the long-term dependency in historical local trends, a convolutional neural network to extract local features from local raw data of the time series, and a feature fusion layer that learns a joint representation taking advantage of both features drawn from the CNN and the LSTM. This joint representation is used for the local trend forecasting. The experimental analysis on real datasets demonstrates that TreNet outperforms an individual recurrent neural network, a convolutional neural network and a variety of baselines in terms of local trend prediction accuracy.

The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 defines the problem to be solved and introduces the notations. In Section 4, we present the proposed TreNet. Section 5 demonstrates the performance of our method and the baselines on real datasets. Finally, the paper is concluded in Section 6. Refer to Section 7 and Section 8 for more experiment results and discussion.

2 RELATED WORK

Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain short-term state dependences, i.e., the memoryless Markov property and a predefined number of states, which requires significant task-specific knowledge. RNNs instead use high-dimensional, distributed hidden states that can take into account long-term dependencies in sequence data. Previous time series segmentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achieving a meaningful segmentation and finding patterns, rather than modeling the relations among segments, and are therefore not suitable for forecasting local trends. Multi-step-ahead prediction is another way to realize local trend prediction, by fitting the predicted values to estimate the local trend. However, multi-step-ahead prediction is a non-trivial problem itself (Chang et al., 2012a).
In this paper, we concentrate on directly learning local trends through neural networks.

RNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in the data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, is particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. (Lipton et al., 2015) uses LSTMs to recognize patterns in multivariate time series, especially for multi-label classification of diagnoses. (Chauhan & Vig, 2015; Malhotra et al., 2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTM (Graves & Schmidhuber, 2005) is usually intended for speech processing rather than time series forecasting problems. Our paper focuses on using LSTM to capture the dependency in the sequence of historical local trends; meanwhile, the hidden states of the LSTM are further used to learn joint feature representations for the local trend forecasting.

CNN is often used to learn effective representations of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). (Hammerla et al., 2016; Yang et al., 2015; Lea et al., 2016) make use of CNNs to extract features from raw time series data for activity/action recognition. (Liu et al., 2015) focuses on the prediction of periodical time series values by using CNN and embedding time series with the potential neighbors in the temporal domain. Our proposed TreNet combines the strengths of both LSTM and CNN to form a novel and unified neural network architecture for local trend forecasting.

Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, such as in image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016a), protein structure prediction (Li & Yu, 2016), action recognition (Ballas et al., 2015; Donahue et al., 2015) and so on. But efficient exploitation of such hybrid architectures has not been well studied for time series data, especially for the trend forecasting problem. (Li & Yu, 2016; Ballas et al., 2015) utilize CNNs over images in cascade with RNNs in order to capture the temporal features for classification. (Bashivan et al., 2015) transforms EEG data into a sequence of topology-preserving multi-spectral images and then trains a cascaded convolutional-recurrent network over such images for EEG classification. (Wang et al., 2016a; Mao et al., 2014) propose the CNN-RNN framework to learn a shared representation for image captioning and classification problems. In our proposed TreNet, the LSTM and the CNN first respectively learn from the trend evolution and the local raw data of the time series, and then TreNet fuses the features captured by the LSTM and the CNN to predict the trend.

3 PROBLEM FORMULATION

In this section, we provide the formal definition of the trend learning and forecasting problem in this paper.

We define a time series as a sequence of data points $X = \{x_1, \ldots, x_T\}$, where each data point $x_t$ is real-valued and the subscript $t$ represents the time instant. The corresponding local trend sequence of $X$ is a series of piecewise linear representations of $X$, denoted by $\mathcal{T} = \{\langle \ell_k, s_k \rangle\}$.
Each element of $\mathcal{T}$, e.g., $\langle \ell_k, s_k \rangle$, describes a linear function over a certain subsequence (or segment) of $X$ and corresponds to a local trend in $X$. Such local trends in $\mathcal{T}$ are extracted from $X$ by time series segmentation and fitting a linear function w.r.t. time $t$ over each segment (Keogh et al., 2001; Wang et al., 2011). $\ell_k$ and $s_k$ respectively represent the duration and slope of trend $k$; $\ell_k$ is measured in terms of the time range covered by trend $k$. Local trends in $\mathcal{T}$ are time-ordered and non-overlapping. The durations of all the local trends in $\mathcal{T}$ satisfy $\sum_k \ell_k = T$. In addition, a local trend sequence ending by time $t$ is denoted by $\mathcal{T}(t) = \{\langle \ell_k, s_k \rangle \mid \sum_k \ell_k \le t\}$.

Meanwhile, as discussed in Section 1, the local raw data of a time series affects the variation of the trend as well, and thus we define the local data w.r.t. a certain time instant $t$ as the sequence of data points in a window of size $w$, denoted by $\mathcal{L}(t) = \{x_{t-w}, \ldots, x_t\}$.

At a certain time $t$, trend forecasting means predicting the duration and slope of the following trend based on a given sequence of historical trends $\mathcal{T}(t)$ and the local data set $\mathcal{L}(t)$. The predicted duration and slope at time $t$ are denoted by $\hat{\ell}_t$ and $\hat{s}_t$. Our proposed TreNet can be trained for predicting either $\hat{\ell}_t$ or $\hat{s}_t$. For simplicity, we use $\hat{y}_t$ to represent the predicted value of TreNet throughout the paper.

Therefore, given the training dataset $\mathcal{D} = X \cup \mathcal{T}$, we aim to propose a neural network based approach to learn a function $\hat{y}_t = f(\mathcal{T}(t), \mathcal{L}(t))$ for the trend forecasting. In this paper, we focus on univariate time series. The proposed method can be naturally generalized to multivariate time series as well by augmenting the input to the neural network. Refer to Section 8 for more discussion.

4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING

In this section, we first present an overview of the proposed TreNet for the trend forecasting. Then we detail the components of TreNet.

Overview.

The idea of TreNet is to combine a CNN with an LSTM so as to utilize their representation abilities on different aspects of the training data $\mathcal{D}$ ($\mathcal{D} = X \cup \mathcal{T}$), and then to learn a joint feature for the trend prediction. Technically, TreNet is designed to learn a predictive function $\hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t)))$. $R(\mathcal{T}(t))$ is derived by training the LSTM over the sequence $\mathcal{T}$ to capture the dependency in the trend evolution, while $C(\mathcal{L}(t))$ corresponds to the local features extracted by the CNN from $\mathcal{L}(t)$. The long-term and local features captured by the LSTM and the CNN, i.e., $R(\mathcal{T}(t))$ and $C(\mathcal{L}(t))$, convey complementary information pertaining to the trend variation. Therefore, the feature fusion layer is supposed to take advantage of both features to produce a fused representation for improved performance. Finally, the trend prediction is realized by the function $f(\cdot, \cdot)$, which corresponds to the feature fusion and output layers in Figure 2.

Figure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour)

Learning the dependency in the trend sequence.

During the training phase, the duration $\ell_k$ and slope $s_k$ of each local trend $k$ in the sequence $\mathcal{T}$ are fed into the LSTM layer of TreNet. Each $j$-th neuron in the LSTM layer maintains a memory $c_k^j$ at step $k$.
The output $h_k^j$, i.e., the activation of this neuron, is then expressed as (Hochreiter & Schmidhuber, 1997; Chung et al., 2014):

$h_k^j = o_k^j \tanh(c_k^j) \quad (1)$

where $o_k^j$ is an output gate computed as:

$o_k^j = \sigma(W_o [\ell_k\, s_k] + U_o h_{k-1} + V_o c_k)^j \quad (2)$

where $[\ell_k\, s_k]$ is the concatenation of the duration and slope of trend $k$, $h_{k-1}$ and $c_k$ are the vectorizations of the activations $\{h_{k-1}^j\}$ and $\{c_k^j\}$, and $\sigma$ is the logistic sigmoid function. Then, the memory cell $c_k^j$ is updated through partially forgetting the existing memory and adding a new memory content $\tilde{c}_k^j$:

$c_k^j = f_k^j c_{k-1}^j + i_k^j \tilde{c}_k^j, \quad \tilde{c}_k^j = \tanh(W_c [\ell_k\, s_k] + U_c h_{k-1})^j \quad (3)$

The extent to which the existing memory is forgotten is modulated by a forget gate $f_k^j$, and the degree to which the new memory content is added to the memory cell is modulated by an input gate $i_k^j$. These gates are computed by:

$f_k^j = \sigma(W_f [\ell_k\, s_k] + U_f h_{k-1} + V_f c_{k-1})^j \quad (4)$

$i_k^j = \sigma(W_i [\ell_k\, s_k] + U_i h_{k-1} + V_i c_{k-1})^j \quad (5)$

At each step $k$, the hidden activation $h_k$ is the output to the feature fusion layer. Specifically, given a $\mathcal{T}(t)$ containing $n$ local trends (i.e., $|\mathcal{T}(t)| = n$), the output of $R(\mathcal{T}(t))$ is $R(\mathcal{T}(t)) = h_n$.

Learning features from the local raw data of time series.

When the $k$-th trend in $\mathcal{T}$ is fed to the LSTM, the corresponding local raw time series data input to the CNN part of TreNet is $\mathcal{L}(t)$, where $t = \sum_{i=1}^{k} \ell_i$. The CNN consists of $H$ stacked layers of 1-d convolutional, activation and pooling operations. Denote by $a^i$ the input signal of layer $i$; thus at the first layer $a^1 = \mathcal{L}(t)$. Each layer has a specified number of filters $n_i$ of a specified filter size $d_i$. Each filter on a layer sweeps through the entire input signal to extract local features as follows:

$v_m^{i,j} = \phi\big(b^{i,j} + \sum_{z=m-d_i/2}^{m+d_i/2} W_z^{i,j} a_z^i\big), \quad \forall m = 1, \ldots, |a^i| \quad (6)$

where $v_m^{i,j}$ is the activation of the $j$-th filter of layer $i$ at position $m$ of the input signal. Here $\phi$ is the Leaky Rectified Linear Unit, which is shown to perform better (Xu et al., 2015). Then max-pooling is performed over the $v_m^{i,j}$ of each filter.

Finally, the output of the CNN in TreNet is the concatenation of the max-pooling of each filter on the last layer $H$, namely:

$C(\mathcal{L}(t)) = [p^1, \ldots, p^{n_H}], \quad p^j = [\max_{1 \le z \le q}(\{v_{m+z}^{H,j}\})], \quad \forall j = 1, \ldots, n_H \quad (7)$

where $q$ is the pooling size.

Feature fusion and output layers.

The feature fusion layer combines the representations $R(\mathcal{T}(t))$ and $C(\mathcal{L}(t))$ to form a joint feature. Then, this joint feature is fed to the output layer to provide the trend prediction. In particular, we first map $R(\mathcal{T}(t))$ and $C(\mathcal{L}(t))$ to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully-connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:

$\hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) = W_o \, \phi\big(\underbrace{W_r R(\mathcal{T}(t)) + W_c C(\mathcal{L}(t))}_{\text{feature fusion}}\big) + b_o \quad (8)$

where $\phi(\cdot)$ is the element-wise leaky ReLU activation function and $+$ denotes element-wise addition. $W_o$ and $b_o$ are the weights and bias of the output layer.

To train TreNet, we adopt the squared error function plus a regularization term:

$J(W, b; \mathcal{T}, X) = \frac{1}{|\mathcal{T}|} \sum_{k=1}^{|\mathcal{T}|} (\hat{y}_k - y_k)^2 + \lambda \|W\|_2 \quad (9)$

where $W$ and $b$ represent the weight and bias parameters in TreNet, $\lambda$ is a hyperparameter for the regularization term, and $y_k$ is the true value of the trend slope or duration.

The cost function is differentiable, and the architecture of TreNet allows the gradients from the loss function (9) to be backpropagated to both the LSTM and CNN parts. TreNet can be trained respectively for the slope and the duration of local trends using $\mathcal{T}$ and $X$.
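To make the architecture above concrete, the following is a minimal, illustrative sketch of a TreNet-style model in PyTorch. It is a reconstruction from Equations (1)-(9), not the authors' code: the class layout, the fusion size of 600, the learning rate, and the toy tensors are assumptions, dropout is omitted, and the standard `nn.LSTM` is used in place of the peephole gates ($V_o$, $V_f$, $V_i$) written out in Equations (2)-(5).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreNet(nn.Module):
    """Sketch of TreNet: an LSTM over <duration, slope> trend tuples and a
    1-d CNN over the local raw window, fused by addition as in Eq. (8)."""
    def __init__(self, hidden=600, n_filters=32, fusion=600):
        super().__init__()
        # R(T(t)): LSTM over the trend sequence (input size 2 per step)
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # C(L(t)): two stacked 1-d conv layers (32 filters of sizes 2 and 4,
        # as in Section 5.1), then global max-pooling per filter, Eq. (7)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel_size=2), nn.LeakyReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=4), nn.LeakyReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.w_r = nn.Linear(hidden, fusion)     # maps R(T(t)) into the fusion space
        self.w_c = nn.Linear(n_filters, fusion)  # maps C(L(t)) into the fusion space
        self.out = nn.Linear(fusion, 1)          # predicts slope OR duration

    def forward(self, trends, window):
        # trends: (batch, n, 2) historical trends; window: (batch, 1, w) local data
        _, (h_n, _) = self.lstm(trends)
        r = h_n[-1]                               # R(T(t)) = h_n, Eqs. (1)-(5)
        c = self.cnn(window).squeeze(-1)          # C(L(t)), Eqs. (6)-(7)
        fused = F.leaky_relu(self.w_r(r) + self.w_c(c))   # feature fusion
        return self.out(fused).squeeze(-1)        # prediction, Eq. (8)

# One training step with the squared error of Eq. (9); Adam's weight_decay
# argument roughly plays the role of the lambda * ||W|| penalty.
model = TreNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
trends = torch.randn(8, 10, 2)    # toy batch: 10 historical <l_k, s_k> pairs
window = torch.randn(8, 1, 100)   # toy batch: local window of size w = 100
y = torch.randn(8)                # true slope (or duration) of the next trend
loss = ((model(trends, window) - y) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Note the addition-based fusion, which follows Eq. (8) rather than concatenation; as described above, the same network is trained twice, once with slope targets and once with duration targets.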
When performing forecasting, $\mathcal{T}(t)$ and $\mathcal{L}(t)$ are fed to TreNet, and the prediction value $\hat{y}_k$ can be either the slope or the duration, depending on the training target.

5 EXPERIMENTAL ANALYSIS

In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet by comparing it to a variety of baselines. Due to the page limit, refer to Section 7 for more experiment results.

5.1 EXPERIMENT SETUP

Dataset: We test our method and the baselines on three real time series datasets.

Daily Household Power Consumption (HousePC). This dataset (footnote 2) contains measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments.

Gas Sensor (GasSensor). This dataset (footnote 3) contains the recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurement was constructed by the continuous acquisition of the sensor array signals for a duration of about 12 hours without interruption. We mainly use the gas mixture time series regarding Ethylene and Methane in air.

Stock Transaction (Stock): This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in the New York Stock Exchange from 1950-10 to 2016-4.

All datasets are preprocessed by (Keogh et al., 2001) to extract local trends. Alternative time series segmentation and local trend extraction approaches can be used as well. We choose (Keogh et al., 2001) here due to its high efficiency. In total, we obtain 42591, 4720 and 1316 local trends respectively from the above datasets. For ease of interpretation of the experimental results, the slope of an extracted local trend is represented by the angle of the corresponding linear function and is thus in a bounded value range $[-90, 90]$. The duration of a local trend is measured by the number of data points within it. Then, the obtained trend sequences and the set of local data are split into training (80%), validation (10%) and test (10%) datasets.

Baselines: We compare TreNet with the following six baselines:

CNN. This baseline method predicts the trend by only using a CNN over the set of local raw data of the time series to learn features for the forecasting. The size of the local data is set at $w$ as defined in Section 3.

LSTM. This method uses an LSTM to learn dependencies in the trend sequence $\mathcal{T}$ and predicts the trend only using the trained LSTM.

Support Vector Regression (SVR). A family of support vector regression based approaches with different kernel methods is used for the trend forecasting.
We consider three commonly used kernels (Liu et al., 2015), i.e., the Radial Basis kernel (SVRBF), the Polynomial kernel (SVPOLY) and the Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to these SVR approaches.

2 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
3 https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures

Pattern-based Hidden Markov Model (pHMM). (Wang et al., 2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependency among segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend based on that state.

Naive. This is the naive approach which takes the duration and slope of the last trend as the prediction for the next one.

ConvNet+LSTM (CLSTM). It is based on the cascade structure of ConvNet and LSTM in (Bashivan et al., 2015), which feeds the features learnt by a ConvNet over the time series to an LSTM and obtains the prediction from the LSTM.

Evaluation metric: We evaluate the predictive performance of TreNet and the baselines in terms of Root Mean Square Error (RMSE). The lower the RMSE, the more accurate the predictions.

Dataset    Model   RMSE @ Duration   RMSE @ Slope
HousePC    CNN     27.51             13.56
           LSTM    27.27             13.27
           SVRBF   31.81             12.94
           SVPOLY  31.81             12.93
           SVSIG   31.80             12.93
           pHMM    34.06             26.00
           Naive   39.68             21.17
           CLSTM   25.97             13.77
           TreNet  25.89             12.89
Stock      CNN     18.87             12.78
           LSTM    11.07             8.40
           SVRBF   11.38             7.40
           SVPOLY  11.40             7.42
           SVSIG   11.49             7.41
           pHMM    36.37             8.70
           Naive   11.36             8.58
           CLSTM   9.26              7.31
           TreNet  8.86              6.84
GasSensor  CNN     53.99             11.51
           LSTM    55.77             11.22
           SVRBF   62.81             10.21
           SVPOLY  70.91             10.95
           SVSIG   85.69             11.92
           pHMM    111.62            13.07
           Naive   53.76             10.57
           CLSTM   54.20             14.86
           TreNet  52.28             9.57

Table 1: RMSE of the prediction of local trend duration and slope on each dataset.

Training: The training procedure of TreNet and the baselines follows the schema below. The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers, neurons in each layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers, which have 32 filters of sizes 2 and 4. The number of memory cells in the LSTM is 600. For the baseline CNN and LSTM, we tune the learning rate for each approach over $\{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$ (Sutskever et al., 2013) in order to achieve the lowest prediction errors, and then fix the learning rate. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the range $\{300, 600, 900, 1200\}$ to achieve the best performance. We use dropout and L2 regularization to control the capacity of the neural networks and prevent overfitting, and set their values to $0.5$ and $5 \cdot 10^{-4}$ respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is chosen to learn the weights of the neural networks.

Regarding the SVR based approaches, we carefully tune the parameters $c$ (error penalty), $d$ (degree of the kernel function) and $\gamma$ (kernel coefficient) for the kernels. Each parameter is selected from the sets $c \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^{4}, 10^{5}\}$, $d \in \{1, 2, 3\}$ and $\gamma \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^{5}\}$ respectively. We iterate through the candidate values of each combination of $c$, $d$ and $\gamma$ to train the model, keep the parameters that yield the lowest RMSE on the validation set, and then use them to predict on the test set.

The training datasets of the SVR and pHMM baselines are consistent with that of TreNet.
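As an illustration of the kernel-parameter tuning described above, here is a minimal sketch with scikit-learn. The grids are abbreviated (the paper sweeps $c$ and $\gamma$ over $10^{-5}$ to $10^{5}$), the feature arrays are toy stand-ins for the concatenated trend-sequence and local-window inputs, and `GridSearchCV` with cross-validation replaces the paper's fixed validation split for brevity.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Toy stand-ins for the concatenated trend-sequence + local-window features
# and the slope (or duration) targets.
X_train = np.random.randn(200, 110)
y_train = np.random.randn(200)

param_grid = {
    "C": [1e-2, 1.0, 1e2],       # error penalty c (abbreviated grid)
    "gamma": [1e-3, 1e-1, 1.0],  # kernel coefficient
    "degree": [1, 2, 3],         # only used by the polynomial kernel
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_root_mean_squared_error", cv=3)
search.fit(X_train, y_train)
print(search.best_params_)       # parameters with the lowest RMSE
```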
The CNN and LSTM baselines are likewise fed the set of local data and the trend sequence, respectively, of the same size as for TreNet. In addition, since the window size of the local data is tunable, we vary the window size of the local data, i.e., $w$, over the range $\{100, 300, 500, 700, 900\}$, so as to investigate how the size of the local data influences the prediction performance. The results will be presented in Section 5.2. The model's performance on the validation set will be evaluated after each epoch of training. Each model is trained for at least 50 epochs. Meanwhile, the training process adopts early stopping if no further improvement in the validation performance shows up after 50 epochs.

5.2 EXPERIMENT RESULTS

Table 1 studies the prediction performances of TreNet and the baselines. For each dataset, the window size of the local data is constant for the approaches (i.e., CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) that take local data as input. The results of each approach are obtained by tuning the corresponding parameters as described in Section 5.1.

In Table 1, we observe that TreNet consistently outperforms the baselines on the duration and slope prediction, achieving up to around 30% lower errors. This verifies that the hybrid architecture of TreNet can improve the performance by utilizing the information captured by both the CNN and the LSTM. Specifically, the pHMM method performs worse due to the limited representation capability of HMMs. On the slope prediction, the SVR based approaches can obtain results comparable to TreNet.

In the following group of experiments, we investigate the effect of the local data size (i.e., $w$) on the prediction. In particular, we tune the value of the local data size for the approaches whose input features contain local data and observe the prediction errors. Such approaches include CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet. LSTM only consumes the trend sequence and is thus not included. Due to the page limit, we report the results on the HousePC dataset in Table 2 and Table 3. The results on the Stock and GasSensor datasets can be found in Section 7.

The baseline Naive has no original time series data as input; CLSTM works on the whole time series and has no local data. Thus they are excluded from this set of experiments.

In Table 2, we observe that, compared to the baselines, TreNet has the lowest errors on the duration prediction across different window sizes. pHMM requires sufficient data points to model the relations of segments and fails to work at size 100. As the window size increases and more local data points are fed to the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This could be because only a certain amount of local data has predictive power. The filtering and pooling mechanism enables the CNN to focus on the local data having strong predictive power, and thus providing more local data only gives rise to marginal improvements. A similar phenomenon is observed for the slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7.

Window Size  CNN    SVRBF  SVPOLY  SVSIG  pHMM   TreNet
100          29.37  31.48  31.96   31.88  -      25.93
300          27.33  31.17  31.61   31.66  30.03  25.94
500          27.51  31.81  31.81   31.80  34.06  25.89
700          27.41  31.10  31.09   31.11  27.37  25.72
900          27.42  31.28  31.27   31.27  28.45  25.62

Table 2: RMSE of the duration predictions w.r.t. different sizes of local data in the HousePC dataset
Window Size    CNN      SVRBF    SVPOLY     SVSIG      pHMM     TreNet
100            13.68    12.93    12.9352    12.9346    -        13.14
300            13.60    12.93    12.9346    12.9345    27.75    13.15
500            13.56    12.94    12.9342    12.9346    26.00    12.89
700            13.52    12.93    12.9345    12.9345    35.32    12.86
900            13.60    12.94    12.9350    12.9346    37.60    12.96

Table 3: RMSE of the slope predictions w.r.t. different sizes of local data on the HousePC dataset.

6 CONCLUSION

In this paper we propose TreNet, a novel hybrid neural network to learn and predict the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed utilize the complementary information extracted by the CNN and the LSTM to enhance prediction performance. Moreover, the architecture is generic and extensible in that additional exogenous time series can be fed to TreNet, so as to boost performance and to investigate the effect of different data sources on the evolution of trends. | ryuUMeMNe | Intriguing problems and architecture but proposed approach not fully justified | 6: Marginally above acceptance threshold | Updated review: the authors did an admirable job of responding to and incorporating reviewer feedback. In particular, they put a lot of effort into additional experiments, even incorporating a new and much stronger baseline (the ConvNet -> LSTM baseline requested by multiple reviewers). I still have two lingering concerns previously stated -- that each model's architecture (# hidden units, etc.) should be tuned independently and that pure time series forecasting baselines (without the trend preprocessing) should be tried. I'm going to bump up my score from a clear rejection to a borderline.
-----
This paper is concerned with time series prediction problems for which the prediction targets include the slope and duration of upcoming local trends. This setting is of great interest in several real world problem settings (e.g., financial markets) where decisions (e.g., buy or sell) are often driven by local changes and trends. The primary challenge in these problems is distinguishing true changes and trends (i.e., a downturn in share price) from noise. The authors tackle this with an interesting hybrid architecture (TreNet) with four parts: (1) preprocessing to extract trends, (2) an LSTM that accepts those trends as inputs to ostensibly capture long term dependencies, (3) a ConvNet that accepts a local window of raw data as its input at each time step, and (4) a higher "feature fusion" (i.e., dense) layer to combine the LSTM's and ConvNet's outputs. On three univariate time series data sets, the TreNet outperforms the competing baselines including those based on its constituent parts (LSTM + trend inputs, CNN).
Strengths:
- A very interesting problem setting that can plausibly be argued to differ from other sequential modeling problems in deep learning (e.g., video classification). This is a nice example of fairly thoughtful task-driven machine learning.
- Accepting the author's assumptions as true for the moment, the proposed architecture seems intuitive and well-designed.
Weaknesses:
- Although this is an interesting problem setting (decisions driven by trends and changes), the authors did not make a strong argument for why they formulated the machine learning task as they did. Trend targets are not provided from "on high" (by a data oracle) but extracted from raw data using a deterministic algorithm. Thus, one could just as easily formulate this as a plain time series forecasting problem in which we forecast the next 100 steps and then apply the trend extractor to convert those predictions into a trend. If the forecasts are accurate, so will be the extracted trends.
- The proposed architecture, while interesting, is not justified, in particular the choice to feed the extracted trends and raw data into separate LSTM and ConvNet layers that are only combined at the end by a shallow MLP. An equally straightforward but more intuitive choice would have been to feed the output of the ConvNet into the LSTM, perhaps augmented by the trend input. Without a solid rationale, this unconventional choice comes across as arbitrary.
- Following up on that point, the raw->ConvNet->LSTM and {raw->ConvNet,trends}->LSTM architectures are natural baselines for experiments.
- The paper presupposes, rather than argues, the value of the extracted trends and durations as inputs. It is not unreasonable to think that, with enough training data, a sufficiently powerful ConvNet->LSTM architecture should be able to learn to detect these trends in raw data, if they are predictive.
- Following up on that point, two other obvious baselines that were omitted: raw->LSTM and {raw->ConvNet,trends}->MLP. Basically, the authors propose a complex architecture without demonstrating the value of each part (trend extraction, LSTM, ConvNet, MLP). The baselines are unnecessarily weak.
One thing I am uncertain about in general: the validity of the practice of using the same LSTM and ConvNet architectures in both the baselines and the TreNet. This *sounds* like an apples-to-apples comparison, but in the world of hyperparameter tuning, it could in fact disadvantage either. It seems like a more thorough approach would be to optimize each architecture independently.
Regarding related work and baselines: I think it is fair to limit the scope of in-depth analysis and experiments to a set of reasonable, representative baselines, at least in a conference paper submitted to a deep learning conference. That said, the authors ignored a large body of work on financial time series modeling using probabilistic models and related techniques. This is another way to frame the above "separate trends from noise" problem: treat the observations as noisy. One semi-recent example: J. Hernandez-Lobato, J. Lloyd, and D. Hernandez-Lobato. Gaussian process conditional copulas with applications to financial time series. NIPS 2013.
I appreciate this research direction in general, but at the moment, I believe that the work described in this manuscript is not suitable for inclusion at ICLR. My policy for interactive review is to keep an open mind and willingness to change my score, but a large revision is unlikely. I would encourage the authors to instead use their time and energy -- and reviewer feedback -- in order to prepare for a future conference deadline (e.g., ICML). | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hk95PK9le | ICLR.cc/2017/conference | 2017 | Deep Biaffine Attention for Neural Dependency Parsing | ["Timothy Dozat", "Christopher D. Manning"] | This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with
biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.
| ["Natural language processing", "Deep learning"] | ABSTRACTThis paper builds off recent work from Kiperwasser & Goldberg (2016) usingneural attention in a simple graph-based dependency parser. We use a larger butmore thoroughly regularized parser than other recent BiLSTM-based approaches,with biaffine classifiers to predict arcs and labels. Our parser gets state of the art ornear state of the art performance on standard treebanks for six different languages,achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset.This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and com-parable to the highest performing transition-based parser (Kuncoro et al., 2016),which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameterchoices had a significant effect on parsing accuracy, allowing us to achieve largegains over other graph-based approaches.1 I NTRODUCTIONDependency parsers—which annotate sentences in a way designed to be easy for humans and com-puters alike to understand—have been found to be extremely useful for a sizable number of NLPtasks, especially those involving natural language understanding in some way (Bowman et al., 2016;Angeli et al., 2015; Levy & Goldberg, 2014; Toutanova et al., 2016; Parikh et al., 2015). How-ever, frequent incorrect parses can severely inhibit final performance, so improving the quality ofdependency parsers is needed for the improvement and success of these downstream tasks.The current state-of-the-art transition-based neural dependency parser (Kuncoro et al., 2016) sub-stantially outperforms many much simpler neural graph-based parsers. We modify the neural graph-based approach first proposed by Kiperwasser & Goldberg (2016) in a few ways to achieve com-petitive performance: we build a network that’s larger but uses more regularization; we replace thetraditional MLP-based attention mechanism and affine label classifier with biaffine ones; and ratherthan using the top recurrent states of the LSTM in the biaffine transformations, we first put themthrough MLP operations that reduce their dimensionality. Furthermore, we compare models trainedwith different architectures and hyperparameters to motivate our approach empirically. The result-ing parser maintains most of the simplicity of neural graph-based approaches while approaching theperformance of the SOTA transition-based one.2 B ACKGROUND AND RELATED WORKTransition-based parsers—such as shift-reduce parsers—parse sentences from left to right, main-taining a “buffer” of words that have not yet been parsed and a “stack” of words whose head has notbeen seen or whose dependents have not all been fully parsed. At each step, transition-based parserscan access and manipulate the stack and buffer and assign arcs from one word to another. One canthen train any multi-class machine learning classifier on features extracted from the stack, buffer,and previous arc actions in order to predict the next action.Chen & Manning (2014) make the first successful attempt at incorporating deep learning into atransition-based dependency parser. At each step, the (feedforward) network assigns a probability toeach action the parser can take based on word, tag, and label embeddings from certain words on the1Published as a conference paper at ICLR 2017root /ROOT Casey/NNP hugged/VBD Kim/NNProotnsubj dobjFigure 1: A dependency tree parse for Casey hugged Kim , including part-of-speech tags and a specialroot token. 
A number of other researchers have attempted to address some limitations of the Chen & Manning parser by augmenting it with additional complexity: Weiss et al. (2015) and Andor et al. (2016) augment it with a beam search and a conditional random field loss objective to allow the parser to "undo" previous actions once it finds evidence that they may have been incorrect; and Dyer et al. (2015) and Kuncoro et al. (2016) instead use LSTMs to represent the stack and buffer, getting state-of-the-art performance by building in a way of composing parsed phrases together.

Transition-based parsing processes a sentence sequentially to build up a parse tree one arc at a time. Consequently, these parsers don't use machine learning for directly predicting edges; they use it for predicting the operations of the transition algorithm. Graph-based parsers, by contrast, use machine learning to assign a weight or probability to each possible edge and then construct a maximum spanning tree (MST) from these weighted edges. Kiperwasser & Goldberg (2016) present a neural graph-based parser (in addition to a transition-based one) that uses the same kind of attention mechanism as Bahdanau et al. (2014) for machine translation. In Kiperwasser & Goldberg's 2016 model, the (bidirectional) LSTM's recurrent output vector for each word is concatenated with each possible head's recurrent vector, and the result is used as input to an MLP that scores each resulting arc. The predicted tree structure at training time is the one where each word depends on its highest-scoring head. Labels are generated analogously, with each word's recurrent output vector and its gold or predicted head word's recurrent vector being used in a multi-class MLP.

Similarly, Hashimoto et al. (2016) include a graph-based dependency parser in their multi-task neural model. In addition to training the model with multiple distinct objectives, they replace the traditional MLP-based attention mechanism that Kiperwasser & Goldberg (2016) use with a bilinear one (but still using an MLP label classifier). This makes it analogous to Luong et al.'s 2015 proposed attention mechanism for neural machine translation. Cheng et al. (2016) likewise propose a graph-based neural dependency parser, but in a way that attempts to circumvent the limitation of other neural graph-based parsers being unable to condition the scores of each possible arc on previous parsing decisions. In addition to having one bidirectional recurrent network that computes a recurrent hidden vector for each word, they have additional, unidirectional recurrent networks (left-to-right and right-to-left) that keep track of the probabilities of each previous arc, and use these together to predict the scores for the next arc.

3 PROPOSED DEPENDENCY PARSER

3.1 DEEP BIAFFINE ATTENTION

We make a few modifications to the graph-based architectures of Kiperwasser & Goldberg (2016), Hashimoto et al. (2016), and Cheng et al.
(2016), shown in Figure 2: we use biaffine attention instead of bilinear or traditional MLP-based attention; we use a biaffine dependency label classifier; and we apply dimension-reducing MLPs to each recurrent output vector r_i before applying the biaffine transformation.[1]

[1] In this paper we follow the convention of using lowercase italic letters for scalars and indices, lowercase bold letters for vectors, uppercase italic letters for matrices, and uppercase bold letters for higher order tensors. We also maintain this notation when indexing; so row i of matrix R would be represented as r_i.

[Figure 2: BiLSTM with deep biaffine attention to score each possible head for each dependent, applied to the sentence "Casey hugged Kim". Embeddings x_i feed a BiLSTM whose outputs r_i pass through MLPs producing h_i^{(arc-dep)} and h_i^{(arc-head)}, which the biaffine transformation U^{(arc)} combines into the arc scores S^{(arc)}. We reverse the order of the biaffine transformation here for clarity.]

The choice of biaffine rather than bilinear or MLP mechanisms makes the classifiers in our model analogous to traditional affine classifiers, which use an affine transformation over a single LSTM output state r_i (or other vector input) to predict the vector of scores s_i for all classes (1). We can think of the proposed biaffine attention mechanism as being a traditional affine classifier, but using a (d × d) linear transformation of the stacked LSTM output, R U^{(1)}, in place of the weight matrix W and a (d × 1) transformation R u^{(2)} for the bias term b (2).

    s_i = W r_i + b                                   Fixed-class affine classifier       (1)
    s_i^{(arc)} = R U^{(1)} r_i + R u^{(2)}           Variable-class biaffine classifier  (2)

In addition to being arguably simpler than the MLP-based approach (involving one bilinear layer rather than two linear layers and a nonlinearity), this has the conceptual advantage of directly modeling both the prior probability of a word j receiving any dependents in the term r_j^T u^{(2)} and the likelihood of j receiving a specific dependent i in the term r_j^T U^{(1)} r_i. Analogously, we also use a biaffine classifier to predict dependency labels given the gold or predicted head y_i (3).

    s_i^{(label)} = r_{y_i}^T U^{(1)} r_i + (r_{y_i} \oplus r_i)^T U^{(2)} + b    Fixed-class biaffine classifier  (3)

This likewise directly models each of the prior probability of each class, the likelihood of a class given just word i (how probable a word is to take a particular label), the likelihood of a class given just the head word y_i (how probable a word is to take dependents with a particular label), and the likelihood of a class given both word i and its head (how probable a word is to take a particular label given that word's head).

Applying smaller MLPs to the recurrent output states before the biaffine classifier has the advantage of stripping away information not relevant to the current decision. That is, every top recurrent state r_i will need to carry enough information to identify word i's head, find all its dependents, exclude all its non-dependents, assign itself the correct label, and assign all its dependents their correct labels, as well as transfer any relevant information to the recurrent states of words before and after it. Thus r_i necessarily contains significantly more information than is needed to compute any individual score, and training on this superfluous information needlessly reduces parsing speed and increases the risk of overfitting. Reducing dimensionality and applying a nonlinearity (4-6) addresses both of these problems.
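Before moving to the deep variant, here is a minimal NumPy sketch of the variable-class biaffine arc scorer in (2); the shapes, the random stand-ins for the BiLSTM states R, and the greedy decoding line are illustrative assumptions, not the authors' code.

```python
import numpy as np

n, d = 10, 800                       # words (incl. ROOT) and stacked BiLSTM state size
rng = np.random.default_rng(0)
R = rng.standard_normal((n, d))      # r_i stacked row-wise; stand-in for real BiLSTM output

U1 = rng.standard_normal((d, d))     # (d x d) weight of the bilinear term
u2 = rng.standard_normal(d)          # (d x 1) weight of the head-prior term

# Eq. (2) for all dependents at once:
# S[i, j] = r_j^T U^(1) r_i + r_j^T u^(2), the score of word j as head of word i.
S = (R @ U1 @ R.T).T + (R @ u2)[None, :]

heads = S.argmax(axis=1)             # greedy heads; the MST algorithm enforces a tree at test time
print(heads)
```

Note that the head-prior term r_j^T u^(2) is the same for every dependent i, which is why it broadcasts across rows of the score matrix.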
We call this a deep bilinear attention mechanism, as opposed to shallow bilinear attention, which uses the recurrent states directly.

    h_i^{(arc-dep)} = MLP^{(arc-dep)}(r_i)                                          (4)
    h_j^{(arc-head)} = MLP^{(arc-head)}(r_j)                                        (5)
    s_i^{(arc)} = H^{(arc-head)} U^{(1)} h_i^{(arc-dep)} + H^{(arc-head)} u^{(2)}   (6)

We apply MLPs to the recurrent states before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time is the one where each word is a dependent of its highest scoring head (although at test time we ensure that the parse is a well-formed tree via the MST algorithm).

3.2 HYPERPARAMETER CONFIGURATION

Param              Value           Param                Value
Embedding size     100             Embedding dropout    33%
LSTM size          400             LSTM dropout         33%
Arc MLP size       500             Arc MLP dropout      33%
Label MLP size     100             Label MLP dropout    33%
LSTM depth         3               MLP depth            1
α                  2e-3            β1, β2               .9
Annealing          .75^(t/5000)    t_max                50,000

Table 1: Model hyperparameters

Aside from architectural differences between ours and the other graph-based parsers, we make a number of hyperparameter choices that allow us to outperform theirs, laid out in Table 1. We use 100-dimensional uncased word vectors[2] and POS tag vectors; three BiLSTM layers (400 dimensions in each direction); and 500- and 100-dimensional ReLU MLP layers. We also apply dropout at every stage of the model: we drop words and tags (independently); we drop nodes in the LSTM layers (input and recurrent connections), applying the same dropout mask at every recurrent timestep (cf. the Bayesian dropout of Gal & Ghahramani (2015)); and we drop nodes in the MLP layers and classifiers, likewise applying the same dropout mask at every timestep. We optimize the network with annealed Adam (Kingma & Ba, 2014) for about 50,000 steps, rounded up to the nearest epoch.

4 EXPERIMENTS & RESULTS

4.1 DATASETS

We show test results for the proposed model on the English Penn Treebank, converted into Stanford Dependencies using both version 3.3.0 and version 3.5.0 of the Stanford Dependency converter (PTB-SD 3.3.0 and PTB-SD 3.5.0); the Chinese Penn Treebank; and the CoNLL 09 shared task dataset,[3] following standard practices for each dataset. We omit punctuation from evaluation only for the PTB-SD and CTB. For the English PTB-SD datasets, we use POS tags generated from the Stanford POS tagger (Toutanova et al., 2003); for the Chinese PTB dataset we use gold tags; and for the CoNLL 09 dataset we use the provided predicted tags. Our hyperparameter search was done with the PTB-SD 3.5.0 validation dataset in order to minimize overfitting to the more popular PTB-SD 3.3.0 benchmark, and in our hyperparameter analysis in the following section we report performance on the PTB-SD 3.5.0 test set, shown in Tables 2 and 3.

4.2 HYPERPARAMETER CHOICES

4.2.1 ATTENTION MECHANISM

We examined the effect of different classifier architectures on accuracy and performance. What we see is that the deep bilinear model outperforms the others with respect to both speed and accuracy. The model with shallow bilinear arc and label classifiers gets the same unlabeled performance as the deep model with the same settings, but because the label classifier is much larger ((801 × c × 801) as opposed to (101 × c × 101)), it runs much slower and overfits. One way to decrease this overfitting is by increasing the MLP dropout, but that of course doesn't change parsing speed; another way is to decrease the recurrent size to 300, but this hinders unlabeled accuracy without increasing parsing speed up to the same levels as our deeper model.
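As a companion to the shallow sketch above, the deep classifier of (4)-(6) can be sketched by first reducing each recurrent state with role-specific one-layer ReLU MLPs (arc MLP size 500, per Table 1); the weights and sizes here are again illustrative placeholders rather than the released implementation.

```python
import numpy as np

def relu_mlp(X, W, b):
    # One-layer ReLU MLP applied row-wise (MLP depth 1, as in Table 1).
    return np.maximum(0.0, X @ W + b)

n, d, k = 10, 800, 500               # words, BiLSTM state size, arc MLP size
rng = np.random.default_rng(0)
R = rng.standard_normal((n, d))

# Eqs. (4)-(5): separate dimension-reducing MLPs for the dependent and head roles.
Wd, bd = 0.05 * rng.standard_normal((d, k)), np.zeros(k)
Wh, bh = 0.05 * rng.standard_normal((d, k)), np.zeros(k)
H_dep = relu_mlp(R, Wd, bd)          # rows h_i^(arc-dep)
H_head = relu_mlp(R, Wh, bh)         # rows h_j^(arc-head)

# Eq. (6): biaffine scoring over the reduced states,
# S[i, j] = h_j^(arc-head)T U^(1) h_i^(arc-dep) + h_j^(arc-head)T u^(2).
U1 = rng.standard_normal((k, k))
u2 = rng.standard_normal(k)
S = (H_head @ U1 @ H_dep.T).T + (H_head @ u2)[None, :]
```

Because k < d, the (k × k) bilinear weight is much smaller than the (d × d) one in the shallow version, which is the speed and overfitting argument made above.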
We also implemented the MLP-based approach to attention and classification used in Kiperwasser & Goldberg (2016).[4] We found this version to likewise be somewhat slower and significantly underperform the deep biaffine approach in both labeled and unlabeled accuracy.

[2] We compute a "trained" embedding matrix composed of words that occur at least twice in the training dataset and add these embeddings to their corresponding pretrained embeddings. Any words that don't occur in either embedding matrix are replaced with a separate OOV token.
[3] We exclude the Japanese dataset from our evaluation because we do not have access to it.
[4] In the version of TensorFlow we used, the model's memory requirements during training exceeded the available memory on a single GPU when default settings were used, so we reduced the MLP hidden size to 200.

Classifier
Model                UAS      LAS      Sents/sec
Deep                 95.75    94.22    410.91
Shallow              95.74    94.00*   298.99
Shallow, 50% drop    95.73    94.05*   300.04
Shallow, 300d        95.63*   93.86*   373.24
MLP                  95.53*   93.91*   367.44

Size
Model                UAS      LAS      Sents/sec
3 layers, 400d       95.75    94.22    410.91
3 layers, 300d       95.82    94.24    460.01
3 layers, 200d       95.55*   93.89*   469.45
2 layers, 400d       95.62*   93.98*   497.99
4 layers, 400d       95.83    94.22    362.09

Recurrent Cell
Model                UAS      LAS      Sents/sec
LSTM                 95.75    94.22    410.91
GRU                  93.18*   91.08*   435.32
Cif-LSTM             95.67    94.06*   463.25

Table 2: Test accuracy and speed on PTB-SD 3.5.0. Statistically significant differences are marked with an asterisk.

Input Dropout
Model                UAS      LAS
Default              95.75    94.22
No word dropout      95.74    94.08*
No tag dropout       95.28*   93.60*
No tags              95.77    93.91*

Adam
Model                UAS      LAS
β2 = .9              95.75    94.22
β2 = .999            95.53*   93.91*

Table 3: Test accuracy on PTB-SD 3.5.0. Statistically significant differences are marked with an asterisk.

4.2.2 NETWORK SIZE

We also examine more closely how network size influences speed and accuracy. In Kiperwasser & Goldberg's 2016 model, the network uses 2 layers of 125-dimensional bidirectional LSTMs; in Hashimoto et al.'s 2016 model, it has one layer of 100-dimensional bidirectional LSTMs dedicated to parsing (two lower layers are also trained on other objectives); and Cheng et al.'s 2016 model has one layer of 368-dimensional GRU cells. We find that using three or four layers gets significantly better performance than two layers, and increasing the LSTM sizes from 200 to 300 or 400 dimensions likewise significantly improves performance.[5]

4.2.3 RECURRENT CELL

GRU cells have been promoted as a faster and simpler alternative to LSTM cells, and are used in the approach of Cheng et al. (2016); however, in our model they drastically underperformed LSTM cells. We also implemented the coupled input-forget gate LSTM cells (Cif-LSTM) suggested by Greff et al. (2015),[6] finding that while the resulting model still slightly underperforms the more popular LSTM cells, the difference between the two is much smaller. Additionally, because the gate and candidate cell activations can be computed simultaneously with one matrix multiplication, the Cif-LSTM model is faster than the GRU version even though they have the same number of parameters.
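For concreteness, here is a possible NumPy sketch of one step of such a coupled input-forget gate cell, with the input gate tied to 1 - f and the candidate's tanh removed per footnote [6]; this is my reading of the cell as summarized here, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cif_lstm_step(x, h, c, W, b):
    """One step of a coupled input-forget gate LSTM.

    The forget, output, and candidate activations come from a single matrix
    multiplication; the input gate is tied to 1 - f, and no tanh is applied
    to the candidate (the coupled gate keeps the cell state bounded enough)."""
    z = np.concatenate([x, h]) @ W + b          # one matmul for all three blocks
    f, o, g = np.split(z, 3)
    f, o = sigmoid(f), sigmoid(o)
    c_new = f * c + (1.0 - f) * g               # coupled gates: i = 1 - f
    h_new = o * np.tanh(c_new)                  # output gate can sparsify h
    return h_new, c_new

# Shapes: input size m, hidden size k -> W is ((m + k) x 3k), b is (3k,).
m, k = 100, 400
rng = np.random.default_rng(0)
W, b = 0.1 * rng.standard_normal((m + k, 3 * k)), np.zeros(3 * k)
h = c = np.zeros(k)
for x in rng.standard_normal((5, m)):           # a toy 5-step sequence
    h, c = cif_lstm_step(x, h, c, W, b)
```

The single concatenated matmul is what makes this cell cheaper per step than a GRU with the same parameter count, as noted above.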
We hypothesize that the output gate in the Cif-LSTM model allows it to maintain a sparse recurrent output state, which helps it adapt to the high levels of dropout needed to prevent overfitting in a way that GRU cells are unable to do.

[5] The model with 400-dimensional recurrent states significantly outperforms the 300-dimensional one on the validation set, but not on the test set.
[6] In addition to using a coupled input-forget gate, we remove the first tanh nonlinearity, which is no longer needed when using a coupled gate.

                                               English PTB-SD 3.3.0    Chinese PTB 5.1
Type         Model                             UAS      LAS            UAS      LAS
Transition   Ballesteros et al. (2016)         93.56    91.42          87.65    86.21
             Andor et al. (2016)               94.61    92.79          -        -
             Kuncoro et al. (2016)             95.8     94.6           -        -
Graph        Kiperwasser & Goldberg (2016)     93.9     91.9           87.6     86.1
             Cheng et al. (2016)               94.10    91.49          88.1     85.7
             Hashimoto et al. (2016)           94.67    92.90          -        -
             Deep Biaffine                     95.74    94.08          89.30    88.23

Table 4: Results on the English PTB and Chinese PTB parsing datasets

                Catalan           Chinese           Czech
Model           UAS      LAS      UAS      LAS      UAS      LAS
Andor et al.    92.67    89.83    84.72    80.85    88.94    84.56
Deep Biaffine   94.69    92.02    88.90    85.38    92.08    87.38

                English           German            Spanish
Model           UAS      LAS      UAS      LAS      UAS      LAS
Andor et al.    93.22    91.23    90.91    89.15    92.62    89.95
Deep Biaffine   95.21    93.20    93.46    91.44    94.34    91.65

Table 5: Results on the CoNLL '09 shared task datasets

4.2.4 EMBEDDING DROPOUT

Because we increase the parser's power, we also have to increase its regularization. In addition to using relatively extreme dropout in the recurrent and MLP layers mentioned in Table 1, we also regularize the input layer. We drop 33% of words and 33% of tags during training: when one is dropped the other is scaled by a factor of two to compensate, and when both are dropped together, the model simply gets an input of zeros. Models trained with only word or tag dropout but not both wind up significantly overfitting, hindering label accuracy and—in the latter case—attachment accuracy. Interestingly, not using any tags at all actually results in better performance than using tags without dropout.

4.2.5 OPTIMIZER

We choose to optimize with Adam (Kingma & Ba, 2014), which (among other things) keeps a moving average of the L2 norm of the gradient for each parameter throughout training and divides the gradient for each parameter by this moving average, ensuring that the magnitude of the gradients will on average be close to one. However, we find that the value for β2 recommended by Kingma & Ba—which controls the decay rate for this moving average—is too high for this task (and we suspect more generally). When this value is very large, the magnitude of the current update is heavily influenced by the larger magnitude of gradients very far in the past, with the effect that the optimizer can't adapt quickly to recent changes in the model. Thus we find that setting β2 to .9 instead of .999 makes a large positive impact on final performance.
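To make the role of β2 concrete, here is a bare-bones NumPy version of the standard Adam update; lowering beta2 to 0.9 shortens the memory of the squared-gradient average, so the effective step size tracks recent gradients, as argued above. This is the textbook update rule, not the authors' training code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=2e-3, beta1=0.9, beta2=0.9, eps=1e-8):
    """One Adam update; the paper sets beta2 = 0.9 rather than the default 0.999."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate; beta2 sets its memory
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(3)
m = v = np.zeros(3)
for t in range(1, 6):                            # toy loop with a fixed gradient
    theta, m, v = adam_step(theta, np.array([1.0, -2.0, 0.5]), m, v, t)
print(theta)
```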
4.3 RESULTS

Our model gets nearly the same UAS performance on PTB-SD 3.3.0 as the current SOTA model from Kuncoro et al. (2016) in spite of its substantially simpler architecture, and gets SOTA UAS performance on CTB 5.1[7] as well as SOTA performance on all CoNLL 09 languages. It is worth noting that the CoNLL 09 datasets contain many non-projective dependencies, which are difficult or impossible for transition-based—but not graph-based—parsers to predict. This may account for some of the large, consistent difference between our model and Andor et al.'s 2016 transition-based model applied to these datasets.

[7] We'd like to thank Zhiyang Teng for finding a bug in the original code that affected the CTB 5.1 dataset.

Where our model appears to lag behind the SOTA model is in LAS, indicating one of a few possibilities. Firstly, it may be the result of inefficiencies or errors in the GloVe embeddings or POS tagger, in which case using alternative pretrained embeddings or a more accurate tagger might improve label classification. Secondly, the SOTA model is specifically designed to capture phrasal compositionality; so another possibility is that ours doesn't capture this compositionality as effectively, and that this results in a worse label score. Similarly, it may be the result of a more general limitation of graph-based parsers, which have access to less explicit syntactic information than transition-based parsers when making decisions. Addressing these latter two limitations would require a more innovative architecture than the relatively simple one used in current neural graph-based parsers.

5 CONCLUSION

In this paper we proposed using a modified version of bilinear attention in a neural dependency parser that increases parsing speed without hurting performance. We showed that our larger but more regularized network outperforms other neural graph-based parsers and gets performance comparable to the current SOTA transition-based parser. We also provided empirical motivation for the proposed architecture and configuration over similar ones in the existing literature. Future work will involve exploring ways of bridging the gap between labeled and unlabeled accuracy and augmenting the parser with a smarter way of handling out-of-vocabulary tokens for morphologically richer languages. | BJJU3DWEl | Final review | 6: Marginally above acceptance threshold | This is primarily an engineering paper. The authors find a small architectural modification to prior work and some hyperparameter tuning which pushes up the state-of-the-art in dependency parsing in two languages.
The architecture modification is a biaffine attention mechanism, which was inspired work in neural machine translation by Luong et al. (2015). The proposed attention model appears to be a win-win: better accuracy, reduced memory requirements, and fewer parameters.
The performance of the model is impressive, but how the performance is achieved is not very impressive. I do not believe that there are novel insights in the paper that will generalize to other tasks, nor does the paper shed light on the dependency parsing tasks (e.g., does biaffine attention have a linguistic interpretation?).
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hk95PK9le | ICLR.cc/2017/conference | 2017 | Deep Biaffine Attention for Neural Dependency Parsing | ["Timothy Dozat", "Christopher D. Manning"] | This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with
biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.
| ["Natural language processing", "Deep learning"] | ABSTRACTThis paper builds off recent work from Kiperwasser & Goldberg (2016) usingneural attention in a simple graph-based dependency parser. We use a larger butmore thoroughly regularized parser than other recent BiLSTM-based approaches,with biaffine classifiers to predict arcs and labels. Our parser gets state of the art ornear state of the art performance on standard treebanks for six different languages,achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset.This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and com-parable to the highest performing transition-based parser (Kuncoro et al., 2016),which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameterchoices had a significant effect on parsing accuracy, allowing us to achieve largegains over other graph-based approaches.1 I NTRODUCTIONDependency parsers—which annotate sentences in a way designed to be easy for humans and com-puters alike to understand—have been found to be extremely useful for a sizable number of NLPtasks, especially those involving natural language understanding in some way (Bowman et al., 2016;Angeli et al., 2015; Levy & Goldberg, 2014; Toutanova et al., 2016; Parikh et al., 2015). How-ever, frequent incorrect parses can severely inhibit final performance, so improving the quality ofdependency parsers is needed for the improvement and success of these downstream tasks.The current state-of-the-art transition-based neural dependency parser (Kuncoro et al., 2016) sub-stantially outperforms many much simpler neural graph-based parsers. We modify the neural graph-based approach first proposed by Kiperwasser & Goldberg (2016) in a few ways to achieve com-petitive performance: we build a network that’s larger but uses more regularization; we replace thetraditional MLP-based attention mechanism and affine label classifier with biaffine ones; and ratherthan using the top recurrent states of the LSTM in the biaffine transformations, we first put themthrough MLP operations that reduce their dimensionality. Furthermore, we compare models trainedwith different architectures and hyperparameters to motivate our approach empirically. The result-ing parser maintains most of the simplicity of neural graph-based approaches while approaching theperformance of the SOTA transition-based one.2 B ACKGROUND AND RELATED WORKTransition-based parsers—such as shift-reduce parsers—parse sentences from left to right, main-taining a “buffer” of words that have not yet been parsed and a “stack” of words whose head has notbeen seen or whose dependents have not all been fully parsed. At each step, transition-based parserscan access and manipulate the stack and buffer and assign arcs from one word to another. One canthen train any multi-class machine learning classifier on features extracted from the stack, buffer,and previous arc actions in order to predict the next action.Chen & Manning (2014) make the first successful attempt at incorporating deep learning into atransition-based dependency parser. At each step, the (feedforward) network assigns a probability toeach action the parser can take based on word, tag, and label embeddings from certain words on the1Published as a conference paper at ICLR 2017root /ROOT Casey/NNP hugged/VBD Kim/NNProotnsubj dobjFigure 1: A dependency tree parse for Casey hugged Kim , including part-of-speech tags and a specialroot token. 
Directed edges (or arcs) with labels (or relations) connect the verb to the root and thearguments to the verb head.stack and buffer. A number of other researchers have attempted to address some limitations of Chen& Manning’s Chen & Manning parser by augmenting it with additional complexity: Weiss et al.(2015) and Andor et al. (2016) augment it with a beam search and a conditional random field lossobjective to allow the parser to “undo” previous actions once it finds evidence that they may havebeen incorrect; and Dyer et al. (2015) and (Kuncoro et al., 2016) instead use LSTMs to representthe stack and buffer, getting state-of-the-art performance by building in a way of composing parsedphrases together.Transition-based parsing processes a sentence sequentially to build up a parse tree one arc at atime. Consequently, these parsers don’t use machine learning for directly predicting edges; theyuse it for predicting the operations of the transition algorithm. Graph-based parsers, by contrast,use machine learning to assign a weight or probability to each possible edge and then construct amaximum spaning tree (MST) from these weighted edges. Kiperwasser & Goldberg (2016) present aneural graph-based parser (in addition to a transition-based one) that uses the same kind of attentionmechanism as Bahdanau et al. (2014) for machine translation. In Kiperwasser & Goldberg’s 2016model, the (bidirectional) LSTM’s recurrent output vector for each word is concatenated with eachpossible head’s recurrent vector, and the result is used as input to an MLP that scores each resultingarc. The predicted tree structure at training time is the one where each word depends on its highest-scoring head. Labels are generated analogously, with each word’s recurrent output vector and itsgold or predicted head word’s recurrent vector being used in a multi-class MLP.Similarly, Hashimoto et al. (2016) include a graph-based dependency parser in their multi-task neu-ral model. In addition to training the model with multiple distinct objectives, they replace the tra-ditional MLP-based attention mechanism that Kiperwasser & Goldberg (2016) use with a bilinearone (but still using an MLP label classifier). This makes it analogous to Luong et al.’s 2015 pro-posed attention mechanism for neural machine translation. Cheng et al. (2016) likewise propose agraph-based neural dependency parser, but in a way that attempts to circumvent the limitation ofother neural graph-based parsers being unable to condition the scores of each possible arc on pre-vious parsing decisions. In addition to having one bidirectional recurrent network that computes arecurrent hidden vector for each word, they have additional, unidirectional recurrent networks (left-to-right and right-to-left) that keep track of the probabilities of each previous arc, and use thesetogether to predict the scores for the next arc.3 P ROPOSED DEPENDENCY PARSER3.1 D EEP BIAFFINE ATTENTIONWe make a few modifications to the graph-based architectures of Kiperwasser & Goldberg (2016),Hashimoto et al. (2016), and Cheng et al. 
(2016), shown in Figure 2: we use biaffine attentioninstead of bilinear or traditional MLP-based attention; we use a biaffine dependency label classifier;and we apply dimension-reducing MLPs to each recurrent output vector ribefore applying thebiaffine transformation.1The choice of biaffine rather than bilinear or MLP mechanisms makes theclassifiers in our model analogous to traditional affine classifiers, which use an affine transformationover a single LSTM output state ri(or other vector input) to predict the vector of scores sifor allclasses (1). We can think of the proposed biaffine attention mechanism as being a traditional affine1In this paper we follow the convention of using lowercase italic letters for scalars and indices, lowercasebold letters for vectors, uppercase italic letters for matrices, uppercase bold letters for higher order tensors. Wealso maintain this notation when indexing; so row iof matrixRwould be represented as ri.2Published as a conference paper at ICLR 2017. . .root ROOT Kim NNP1111> =BiLSTM: riEmbeddings: xiMLP: h(arc-dep)i;h(arc-head )iH(arc-dep)1U(arc)H(arc-head )S(arc)Figure 2: BiLSTM with deep biaffine attention to score each possible head for each dependent,applied to the sentence “Casey hugged Kim”. We reverse the order of the biaffine transformationhere for clarity.classifier, but using a (dd)linear transformation of the stacked LSTM output RU(1)in place ofthe weight matrix Wand a (d1)transformation Ru(2)for the bias term b(2).si=Wri+b Fixed-class affine classifier (1)s(arc)i =RU(1)ri+Ru(2)Variable-class biaffine classifier (2)In addition to being arguably simpler than the MLP-based approach (involving one bilinear layerrather than two linear layers and a nonlinearity), this has the conceptual advantage of directly mod-eling both the prior probability of a word jreceiving any dependents in the term r>ju(2)and thelikelihood of jreceiving a specific dependent iin the term r>jU(1)ri. Analogously, we also use abiaffine classifier to predict dependency labels given the gold or predicted head yi(3).s(label )i =r>yiU(1)ri+ (ryiri)>U(2)+b Fixed-class biaffine classifier (3)This likewise directly models each of the prior probability of each class, the likelihood of a classgiven just word i(how probable a word is to take a particular label), the likelihood of a class givenjust the head word yi(how probable a word is to take dependents with a particular label), and thelikelihood of a class given both word iand its head (how probable a word is to take a particular labelgiven that word’s head).Applying smaller MLPs to the recurrent output states before the biaffine classifier has the advantageof stripping away information not relevant to the current decision. That is, every top recurrent stateriwill need to carry enough information to identify word i’s head, find all its dependents, exclude allits non-dependents, assign itself the correct label, and assign all its dependents their correct labels, aswell as transfer any relevant information to the recurrent states of words before and after it. Thus rinecessarily contains significantly more information than is needed to compute any individual score,and training on this superfluous information needlessly reduces parsing speed and increases the riskof overfitting. Reducing dimensionality and applying a nonlinearity (4 - 6) addresses both of theseproblems. 
We call this a deep bilinear attention mechanism, as opposed to shallow bilinear attention,which uses the recurrent states directly.h(arc-dep)i =MLP(arc-dep)(ri) (4)h(arc-head )j =MLP(arc-head )(rj) (5)s(arc)i =H(arc-head )U(1)h(arc-dep)i (6)+H(arc-head )u(2)We apply MLPs to the recurrent states before using them in the label classifier as well. As with othergraph-based models, the predicted tree at training time is the one where each word is a dependent ofits highest scoring head (although at test time we ensure that the parse is a well-formed tree via theMST algorithm).3Published as a conference paper at ICLR 20173.2 H YPERPARAMETER CONFIGURATIONParam Value Param ValueEmbedding size 100 Embedding dropout 33%LSTM size 400 LSTM dropout 33%Arc MLP size 500 Arc MLP dropout 33%Label MLP size 100 Label MLP dropout 33%LSTM depth 3 MLP depth 1 2e31,2 .9Annealing :75t5000tmax 50,000Table 1: Model hyperparametersAside from architectural differences between ours and the other graph-based parsers, we make anumber of hyperparameter choices that allow us to outperform theirs, laid out in Table 1. We use100-dimensional uncased word vectors2and POS tag vectors; three BiLSTM layers (400 dimensionsin each direction); and 500- and 100-dimensional ReLU MLP layers. We also apply dropout at everystage of the model: we drop words and tags (independently); we drop nodes in the LSTM layers(input and recurrent connections), applying the same dropout mask at every recurrent timestep (cf.the Bayesian dropout of Gal & Ghahramani (2015)); and we drop nodes in the MLP layers andclassifiers, likewise applying the same dropout mask at every timestep. We optimize the networkwith annealed Adam (Kingma & Ba, 2014) for about 50,000 steps, rounded up to the nearest epoch.4 E XPERIMENTS & R ESULTS4.1 D ATASETSWe show test results for the proposed model on the English Penn Treebank, converted into StanfordDependencies using both version 3.3.0 and version 3.5.0 of the Stanford Dependency converter(PTB-SD 3.3.0 and PTB-SD 3.5.0); the Chinese Penn Treebank; and the CoNLL 09 shared taskdataset,3following standard practices for each dataset. We omit punctuation from evaluation onlyfor the PTB-SD and CTB. For the English PTB-SD datasets, we use POS tags generated from theStanford POS tagger (Toutanova et al., 2003); for the Chinese PTB dataset we use gold tags; and forthe CoNLL 09 dataset we use the provided predicted tags. Our hyperparameter search was done withthe PTB-SD 3.5.0 validation dataset in order to minimize overfitting to the more popular PTB-SD3.3.0 benchmark, and in our hyperparameter analysis in the following section we report performanceon the PTB-SD 3.5.0 test set, shown in Tables 2 and 3.4.2 H YPERPARAMETER CHOICES4.2.1 A TTENTION MECHANISMWe examined the effect of different classifier architectures on accuracy and performance. What wesee is that the deep bilinear model outperforms the others with respect to both speed and accuracy.The model with shallow bilinear arc and label classifiers gets the same unlabeled performance as thedeep model with the same settings, but because the label classifier is much larger ( (801c801) asopposed to (101c101) ), it runs much slower and overfits. One way to decrease this overfittingis by increasing the MLP dropout, but that of course doesn’t change parsing speed; another way isto decrease the recurrent size to 300, but this hinders unlabeled accuracy without increasing parsingspeed up to the same levels as our deeper model. 
We also implemented the MLP-based approachto attention and classification used in Kiperwasser & Goldberg (2016).4We found this version to2We compute a “trained” embedding matrix composed of words that occur at least twice in the trainingdataset and add these embeddings to their corresponding pretrained embeddings. Any words that don’t occurin either embedding matrix are replaced with a separate OOV token.3We exclude the Japanese dataset from our evaluation because we do not have access to it.4In the version of TensorFlow we used, the model’s memory requirements during training exceeded theavailable memory on a single GPU when default settings were used, so we reduced the MLP hidden size to 2004Published as a conference paper at ICLR 2017Classifier SizeModel UAS LAS Sents/sec Model UAS LAS Sents/secDeep 95.75 94.22 410.91 3 layers, 400d 95.75 94.22 410.91Shallow 95.74 94.00* 298.99 3 layers, 300d 95.82 94.24 460.01Shallow, 50% drop 95.73 94.05* 300.04 3 layers, 200d 95.55* 93.89* 469.45Shallow, 300d 95.63* 93.86* 373.24 2 layers, 400d 95.62* 93.98* 497.99MLP 95.53* 93.91* 367.44 4 layers, 400d 95.83 94.22 362.09Recurrent CellModel UAS LAS Sents/secLSTM 95.75 94.22 410.91GRU 93.18* 91.08* 435.32Cif-LSTM 95.67 94.06* 463.25Table 2: Test accuracy and speed on PTB-SD 3.5.0. Statistically significant differences are markedwith an asterisk.Input Dropout AdamModel UAS LAS Model UAS LASDefault 95.75 94.22 2=:9 95.75 94.22No word dropout 95.74 94.08* 2=:999 95.53* 93.91*No tag dropout 95.28* 93.60*No tags 95.77 93.91*Table 3: Test Accuracy on PTB-SD 3.5.0. Statistically significant differences are marked with anasterisk.likewise be somewhat slower and significantly underperform the deep biaffine approach in bothlabeled and unlabeled accuracy.4.2.2 N ETWORK SIZEWe also examine more closely how network size influences speed and accuracy. In Kiperwasser& Goldberg’s 2016 model, the network uses 2 layers of 125-dimensional bidirectional LSTMs; inHashimoto et al.’s 2016 model, it has one layer of 100-dimensional bidirectional LSTMs dedicatedto parsing (two lower layers are also trained on other objectives); and Cheng et al.’s 2016 modelhas one layer of 368-dimensional GRU cells. We find that using three or four layers gets signifi-cantly better performance than two layers, and increasing the LSTM sizes from 200 to 300 or 400dimensions likewise signficantly improves performance.54.2.3 R ECURRENT CELLGRU cells have been promoted as a faster and simpler alternative to LSTM cells, and are used inthe approach of Cheng et al. (2016); however, in our model they drastically underperformed LSTMcells. We also implemented the coupled input-forget gate LSTM cells (Cif-LSTM) suggested byGreff et al. (2015),6finding that while the resulting model still slightly underperforms the morepopular LSTM cells, the difference between the two is much smaller. Additionally, because thegate and candidate cell activations can be computed simultaneously with one matrix multiplication,the Cif-LSTM model is faster than the GRU version even though they have the same number ofparameters. 
We hypothesize that the output gate in the Cif-LSTM model allows it to maintain asparse recurrent output state, which helps it adapt to the high levels of dropout needed to preventoverfitting in a way that GRU cells are unable to do.5The model with 400-dimensional recurrent states significantly outperforms the 300-dimensional one onthe validation set, but not on the test set6In addition to using a coupled input-forget gate, we remove the first tanh nonlinearity, which is no longerneeded when using a coupled gate5Published as a conference paper at ICLR 2017English PTB-SD 3.3.0 Chinese PTB 5.1Type Model UAS LAS UAS LASTransitionBallesteros et al. (2016) 93.56 91.42 87.65 86.21Andor et al. (2016) 94.61 92.79 – –Kuncoro et al. (2016) 95.8 94.6 – –GraphKiperwasser & Goldberg (2016) 93.9 91.9 87.6 86.1Cheng et al. (2016) 94.10 91.49 88.1 85.7Hashimoto et al. (2016) 94.67 92.90 – –Deep Biaffine 95.74 94.08 89.30 88.23Table 4: Results on the English PTB and Chinese PTB parsing datasetsCatalan Chinese CzechModel UAS LAS UAS LAS UAS LASAndor et al. 92.67 89.83 84.72 80.85 88.94 84.56Deep Biaffine 94.69 92.02 88.90 85.38 92.08 87.38English German SpanishModel UAS LAS UAS LAS UAS LASAndor et al. 93.22 91.23 90.91 89.15 92.62 89.95Deep Biaffine 95.21 93.20 93.46 91.44 94.34 91.65Table 5: Results on the CoNLL ’09 shared task datasets4.2.4 E MBEDDING DROPOUTBecause we increase the parser’s power, we also have to increase its regularization. In addition tousing relatively extreme dropout in the recurrent and MLP layers mentioned in Table 1, we alsoregularize the input layer. We drop 33% of words and 33% of tags during training: when one isdropped the other is scaled by a factor of two to compensate, and when both are dropped together,the model simply gets an input of zeros. Models trained with only word or tag dropout but notboth wind up signficantly overfitting, hindering label accuracy and—in the latter case—attachmentaccuracy. Interestingly, not using any tags at all actually results in better performance than usingtags without dropout.4.2.5 O PTIMIZERWe choose to optimize with Adam (Kingma & Ba, 2014), which (among other things) keeps amoving average of the L2norm of the gradient for each parameter throughout training and dividesthe gradient for each parameter by this moving average, ensuring that the magnitude of the gradientswill on average be close to one. However, we find that the value for 2recommended by Kingma& Ba—which controls the decay rate for this moving average—is too high for this task (and wesuspect more generally). When this value is very large, the magnitude of the current update isheavily influenced by the larger magnitude of gradients very far in the past, with the effect that theoptimizer can’t adapt quickly to recent changes in the model. Thus we find that setting 2to:9instead of:999makes a large positive impact on final performance.4.3 R ESULTSOur model gets nearly the same UAS performance on PTB-SD 3.3.0 as the current SOTA modelfrom Kuncoro et al. (2016) in spite of its substantially simpler architecture, and gets SOTA UASperformance on CTB 5.17as well as SOTA performance on all CoNLL 09 languages. It is worthnoting that the CoNLL 09 datasets contain many non-projective dependencies, which are difficultor impossible for transition-based—but not graph-based—parsers to predict. 
This may account forsome of the large, consistent difference between our model and Andor et al.’s 2016 transition-basedmodel applied to these datasets.7We’d like to thank Zhiyang Teng for finding a bug in the original code that affected the CTB 5.1 dataset6Published as a conference paper at ICLR 2017Where our model appears to lag behind the SOTA model is in LAS, indicating one of a few possibil-ities. Firstly, it may be the result of inefficiencies or errors in the GloVe embeddings or POS tagger,in which case using alternative pretrained embeddings or a more accurate tagger might improvelabel classification. Secondly, the SOTA model is specifically designed to capture phrasal composi-tionality; so another possibility is that ours doesn’t capture this compositionality as effectively, andthat this results in a worse label score. Similarly, it may be the result of a more general limitation ofgraph-based parsers, which have access to less explicit syntactic information than transition-basedparsers when making decisions. Addressing these latter two limitations would require a more inno-vative architecture than the relatively simple one used in current neural graph-based parsers.5 C ONCLUSIONIn this paper we proposed using a modified version of bilinear attention in a neural dependencyparser that increases parsing speed without hurting performance. We showed that our larger but moreregularized network outperforms other neural graph-based parsers and gets comparable performanceto the current SOTA transition-based parser. We also provided empirical motivation for the proposedarchitecture and configuration over similar ones in the existing literature. Future work will involveexploring ways of bridging the gap between labeled and unlabeled accuracy and augment the parserwith a smarter way of handling out-of-vocabulary tokens for morphologically richer languages. | ry1J89-4l | official review | 5: Marginally below acceptance threshold | The paper brings the new STOA in PTB dependency parsing. The numbers are very impressive.
Built upon the framework of the K&G parser, this improvement is achieved mainly by two things: (1) the paper replaces the original scorer with a bilinear scorer and distinguishes the head and modifier representations, and (2) hyperparameter tuning in the Adam trainer.
Although I think the bilinear modification makes some sense intuitively, I don't think this contribution alone is strong enough for a conference publication. The authors did not give a good explanation of why this approach works better in this case, nor did they show that this modification is generally applicable to other tasks. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Hk95PK9le | ICLR.cc/2017/conference | 2017 | Deep Biaffine Attention for Neural Dependency Parsing | ["Timothy Dozat", "Christopher D. Manning"] | This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with
biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.
| ["Natural language processing", "Deep learning"] | ABSTRACTThis paper builds off recent work from Kiperwasser & Goldberg (2016) usingneural attention in a simple graph-based dependency parser. We use a larger butmore thoroughly regularized parser than other recent BiLSTM-based approaches,with biaffine classifiers to predict arcs and labels. Our parser gets state of the art ornear state of the art performance on standard treebanks for six different languages,achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset.This makes it the highest-performing graph-based parser on this benchmark—outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%—and com-parable to the highest performing transition-based parser (Kuncoro et al., 2016),which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameterchoices had a significant effect on parsing accuracy, allowing us to achieve largegains over other graph-based approaches.1 I NTRODUCTIONDependency parsers—which annotate sentences in a way designed to be easy for humans and com-puters alike to understand—have been found to be extremely useful for a sizable number of NLPtasks, especially those involving natural language understanding in some way (Bowman et al., 2016;Angeli et al., 2015; Levy & Goldberg, 2014; Toutanova et al., 2016; Parikh et al., 2015). How-ever, frequent incorrect parses can severely inhibit final performance, so improving the quality ofdependency parsers is needed for the improvement and success of these downstream tasks.The current state-of-the-art transition-based neural dependency parser (Kuncoro et al., 2016) sub-stantially outperforms many much simpler neural graph-based parsers. We modify the neural graph-based approach first proposed by Kiperwasser & Goldberg (2016) in a few ways to achieve com-petitive performance: we build a network that’s larger but uses more regularization; we replace thetraditional MLP-based attention mechanism and affine label classifier with biaffine ones; and ratherthan using the top recurrent states of the LSTM in the biaffine transformations, we first put themthrough MLP operations that reduce their dimensionality. Furthermore, we compare models trainedwith different architectures and hyperparameters to motivate our approach empirically. The result-ing parser maintains most of the simplicity of neural graph-based approaches while approaching theperformance of the SOTA transition-based one.2 B ACKGROUND AND RELATED WORKTransition-based parsers—such as shift-reduce parsers—parse sentences from left to right, main-taining a “buffer” of words that have not yet been parsed and a “stack” of words whose head has notbeen seen or whose dependents have not all been fully parsed. At each step, transition-based parserscan access and manipulate the stack and buffer and assign arcs from one word to another. One canthen train any multi-class machine learning classifier on features extracted from the stack, buffer,and previous arc actions in order to predict the next action.Chen & Manning (2014) make the first successful attempt at incorporating deep learning into atransition-based dependency parser. At each step, the (feedforward) network assigns a probability toeach action the parser can take based on word, tag, and label embeddings from certain words on the1Published as a conference paper at ICLR 2017root /ROOT Casey/NNP hugged/VBD Kim/NNProotnsubj dobjFigure 1: A dependency tree parse for Casey hugged Kim , including part-of-speech tags and a specialroot token. 
Directed edges (or arcs) with labels (or relations) connect the verb to the root and thearguments to the verb head.stack and buffer. A number of other researchers have attempted to address some limitations of Chen& Manning’s Chen & Manning parser by augmenting it with additional complexity: Weiss et al.(2015) and Andor et al. (2016) augment it with a beam search and a conditional random field lossobjective to allow the parser to “undo” previous actions once it finds evidence that they may havebeen incorrect; and Dyer et al. (2015) and (Kuncoro et al., 2016) instead use LSTMs to representthe stack and buffer, getting state-of-the-art performance by building in a way of composing parsedphrases together.Transition-based parsing processes a sentence sequentially to build up a parse tree one arc at atime. Consequently, these parsers don’t use machine learning for directly predicting edges; theyuse it for predicting the operations of the transition algorithm. Graph-based parsers, by contrast,use machine learning to assign a weight or probability to each possible edge and then construct amaximum spaning tree (MST) from these weighted edges. Kiperwasser & Goldberg (2016) present aneural graph-based parser (in addition to a transition-based one) that uses the same kind of attentionmechanism as Bahdanau et al. (2014) for machine translation. In Kiperwasser & Goldberg’s 2016model, the (bidirectional) LSTM’s recurrent output vector for each word is concatenated with eachpossible head’s recurrent vector, and the result is used as input to an MLP that scores each resultingarc. The predicted tree structure at training time is the one where each word depends on its highest-scoring head. Labels are generated analogously, with each word’s recurrent output vector and itsgold or predicted head word’s recurrent vector being used in a multi-class MLP.Similarly, Hashimoto et al. (2016) include a graph-based dependency parser in their multi-task neu-ral model. In addition to training the model with multiple distinct objectives, they replace the tra-ditional MLP-based attention mechanism that Kiperwasser & Goldberg (2016) use with a bilinearone (but still using an MLP label classifier). This makes it analogous to Luong et al.’s 2015 pro-posed attention mechanism for neural machine translation. Cheng et al. (2016) likewise propose agraph-based neural dependency parser, but in a way that attempts to circumvent the limitation ofother neural graph-based parsers being unable to condition the scores of each possible arc on pre-vious parsing decisions. In addition to having one bidirectional recurrent network that computes arecurrent hidden vector for each word, they have additional, unidirectional recurrent networks (left-to-right and right-to-left) that keep track of the probabilities of each previous arc, and use thesetogether to predict the scores for the next arc.3 P ROPOSED DEPENDENCY PARSER3.1 D EEP BIAFFINE ATTENTIONWe make a few modifications to the graph-based architectures of Kiperwasser & Goldberg (2016),Hashimoto et al. (2016), and Cheng et al. 
(2016), shown in Figure 2: we use biaffine attentioninstead of bilinear or traditional MLP-based attention; we use a biaffine dependency label classifier;and we apply dimension-reducing MLPs to each recurrent output vector ribefore applying thebiaffine transformation.1The choice of biaffine rather than bilinear or MLP mechanisms makes theclassifiers in our model analogous to traditional affine classifiers, which use an affine transformationover a single LSTM output state ri(or other vector input) to predict the vector of scores sifor allclasses (1). We can think of the proposed biaffine attention mechanism as being a traditional affine1In this paper we follow the convention of using lowercase italic letters for scalars and indices, lowercasebold letters for vectors, uppercase italic letters for matrices, uppercase bold letters for higher order tensors. Wealso maintain this notation when indexing; so row iof matrixRwould be represented as ri.2Published as a conference paper at ICLR 2017. . .root ROOT Kim NNP1111> =BiLSTM: riEmbeddings: xiMLP: h(arc-dep)i;h(arc-head )iH(arc-dep)1U(arc)H(arc-head )S(arc)Figure 2: BiLSTM with deep biaffine attention to score each possible head for each dependent,applied to the sentence “Casey hugged Kim”. We reverse the order of the biaffine transformationhere for clarity.classifier, but using a (dd)linear transformation of the stacked LSTM output RU(1)in place ofthe weight matrix Wand a (d1)transformation Ru(2)for the bias term b(2).si=Wri+b Fixed-class affine classifier (1)s(arc)i =RU(1)ri+Ru(2)Variable-class biaffine classifier (2)In addition to being arguably simpler than the MLP-based approach (involving one bilinear layerrather than two linear layers and a nonlinearity), this has the conceptual advantage of directly mod-eling both the prior probability of a word jreceiving any dependents in the term r>ju(2)and thelikelihood of jreceiving a specific dependent iin the term r>jU(1)ri. Analogously, we also use abiaffine classifier to predict dependency labels given the gold or predicted head yi(3).s(label )i =r>yiU(1)ri+ (ryiri)>U(2)+b Fixed-class biaffine classifier (3)This likewise directly models each of the prior probability of each class, the likelihood of a classgiven just word i(how probable a word is to take a particular label), the likelihood of a class givenjust the head word yi(how probable a word is to take dependents with a particular label), and thelikelihood of a class given both word iand its head (how probable a word is to take a particular labelgiven that word’s head).Applying smaller MLPs to the recurrent output states before the biaffine classifier has the advantageof stripping away information not relevant to the current decision. That is, every top recurrent stateriwill need to carry enough information to identify word i’s head, find all its dependents, exclude allits non-dependents, assign itself the correct label, and assign all its dependents their correct labels, aswell as transfer any relevant information to the recurrent states of words before and after it. Thus rinecessarily contains significantly more information than is needed to compute any individual score,and training on this superfluous information needlessly reduces parsing speed and increases the riskof overfitting. Reducing dimensionality and applying a nonlinearity (4 - 6) addresses both of theseproblems. 
We call this a deep bilinear attention mechanism, as opposed to shallow bilinear attention,which uses the recurrent states directly.h(arc-dep)i =MLP(arc-dep)(ri) (4)h(arc-head )j =MLP(arc-head )(rj) (5)s(arc)i =H(arc-head )U(1)h(arc-dep)i (6)+H(arc-head )u(2)We apply MLPs to the recurrent states before using them in the label classifier as well. As with othergraph-based models, the predicted tree at training time is the one where each word is a dependent ofits highest scoring head (although at test time we ensure that the parse is a well-formed tree via theMST algorithm).3Published as a conference paper at ICLR 20173.2 H YPERPARAMETER CONFIGURATIONParam Value Param ValueEmbedding size 100 Embedding dropout 33%LSTM size 400 LSTM dropout 33%Arc MLP size 500 Arc MLP dropout 33%Label MLP size 100 Label MLP dropout 33%LSTM depth 3 MLP depth 1 2e31,2 .9Annealing :75t5000tmax 50,000Table 1: Model hyperparametersAside from architectural differences between ours and the other graph-based parsers, we make anumber of hyperparameter choices that allow us to outperform theirs, laid out in Table 1. We use100-dimensional uncased word vectors2and POS tag vectors; three BiLSTM layers (400 dimensionsin each direction); and 500- and 100-dimensional ReLU MLP layers. We also apply dropout at everystage of the model: we drop words and tags (independently); we drop nodes in the LSTM layers(input and recurrent connections), applying the same dropout mask at every recurrent timestep (cf.the Bayesian dropout of Gal & Ghahramani (2015)); and we drop nodes in the MLP layers andclassifiers, likewise applying the same dropout mask at every timestep. We optimize the networkwith annealed Adam (Kingma & Ba, 2014) for about 50,000 steps, rounded up to the nearest epoch.4 E XPERIMENTS & R ESULTS4.1 D ATASETSWe show test results for the proposed model on the English Penn Treebank, converted into StanfordDependencies using both version 3.3.0 and version 3.5.0 of the Stanford Dependency converter(PTB-SD 3.3.0 and PTB-SD 3.5.0); the Chinese Penn Treebank; and the CoNLL 09 shared taskdataset,3following standard practices for each dataset. We omit punctuation from evaluation onlyfor the PTB-SD and CTB. For the English PTB-SD datasets, we use POS tags generated from theStanford POS tagger (Toutanova et al., 2003); for the Chinese PTB dataset we use gold tags; and forthe CoNLL 09 dataset we use the provided predicted tags. Our hyperparameter search was done withthe PTB-SD 3.5.0 validation dataset in order to minimize overfitting to the more popular PTB-SD3.3.0 benchmark, and in our hyperparameter analysis in the following section we report performanceon the PTB-SD 3.5.0 test set, shown in Tables 2 and 3.4.2 H YPERPARAMETER CHOICES4.2.1 A TTENTION MECHANISMWe examined the effect of different classifier architectures on accuracy and performance. What wesee is that the deep bilinear model outperforms the others with respect to both speed and accuracy.The model with shallow bilinear arc and label classifiers gets the same unlabeled performance as thedeep model with the same settings, but because the label classifier is much larger ( (801c801) asopposed to (101c101) ), it runs much slower and overfits. One way to decrease this overfittingis by increasing the MLP dropout, but that of course doesn’t change parsing speed; another way isto decrease the recurrent size to 300, but this hinders unlabeled accuracy without increasing parsingspeed up to the same levels as our deeper model. 
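To ground this comparison, the deep biaffine arc scorer of equations (4)-(6) can be written out in a few lines. The NumPy sketch below is our own illustration rather than the parser's released code; the shapes follow Table 1 (three 400-dimensional BiLSTM layers giving 800-dimensional output states and a 500-dimensional arc MLP), and the random inputs are placeholders only.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_biaffine_arc_scores(R, W_dep, W_head, U1, u2):
    """Deep biaffine arc scoring following Eqs. (4)-(6).
    R:      (n, 2d) stacked BiLSTM output states, one row per word.
    W_dep:  (2d, k) arc-dep MLP weights; W_head: (2d, k) arc-head MLP weights.
    U1:     (k, k) bilinear weight; u2: (k,) linear "prior" weight.
    Returns S: (n, n) where S[i, j] scores head j for dependent i.
    """
    H_dep = relu(R @ W_dep)            # h_i^(arc-dep),  Eq. (4)
    H_head = relu(R @ W_head)          # h_j^(arc-head), Eq. (5)
    # Bilinear term: likelihood of head j taking dependent i;
    # linear term: prior of word j receiving any dependent.  Eq. (6)
    return H_dep @ U1.T @ H_head.T + (H_head @ u2)[None, :]

# Toy usage with Table 1's sizes (800-dim states, 500-dim arc MLP).
rng = np.random.default_rng(0)
n, d2, k = 5, 800, 500
R = rng.normal(size=(n, d2))
S = deep_biaffine_arc_scores(R,
                             rng.normal(size=(d2, k)), rng.normal(size=(d2, k)),
                             rng.normal(size=(k, k)), rng.normal(size=k))
heads = S.argmax(axis=1)   # greedy heads at training time
```

Greedy argmax decoding matches the training-time prediction described above; at test time the MST algorithm replaces it to guarantee a well-formed tree.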
We also implemented the MLP-based approachto attention and classification used in Kiperwasser & Goldberg (2016).4We found this version to2We compute a “trained” embedding matrix composed of words that occur at least twice in the trainingdataset and add these embeddings to their corresponding pretrained embeddings. Any words that don’t occurin either embedding matrix are replaced with a separate OOV token.3We exclude the Japanese dataset from our evaluation because we do not have access to it.4In the version of TensorFlow we used, the model’s memory requirements during training exceeded theavailable memory on a single GPU when default settings were used, so we reduced the MLP hidden size to 2004Published as a conference paper at ICLR 2017Classifier SizeModel UAS LAS Sents/sec Model UAS LAS Sents/secDeep 95.75 94.22 410.91 3 layers, 400d 95.75 94.22 410.91Shallow 95.74 94.00* 298.99 3 layers, 300d 95.82 94.24 460.01Shallow, 50% drop 95.73 94.05* 300.04 3 layers, 200d 95.55* 93.89* 469.45Shallow, 300d 95.63* 93.86* 373.24 2 layers, 400d 95.62* 93.98* 497.99MLP 95.53* 93.91* 367.44 4 layers, 400d 95.83 94.22 362.09Recurrent CellModel UAS LAS Sents/secLSTM 95.75 94.22 410.91GRU 93.18* 91.08* 435.32Cif-LSTM 95.67 94.06* 463.25Table 2: Test accuracy and speed on PTB-SD 3.5.0. Statistically significant differences are markedwith an asterisk.Input Dropout AdamModel UAS LAS Model UAS LASDefault 95.75 94.22 2=:9 95.75 94.22No word dropout 95.74 94.08* 2=:999 95.53* 93.91*No tag dropout 95.28* 93.60*No tags 95.77 93.91*Table 3: Test Accuracy on PTB-SD 3.5.0. Statistically significant differences are marked with anasterisk.likewise be somewhat slower and significantly underperform the deep biaffine approach in bothlabeled and unlabeled accuracy.4.2.2 N ETWORK SIZEWe also examine more closely how network size influences speed and accuracy. In Kiperwasser& Goldberg’s 2016 model, the network uses 2 layers of 125-dimensional bidirectional LSTMs; inHashimoto et al.’s 2016 model, it has one layer of 100-dimensional bidirectional LSTMs dedicatedto parsing (two lower layers are also trained on other objectives); and Cheng et al.’s 2016 modelhas one layer of 368-dimensional GRU cells. We find that using three or four layers gets signifi-cantly better performance than two layers, and increasing the LSTM sizes from 200 to 300 or 400dimensions likewise signficantly improves performance.54.2.3 R ECURRENT CELLGRU cells have been promoted as a faster and simpler alternative to LSTM cells, and are used inthe approach of Cheng et al. (2016); however, in our model they drastically underperformed LSTMcells. We also implemented the coupled input-forget gate LSTM cells (Cif-LSTM) suggested byGreff et al. (2015),6finding that while the resulting model still slightly underperforms the morepopular LSTM cells, the difference between the two is much smaller. Additionally, because thegate and candidate cell activations can be computed simultaneously with one matrix multiplication,the Cif-LSTM model is faster than the GRU version even though they have the same number ofparameters. 
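The coupled input-forget cell is compact enough to state directly. The sketch below is our reading of the variant described here (coupled gate, first tanh removed), not code from Greff et al. (2015), and the fused weight layout is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cif_lstm_step(x, h, c, W, b):
    """One step of a coupled input-forget (Cif) LSTM cell.
    W: (input_dim + hidden_dim, 3 * hidden_dim), b: (3 * hidden_dim,).
    Forget gate, output gate, and candidate cell all come from ONE fused
    matrix multiplication, which is why this cell is faster than a GRU
    with the same parameter count: a GRU's candidate must wait for the
    reset gate before its own matrix multiplication can run.
    """
    z = np.concatenate([x, h]) @ W + b
    f_pre, o_pre, g = np.split(z, 3)
    f = sigmoid(f_pre)                  # forget gate
    o = sigmoid(o_pre)                  # output gate
    # Coupled gate: the input gate is tied to 1 - f, and (per the variant
    # described above) the usual tanh on the candidate g is dropped.
    c_new = f * c + (1.0 - f) * g
    h_new = o * np.tanh(c_new)          # the output gate can zero out units,
    return h_new, c_new                 # keeping the recurrent output sparse
```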
We hypothesize that the output gate in the Cif-LSTM model allows it to maintain asparse recurrent output state, which helps it adapt to the high levels of dropout needed to preventoverfitting in a way that GRU cells are unable to do.5The model with 400-dimensional recurrent states significantly outperforms the 300-dimensional one onthe validation set, but not on the test set6In addition to using a coupled input-forget gate, we remove the first tanh nonlinearity, which is no longerneeded when using a coupled gate5Published as a conference paper at ICLR 2017English PTB-SD 3.3.0 Chinese PTB 5.1Type Model UAS LAS UAS LASTransitionBallesteros et al. (2016) 93.56 91.42 87.65 86.21Andor et al. (2016) 94.61 92.79 – –Kuncoro et al. (2016) 95.8 94.6 – –GraphKiperwasser & Goldberg (2016) 93.9 91.9 87.6 86.1Cheng et al. (2016) 94.10 91.49 88.1 85.7Hashimoto et al. (2016) 94.67 92.90 – –Deep Biaffine 95.74 94.08 89.30 88.23Table 4: Results on the English PTB and Chinese PTB parsing datasetsCatalan Chinese CzechModel UAS LAS UAS LAS UAS LASAndor et al. 92.67 89.83 84.72 80.85 88.94 84.56Deep Biaffine 94.69 92.02 88.90 85.38 92.08 87.38English German SpanishModel UAS LAS UAS LAS UAS LASAndor et al. 93.22 91.23 90.91 89.15 92.62 89.95Deep Biaffine 95.21 93.20 93.46 91.44 94.34 91.65Table 5: Results on the CoNLL ’09 shared task datasets4.2.4 E MBEDDING DROPOUTBecause we increase the parser’s power, we also have to increase its regularization. In addition tousing relatively extreme dropout in the recurrent and MLP layers mentioned in Table 1, we alsoregularize the input layer. We drop 33% of words and 33% of tags during training: when one isdropped the other is scaled by a factor of two to compensate, and when both are dropped together,the model simply gets an input of zeros. Models trained with only word or tag dropout but notboth wind up signficantly overfitting, hindering label accuracy and—in the latter case—attachmentaccuracy. Interestingly, not using any tags at all actually results in better performance than usingtags without dropout.4.2.5 O PTIMIZERWe choose to optimize with Adam (Kingma & Ba, 2014), which (among other things) keeps amoving average of the L2norm of the gradient for each parameter throughout training and dividesthe gradient for each parameter by this moving average, ensuring that the magnitude of the gradientswill on average be close to one. However, we find that the value for 2recommended by Kingma& Ba—which controls the decay rate for this moving average—is too high for this task (and wesuspect more generally). When this value is very large, the magnitude of the current update isheavily influenced by the larger magnitude of gradients very far in the past, with the effect that theoptimizer can’t adapt quickly to recent changes in the model. Thus we find that setting 2to:9instead of:999makes a large positive impact on final performance.4.3 R ESULTSOur model gets nearly the same UAS performance on PTB-SD 3.3.0 as the current SOTA modelfrom Kuncoro et al. (2016) in spite of its substantially simpler architecture, and gets SOTA UASperformance on CTB 5.17as well as SOTA performance on all CoNLL 09 languages. It is worthnoting that the CoNLL 09 datasets contain many non-projective dependencies, which are difficultor impossible for transition-based—but not graph-based—parsers to predict. 
This may account forsome of the large, consistent difference between our model and Andor et al.’s 2016 transition-basedmodel applied to these datasets.7We’d like to thank Zhiyang Teng for finding a bug in the original code that affected the CTB 5.1 dataset6Published as a conference paper at ICLR 2017Where our model appears to lag behind the SOTA model is in LAS, indicating one of a few possibil-ities. Firstly, it may be the result of inefficiencies or errors in the GloVe embeddings or POS tagger,in which case using alternative pretrained embeddings or a more accurate tagger might improvelabel classification. Secondly, the SOTA model is specifically designed to capture phrasal composi-tionality; so another possibility is that ours doesn’t capture this compositionality as effectively, andthat this results in a worse label score. Similarly, it may be the result of a more general limitation ofgraph-based parsers, which have access to less explicit syntactic information than transition-basedparsers when making decisions. Addressing these latter two limitations would require a more inno-vative architecture than the relatively simple one used in current neural graph-based parsers.5 C ONCLUSIONIn this paper we proposed using a modified version of bilinear attention in a neural dependencyparser that increases parsing speed without hurting performance. We showed that our larger but moreregularized network outperforms other neural graph-based parsers and gets comparable performanceto the current SOTA transition-based parser. We also provided empirical motivation for the proposedarchitecture and configuration over similar ones in the existing literature. Future work will involveexploring ways of bridging the gap between labeled and unlabeled accuracy and augment the parserwith a smarter way of handling out-of-vocabulary tokens for morphologically richer languages. | S1iorxrNg | final review | 5: Marginally below acceptance threshold | The paper proposes a new function for computing arc score between two words in a sentence for dependency parsing. The proposed function is biaffine in the sense that it's a combination of a bilinear score function and a bias term playing a role as prior. The paper reports new state-of-the-art dependency parsing performances on both English PTB and Chinese TB.
The paper is very well written, with impressive experimental results and analysis. However, the idea is hardly novel with regard to the theme of the conference: the framework that the paper uses is from Kiperwasser & Goldberg (2016), and the use of a bilinear score function for attention is from Luong et al. (2015). Projecting BiLSTM outputs into different spaces using MLPs is a trivial step to make the model "deeper", whereas adding linear bias terms isn't confirmed to work in the experiments (Table 2 shows that diag bilinear performs comparably to biaffine).
I think that this paper is better suited to NLP conferences. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJ_MGwqlg | ICLR.cc/2017/conference | 2017 | Rethinking Numerical Representations for Deep Neural Networks | ["Parker Hill", "Babak Zamirai", "Shengshuo Lu", "Yu-Wei Chao", "Michael Laurenzano", "Mehrzad Samadi", "Marios Papaefthymiou", "Scott Mahlke", "Thomas Wenisch", "Jia Deng", "Lingjia Tang", "Jason Mars"] | With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as it relates to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6x with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration. | ["Deep learning"] | ABSTRACTWith ever-increasing computational demand for deep learning, it is critical to in-vestigate the implications of the numeric representation and precision of DNNmodel weights and activations on computational efficiency. In this work, we ex-plore unconventional narrow-precision floating-point representations as it relatesto inference accuracy and efficiency to steer the improved design of future DNNplatforms. We show that inference using these custom numeric representationson production-grade DNNs, including GoogLeNet and VGG, achieves an averagespeedup of 7.6with less than 1% degradation in inference accuracy relative toa state-of-the-art baseline platform representing the most sophisticated hardwareusing single-precision floating point. To facilitate the use of such customized pre-cision, we also present a novel technique that drastically reduces the time requiredto derive the optimal precision configuration.1 I NTRODUCTIONRecently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide arrayof AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannunet al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic inno-vations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behindthese successes are advances in computing infrastructure that enable large-scale deep learning—thetraining and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al.(2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first break-through of deep learning for image classification Krizhevsky et al. (2012). Given the ever growingamount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platformsto support faster, more resource-efficient DNN computation.A set of core design decisions are common to the design of these infrastructures. One such criti-cal choice is the numerical representation and precision used in the implementation of underlyingstorage and computation. 
Several recent works have investigated the numerical representation forDNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). Onerecent work found that substantially lower precision can be used for training when the correct nu-merical rounding method is employed Gupta et al. (2015). Their work resulted in the design of avery energy-efficient DNN platform.This work and other previous numerical representation studies for DNNs have either limited them-selves to a small subset of the customized precision design space or drew conclusions using onlysmall neural networks. For example, the work from Gupta et al. 2015 evaluates 16-bit fixed-pointand wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky& Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numericrepresentations. Exploring a limited customized precision design space inevitably results in designslacking in energy efficiency and computational performance. Evaluating customized precision ac-curacy based on small neural networks requires the assumption that much larger, production-gradeneural networks would operate comparably when subjected to the same customized precision.In this work, we explore the accuracy-efficiency trade-off made available via specialized custom-precision hardware for inference and present a method to efficiently traverse this large design spaceto find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized1Under review as a conference paper at ICLR 2017integer fraction11001.01110||||||||||... ... Figure 1: A fixed-point representation. Hard-ware parameters include the total number of bitsand the position of the radix point.x2mantissa1.01101|||||...exponent10011|||||... - biasFigure 2: A floating-point representation. Hard-ware parameters include the number of mantissaand exponent bits, and the bias.precision settings for fixed-point and floating-point representations on accuracy and computationalperformance. We evaluate these customized precision configurations on large, state-of-the-art neu-ral networks. By evaluating the full computational precision design space on a spectrum of theseproduction-grade DNNs, we find that:1. Precision requirements do not generalize across all neural networks. This prompts designersof future DNN infrastructures to carefully consider the applications that will be executed ontheir platforms, contrary to works that design for large networks and evaluate accuracy on smallnetworks Cavigelli et al. (2015); Chen et al. (2014).2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than pre-viously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al.(2014). For example, we find that GoogLeNet requires on the order of 40 bits when implementedwith fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.3. Floating-point representations are more efficient than fixed-point representations when selectingoptimal precision settings. For example, a 17-bit floating-point representation is acceptable forGoogLeNet, while over 40 bits are required for the fixed-point representation – a more expensivecomputation than the standard single precision floating-point format. Current platform designersshould reconsider the use of the floating-point representations for DNN computations instead ofthe commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Duet al. 
(2014); Muller & Indiveri (2015).To make these conclusions on large-scale customized precision design readily actionable for DNNinfrastructure designers, we propose and validate a novel technique to quickly search the large cus-tomized precision design space. This technique leverages the activations in the last layer to builda model to predict accuracy based on the insight that these activations effectively capture the prop-agation of numerical error from computation. Using this method on deployable DNNs, includingGoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that usingthese recommendations to introduce customized precision into a DNN accelerator fabric results inan average speedup of 7.6 with less than 1% degradation in inference accuracy.2 C USTOMIZED PRECISION HARDWAREWe begin with an overview of the available design choices in the representation of real numbers inbinary and discuss how these choices impact hardware performance.2.1 D ESIGN SPACEWe consider three aspects of customized precision number representations. First, we contrast thehigh-level choice between fixed-point and floating-point representations. Fixed-point binary arith-metic is computationally identical to integer arithmetic, simply changing the interpretation of eachbit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a realnumber separately. Floating-point calculations involve several steps absent in integer arithmetic. Inparticular, addition operations require aligning the mantissas of each operand. As a result, floating-point computation units are substantially larger, slower, and more complex than integer units.In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed accord-ing to the data types supported by the hardware. Thus, the second aspect of precision customizationwe examine is to consider customizing the number of bits used in representing floating-point andfixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and assignmentof bits to the mantissa and exponent in a floating-point value.2.2 C USTOMIZED PRECISION TYPESIn a fixed-point representation, we select the number of bits as well as the position of the radix point,which separates integer and fractional bits, as illustrated in Figure 1. A bit array, x, encoded in fixedpoint with the radix point at bit l(counting from the right) represents the value 2lPN1i=02ixi.2Under review as a conference paper at ICLR 2017Sign Exponent Mantissa Sign Exponent MantissaComparatorSign Exponent Mantissa8 7 6 5 4 3 2 1 0Delay+×FSMControllerAlignmentAlignmentAddition/SubtractionAlignmentIncrement /Decrement8 7 6 5 4 3 2 1 0(a) (b) (c)Figure 3: Floating point multiply-accumulate (MAC) unit with various levels of detail: (a) the highlevel mathematical operation, (b) the modules that form a floating point MAC, and (c) the signalpropagation of the unit.In contrast to floating point, fixed-point representations with a particular number of bits have a fixedlevel of precision. By varying the position of the radix point, we change the representable range.An example floating-point representation is depicted in Figure 2. As shown in the figure, thereare three parameters to select when designing a floating-point representation: the bit-width ofthe mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissaand exponent control precision and dynamic range, respectively. 
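To make these parameters concrete, the following sketch (our own illustration, not part of the paper's toolchain) decodes bit patterns under both representations: the fixed-point value defined above, and the floating-point encoding whose bias term and full formula are spelled out next. Special encodings such as zero and infinity are ignored.

```python
def decode_fixed(bits, l):
    """Fixed-point value 2**-l * sum_i 2**i * x_i; bits[0] is the LSB,
    and l of the bits sit to the right of the radix point."""
    return sum(b << i for i, b in enumerate(bits)) * 2.0 ** -l

def decode_float(sign, exp_bits, man_bits, bias):
    """Custom float (-1)**sign * 2**(e - bias) * (1 + sum_i 2**-i * m_i),
    with an implicit leading 1 on the mantissa. exp_bits and man_bits are
    given most-significant-bit first; zero/infinity encodings are ignored."""
    e = sum(b << i for i, b in enumerate(reversed(exp_bits)))
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(man_bits))
    return (-1) ** sign * 2.0 ** (e - bias) * (1.0 + frac)

# Figure 1's pattern 11001.01110 (l = 5 fractional bits) -> 25.4375
print(decode_fixed([0, 1, 1, 1, 0, 1, 0, 0, 1, 1], l=5))
# Figure 2's example, mantissa 1.01101 and exponent 10011 minus a bias
# (bias = 15 is our assumption here) -> 16 * 1.40625 = 22.5
print(decode_float(0, [1, 0, 0, 1, 1], [0, 1, 1, 0, 1], bias=15))
```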
The exponent bias adjusts theoffset of the exponent (which is itself represented as an unsigned integer) relative to zero to fa-cilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, afloating-point format with Nmmantissa bits, Neexponent bits, and a bias of b, encodes the value2(PNe1i=02iei)b(1 +PNmi=12imi), where mandeare the segments of a bit array representingthe mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be1and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e. float ) comprises 23 mantissa bits, 8 exponent bits,and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specificvalues, such as zero and infinity.Both fixed-point and floating-point representations have limitations in terms of the precision and thedynamic ranges available given particular representations, manifesting themselves computationallyas rounding and saturation errors. These errors propagate through the deep neural network in a waythat is difficult to estimate holistically, prompting experimentation on the DNN itself.2.3 H ARDWARE IMPLICATIONSThe key hardware building block for implementing DNNs is the multiply-accumulate (MAC) op-eration. The MAC operation implements the sum-of-products operation that is fundamental to theactivation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3(a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations.As seen in the figure, floating-point addition operations involve a number of sub-components thatcompare exponents, align mantissas, perform the addition, and normalize the result. Nearly all ofthe sub-components of the MAC unit scale in speed, power, and area with the bit width.Reducing the floating-point bit width improves hardware performance in two ways. First, reducedbit width makes a computation unit faster. Binary arithmetic computations involve chains of logicoperations that typically grows at least logarithmically, and sometimes linearly (e.g., the propagationof carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces thelength of these chains, allowing the logic to operate at a higher clock frequency. Second, reducedbit width makes a computation unit smaller and require less energy, typically linearly in the numberof bits. The circuit delay and area is shown in Figure 4 when the mantissa bit widths are varied. Asshown in the figure, scaling the length of the mantissa provides substantial opportunity because itdefines the size of the internal addition unit. Similar trends follow for bit-widths in other represen-tations. When a unit is smaller, more replicas can fit within the same chip area and power budget,all of which can operate in parallel. Hence, for computations like those in DNNs, where ampleparallelism is available, area reductions translate into proportional performance improvement.This trend of bit width versus speed, power, and area is applicable to every computation unit inhardware DNN implementations. 
Thus, in designing hardware that uses customized representations3Under review as a conference paper at ICLR 20175 10 15 200.00.20.40.60.81.0Normalized AreaNormalized DelayMantissa BitsFigure 4: Delay and area implications of man-tissa width, normalized to a 32-bit Single Preci-sion MAC with 23 mantissa bits.32-bit MAC11-bitMAC11-bitMAC11-bitMAC11-bitMACDelay: 10τDelay: 4τParallelism: 1v Parallelism: 4v1v / 10τ 4v / 4τ10x speedupFigure 5: Speedup calculation with a fixed areabudget. The speedup exploits the improvedfunction delay and parallelism.there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Ourgoal is to use precision that delivers sufficient accuracy while attaining large improvements in power,area, and speed over standard floating-point designs.3 M ETHODOLOGYWe describe the methodology we use to evaluate the customized precision design space, using imageclassification tasks of varying complexity as a proxy for computer vision applications. We evaluateDNN implementations using several metrics, classification accuracy, speedup, and energy savingsrelative to a baseline custom hardware design that uses single-precision floating-point representa-tions. Using the results of this analysis, we propose and validate a search technique to efficientlydetermine the correct customized precision design point.3.1 A CCURACYWe evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to performcalculations with arbitrary fixed-point and floating-point formats. We continue to store values as Cfloat s in Caffe, but truncate the mantissa and exponent to the desired format after each arithmeticoperation. Accuracy, using a set of test inputs disjoint from the training input set, is then measuredby running the forward pass of a DNN model with the customized format and comparing the out-puts with the ground truth. We use the standard accuracy metrics that accompany the dataset foreach DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy and for Ima-geNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percentof inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracyrepresents the percent of inputs that DNN predicts correctly after five attempts.3.2 E FFICIENCYWe quantify the efficiency advantages of customized floating-point representations by designing afloating-point MAC unit in each candidate precision and determining its silicon area and delay char-acteristics. We then report speedup and energy savings relative to a baseline custom hardware im-plementation of a DNN that uses standard single-precision floating-point computations. We designeach variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industrystandard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The toolsreport the power, delay, and area characteristics of each precision variant. As shown in Figure 5,we compute speedups and energy savings relative to the standardized IEEE-754 floating-point rep-resentation considering both the clock frequency advantage and improved parallelism due to areareduction of the narrower bit-width MAC units. This allows customized precision designs to yield aquadratic improvement in total system throughput.3.3 E FFICIENT CUSTOMIZED PRECISION SEARCHTo exploit the benefits of customized precision, a mechanism to select the correct configurationmust be introduced. 
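Before turning to the search itself, it helps to pin down what evaluating a configuration means under the Section 3.1 methodology: values remain C floats, but the mantissa and exponent are truncated to the target format after each arithmetic operation. The sketch below is our rough emulation of that procedure; truncation toward zero and saturation on overflow are our assumptions where the text does not specify.

```python
import math

def truncate_to_custom_float(x, n_man, n_exp, bias):
    """Emulate an (n_man mantissa bits, n_exp exponent bits, bias) float
    by truncating a double, roughly as in the paper's modified Caffe."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))           # abs(x) = m * 2**e, m in [0.5, 1)
    e -= 1                               # rewrite as 1.f * 2**e
    e_min, e_max = -bias, (2 ** n_exp - 1) - bias
    if e < e_min:
        return 0.0                       # underflow flushes to zero
    if e > e_max:                        # overflow: saturate at the max value
        return sign * (2.0 - 2.0 ** -n_man) * 2.0 ** e_max
    frac = 2.0 * m                       # in [1, 2)
    frac = math.floor(frac * 2 ** n_man) / 2 ** n_man  # keep n_man bits
    return sign * frac * 2.0 ** e

# A multiply-accumulate loop, truncating after every operation:
acc = 0.0
for w, a in [(0.37, 1.2), (-0.05, 3.4), (0.81, -0.7)]:
    prod = truncate_to_custom_float(w * a, n_man=8, n_exp=6, bias=31)
    acc = truncate_to_custom_float(acc + prod, n_man=8, n_exp=6, bias=31)
```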
There are hundreds of designs among floating-point and fixed-point formats, since designs vary by the total bit width and by the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to selecting the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. This strategy requires substantial computational resources that are proportional to the size of the network and the variety of output classifications. We describe a technique that significantly reduces the time required to search for the correct configuration, in order to facilitate the use of customized precision.

The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden by evaluating only the final NN accuracy metric. Thus, instead of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layers of two configurations by calculating the linear coefficient of determination between their activations.

Figure 6: The inference accuracy versus speedup design space for each of the neural networks, showing substantial computational performance improvements for minimal accuracy degradation when customized precision floating-point formats are used. (Five panels: (a) GoogLeNet, (b) VGG, (c) AlexNet, (d) CIFARNET, (e) LeNet-5; each plots accuracy against speedup for custom floating-point and custom fixed-point designs, with IEEE 754 single precision as the reference point.)

A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy is evaluated for a subset of configurations. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format.
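Schematically, the search reduces to a few steps: compute the coefficient of determination between last-layer activations, map it through the fitted linear model, and take the fastest format that clears the accuracy threshold, spot-checking a couple of designs. The helper names (run_last_layer, speedup, eval_accuracy) and the coefficients a and b below are hypothetical stand-ins for the paper's machinery:

```python
import numpy as np

def r_squared(ref_acts, test_acts):
    """Linear coefficient of determination between flattened last-layer
    activations of the exact and reduced-precision networks."""
    r = np.corrcoef(np.ravel(ref_acts), np.ravel(test_acts))[0, 1]
    return r ** 2

def select_precision(configs, ref_acts, run_last_layer, speedup,
                     eval_accuracy, target_acc, a, b):
    """configs: candidate formats, e.g. (n_man, n_exp, bias) tuples.
    Predicted accuracy = a * R^2 + b per the fitted linear model; only a
    handful of inputs (~10) are needed per R^2, so this stage is cheap."""
    predicted = {c: a * r_squared(ref_acts, run_last_layer(c)) + b
                 for c in configs}
    candidates = sorted((c for c in configs if predicted[c] >= target_acc),
                        key=speedup, reverse=True)
    # Refinement (the "model + X samples" variants): measure true accuracy
    # for the top pick(s); on a miss, fall through to the next-fastest
    # design, which in the paper corresponds to adding a bit to the format.
    for c in candidates:
        if eval_accuracy(c) >= target_acc:
            return c
    raise ValueError("no candidate met the accuracy target")
```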
4 EXPERIMENTS

In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.

4.1 EXPERIMENTAL SETUP

We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet Szegedy et al. (2015), VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), CIFARNET Krizhevsky & Hinton (2009), and LeNet-5 LeCun et al. (1998). The implementations and pre-trained weights for these DNNs were taken from Caffe Jia et al. (2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space. In these cases we use a randomly selected 1% of the validation set to make the experiments tractable.

4.2 ACCURACY VERSUS EFFICIENCY TRADE-OFFS

To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade-off is shown in Figure 6. This figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single precision representation (i.e. the original accuracy with a 1x speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively.

For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.

Figure 7: The speedup and energy savings as the two parameters are adjusted for the custom floating-point and fixed-point representations.
The marked area denotes configurations where the total loss in AlexNet accuracy is less than 1%. (Panels: (a) floating-point speedup, (b) fixed-point speedup, (c) floating-point energy, (d) fixed-point energy, each plotted over mantissa/exponent or integer/fraction bit allocations.)

Figure 8: The accumulation of weighted neuron inputs for a specific neuron with various customized precision DNNs as well as the IEEE 754 single precision floating point configuration for reference. FL and FI are used to abbreviate floating point and fixed point, respectively. The format parameters are as follows: M=mantissa, E=exponent, L=bits left of radix point, R=bits right of radix point. (Curves: [1] IEEE 754 Single Prec., [2] Custom FL M=8/E=6, [3] Custom FL M=2/E=14, [4] Custom FL M=10/E=4, [5] Custom FI L=8/R=6.)

Figure 9: The linear fit from the correlation between normalized accuracy and last-layer activations of the exact and customized precision DNNs.

By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impact its customized precision flexibility. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.

The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7. This figure shows the speedup and energy improvements over the single precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no less than a 1% degradation in accuracy from the IEEE 754 single precision accuracy on classification in AlexNet).

The fastest and most energy-efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits are used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2x speedup and a 3.4x savings in energy over the single precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, such as 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7x speedup and 3.0x energy savings.

4.3 SOURCES OF ACCUMULATION ERROR

In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for customized precision settings to match.
We find two causes of error between the cus-tomized precision fixed-point and floating-point representations, saturation and excessive rounding.In the fixed-point case (green line, representing 16 bits with the radix point in the center), the centralcause of error is from saturation at the extreme values. The running sum exceeds 255, the maximumrepresentable value in this representation, after 60 inputs are accumulated, as seen in the figure.6Under review as a conference paper at ICLR 2017Figure 10: The speedup achieved by selecting the customized precision using an exhaustive search(i.e. the ideal design) and prediction using the accuracy model with accuracy evaluated for somenumber of configurations (model + X samples). The floating-point (FL) and fixed-point (FI) resultsare shown in the top and bottom rows, respectively. The model with two evaluated designs producesthe same configurations, but requires <0.6% of the search time.After reaching saturation, the positive values are discarded and the final output is unpredictable.Although floating-point representations do not saturate as easily, the floating-point configurationwith 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs.Again, the lost information from saturation causes an unpredictable final output.For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and ex-ponent (blue line), respectively, we find that the lack of precision for large values causes excessiverounding errors. As shown in the figure, after accumulating 120 inputs, this configuration’s run-ning sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponentnormalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of thecustomized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE754 floating-point configuration, as expected based on the final output accuracy.The other main cause of accuracy loss is from values that are too small to be encoded as a non-zerovalue in the chosen customized precision configuration. These values, although not critical duringaddition, cause significant problems when multiplied with a large value, since the output should beencoded as a non-zero value in the specific precision setting. We found that the weighted input isminimally impacted, until the precision is reduced low enough for the weight to become zero.While it may be intuitive based on these results to apply different customized precision settings tovarious stages of the neural network in order to mitigate the sudden loss in accuracy, the realizablegains of multi-precision configurations present significant challenges. The variability between unitswill cause certain units to be unused during specific layers of the neural network causing gains todiminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, theapplication specific hardware design is already an extensive process and multiple customized preci-sion configurations increases the difficulty of the hardware design and verification process.4.4 C USTOMIZED PRECISION SEARCHNow we evaluate our proposed customized precision search method. 
The goal of this method is tosignificantly reduce the required time to navigate the customized precision design space and stillprovide an optimal design choice in terms of speedup, limited by an accuracy constraint.Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which showsthe relationship between the normalized accuracy of each setting in the design space and the corre-lation between its last layer activations compared to those of the original NN. This model, althoughbuilt using all of the customized precision configurations from AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the modelmatches across networks and precision design choices (e.g., floating point versus fixed point), sincecreating this model for each DNN, individually, requires as much time as exhaustive search.Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-offcurves from our method compared to the ideal design points. We first obtain optimal results via7Under review as a conference paper at ICLR 2017Figure 11: The speedup resulting from searching for the fastest setting with less than 1% inferenceaccuracy degradation. All selected customized precision DNNs meet this accuracy constraint.exhaustive search. We present our search with a variable number of refinement iterations, wherewe evaluate the accuracy of the current design point and adjust the precision if necessary. To verifyrobustness, the accuracy models were generated using cross-validation where all configurations inthe DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFAR-NET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs,a tiny subset compared that needed for classification accuracy, some of which are even incorrectlyclassified by the original neural network. Thus, the cost of prediction using the model is negligible.We observe that, in all cases, the accuracy model combined with the evaluation of just two cus-tomized precision configurations provides the same result as the exhaustive search. Evaluating twodesigns out of 340 is 170 faster than exhaustively evaluating all designs. When only one con-figuration is evaluated instead of two (i.e. a further 50% reduction is search time), the selectedcustomized precision setting never violates the target accuracy, but concedes a small amount of per-formance. Finally, we note that our search mechanism, without evaluating inference accuracy forany of the design points, provides a representative prediction of the optimal customized precisionsetting. Although occasionally violating the target accuracy (i.e. the cases where the speedup ishigher than the exhaustive search), this prediction can be used to gauge the amenability of the NNto customized precision without investing any considerable amount of time in experimentation.Speedup. We present the final speedup produced by our search method in Figure 11 when thealgorithm is configured for 99% target accuracy and to use two samples for refinement. In allcases, the chosen customized precision configuration meets the targeted accuracy constraint. Inmost cases, we find that the larger networks require more precision (DNNs are sorted from left toright in descending order based on size). 
VGG requires less precision than expected, but VGG alsouses smaller convolution kernels than all of the other DNNs except LeNet-5.5 R ELATED WORKTo the best of our knowledge, our work is the first to examine the impact of numeric representationson the accuracy-efficiency trade-offs on large-scale, deployed DNNs with over half a million neu-rons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smallernetworks such as CIFARNET and LeNet-5 Cavigelli et al. (2015); Chen et al. (2014); Courbariauxet al. (2014); Du et al. (2014); Gupta et al. (2015); Muller & Indiveri (2015). Many of these works fo-cused on fixed-point computation due to the fixed-point representation working well on small-scaleneural networks. We find very different conclusions when considering production-ready DNNs.Other recent works have looked at alternative neural network implementations such as spiking neuralnetworks for more efficient hardware implementation Conti & Benini (2015); Diehl & Cook (2014).This is a very different computational model that requires redevelopment of standard DNNs, unlikeour proposed methodologies. Other works have proposed several approaches to improve perfor-mance and reduce energy consumption of deep neural networks by taking advantage of the fact thatDNNs usually contain redundancies Chen et al. (2015); Figurnov et al. (2015).6 C ONCLUSIONIn this work, we introduced the importance of carefully considering customized precision whenrealizing neural networks. We show that using the IEEE 754 single precision floating point repre-sentation in hardware results in surrendering substantial performance. On the other hand, picking aconfiguration that has lower precision than optimal will result in severe accuracy loss. By reconsid-ering the representation from the ground up in designing custom precision hardware and using oursearch technique, we find an average speedup across deployable DNNs, including GoogLeNet andVGG, of 7.6with less than 1% degradation in inference accuracy.8Under review as a conference paper at ICLR 2017 | SJIZu9_Sg | Review | 6: Marginally above acceptance threshold | The paper studies the impact of using customized number representations on accuracy, speed, and energy consumption of neural network inference. Several standard computer vision architectures including VGG and GoogleNet are considered for the experiments, and it is concluded that floating point representations are preferred over fixed point representations, and floating point numbers with about 14 bits are sufficient for the considered architectures resulting in a small loss in accuracy.
The paper provides a nice overview of floating- and fixed-point representations and focuses on an important aspect of deep learning that is not well studied. There are several aspects of the paper that could be improved, but overall, I lean toward weak accept, assuming that the authors address the issues below.
1- The paper does not make clear that it focuses only on neural network inference. Please include the word "inference" in the title / abstract to clarify this point, and mention that the findings of the paper do not necessarily apply to neural network training, as training dynamics could be different.
2- The paper does not discuss the possibility of adopting quantization tricks during training, which may result in the use of fewer bits at inference.
3- The paper is not clear about whether the running time and power consumption computations include all of the modules or only the multiply-accumulate units. Also, how accurate are these numbers, given different possible designs and the potential difference between simulation and production? Please elaborate on the details of the simulation in the paper.
4- The whole discussion about "efficient customized precision search" seems unimportant to me. When such important hardware considerations are at stake, even spending 20x the simulation time is not that significant. The exhaustive search process could easily be parallelized, and one might rather spend more time on simulation to find the exact best configuration rather than an approximation. That said, weak configurations could easily be filtered out after evaluating just a few examples.
5- Nvidia's Pascal GP100 GPU supports FP16. This should be discussed in the paper and relevant Nvidia papers / documents should be cited.
More comments:
- Parts of the paper discussing "efficient customized precision search" are not clear to me.
- As future work, the impact of number representations on batch normalization and recurrent neural networks could be studied.
| 3: The reviewer is fairly confident that the evaluation is correct |
BJ_MGwqlg | ICLR.cc/2017/conference | 2017 | Rethinking Numerical Representations for Deep Neural Networks | ["Parker Hill", "Babak Zamirai", "Shengshuo Lu", "Yu-Wei Chao", "Michael Laurenzano", "Mehrzad Samadi", "Marios Papaefthymiou", "Scott Mahlke", "Thomas Wenisch", "Jia Deng", "Lingjia Tang", "Jason Mars"] | With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6x with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration. | ["Deep learning"] |

ABSTRACT

With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6× with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.

1 INTRODUCTION

Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behind these successes is advances in computing infrastructure that enable large-scale deep learning—the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever-growing amount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation.

A set of core design decisions is common to the design of these infrastructures. One such critical choice is the numerical representation and precision used in the implementation of underlying storage and computation.
Several recent works have investigated the numerical representation for DNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). One recent work found that substantially lower precision can be used for training when the correct numerical rounding method is employed Gupta et al. (2015). Their work resulted in the design of a very energy-efficient DNN platform.

This work and other previous numerical representation studies for DNNs have either limited themselves to a small subset of the customized precision design space or drawn conclusions using only small neural networks. For example, the work of Gupta et al. (2015) evaluates 16-bit fixed-point and wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky & Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numeric representations. Exploring a limited customized precision design space inevitably results in designs lacking in energy efficiency and computational performance. Evaluating customized precision accuracy based on small neural networks requires the assumption that much larger, production-grade neural networks would operate comparably when subjected to the same customized precision.

In this work, we explore the accuracy-efficiency trade-off made available via specialized custom-precision hardware for inference and present a method to efficiently traverse this large design space to find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized precision settings for fixed-point and floating-point representations on accuracy and computational performance.

[Figure 1: A fixed-point representation. Hardware parameters include the total number of bits and the position of the radix point.]
[Figure 2: A floating-point representation. Hardware parameters include the number of mantissa and exponent bits, and the bias.]

We evaluate these customized precision configurations on large, state-of-the-art neural networks. By evaluating the full computational precision design space on a spectrum of these production-grade DNNs, we find that:

1. Precision requirements do not generalize across all neural networks. This prompts designers of future DNN infrastructures to carefully consider the applications that will be executed on their platforms, contrary to works that design for large networks and evaluate accuracy on small networks Cavigelli et al. (2015); Chen et al. (2014).
2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than previously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014). For example, we find that GoogLeNet requires on the order of 40 bits when implemented with fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.
3. Floating-point representations are more efficient than fixed-point representations when selecting optimal precision settings. For example, a 17-bit floating-point representation is acceptable for GoogLeNet, while over 40 bits are required for the fixed-point representation – a more expensive computation than the standard single-precision floating-point format. Current platform designers should reconsider the use of floating-point representations for DNN computations instead of the commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Du et al.
(2014); Muller & Indiveri (2015).

To make these conclusions on large-scale customized precision design readily actionable for DNN infrastructure designers, we propose and validate a novel technique to quickly search the large customized precision design space. This technique leverages the activations in the last layer to build a model to predict accuracy, based on the insight that these activations effectively capture the propagation of numerical error from computation. Using this method on deployable DNNs, including GoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that using these recommendations to introduce customized precision into a DNN accelerator fabric results in an average speedup of 7.6× with less than 1% degradation in inference accuracy.

2 CUSTOMIZED PRECISION HARDWARE

We begin with an overview of the available design choices in the representation of real numbers in binary and discuss how these choices impact hardware performance.

2.1 DESIGN SPACE

We consider three aspects of customized precision number representations. First, we contrast the high-level choice between fixed-point and floating-point representations. Fixed-point binary arithmetic is computationally identical to integer arithmetic, simply changing the interpretation of each bit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a real number separately. Floating-point calculations involve several steps absent in integer arithmetic. In particular, addition operations require aligning the mantissas of each operand. As a result, floating-point computation units are substantially larger, slower, and more complex than integer units.

In CPUs and GPUs, available sizes for both integer and floating-point calculations are fixed according to the data types supported by the hardware. Thus, the second aspect of precision customization we examine is customizing the number of bits used in representing floating-point and fixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and the assignment of bits to the mantissa and exponent in a floating-point value.

2.2 CUSTOMIZED PRECISION TYPES

In a fixed-point representation, we select the number of bits as well as the position of the radix point, which separates integer and fractional bits, as illustrated in Figure 1. A bit array, $x$, encoded in fixed point with the radix point at bit $l$ (counting from the right) represents the value $2^{-l}\sum_{i=0}^{N-1} 2^i x_i$.

[Figure 3: Floating-point multiply-accumulate (MAC) unit with various levels of detail: (a) the high-level mathematical operation, (b) the modules that form a floating-point MAC, and (c) the signal propagation of the unit.]

In contrast to floating point, fixed-point representations with a particular number of bits have a fixed level of precision. By varying the position of the radix point, we change the representable range. An example floating-point representation is depicted in Figure 2. As shown in the figure, there are three parameters to select when designing a floating-point representation: the bit-width of the mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissa and exponent control precision and dynamic range, respectively.
The exponent bias adjusts the offset of the exponent (which is itself represented as an unsigned integer) relative to zero to facilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, a floating-point format with $N_m$ mantissa bits, $N_e$ exponent bits, and a bias of $b$ encodes the value $2^{\left(\sum_{i=0}^{N_e-1} 2^i e_i\right) - b}\left(1 + \sum_{i=1}^{N_m} 2^{-i} m_i\right)$, where $m$ and $e$ are the segments of a bit array representing the mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be 1 and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e., float) comprises 23 mantissa bits, 8 exponent bits, and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specific values, such as zero and infinity.

Both fixed-point and floating-point representations have limitations in terms of the precision and the dynamic range available given particular representations, manifesting themselves computationally as rounding and saturation errors. These errors propagate through the deep neural network in a way that is difficult to estimate holistically, prompting experimentation on the DNN itself.

2.3 HARDWARE IMPLICATIONS

The key hardware building block for implementing DNNs is the multiply-accumulate (MAC) operation. The MAC operation implements the sum-of-products operation that is fundamental to the activation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3 (a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations. As seen in the figure, floating-point addition operations involve a number of sub-components that compare exponents, align mantissas, perform the addition, and normalize the result. Nearly all of the sub-components of the MAC unit scale in speed, power, and area with the bit width.

Reducing the floating-point bit width improves hardware performance in two ways. First, reduced bit width makes a computation unit faster. Binary arithmetic computations involve chains of logic operations that typically grow at least logarithmically, and sometimes linearly (e.g., the propagation of carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces the length of these chains, allowing the logic to operate at a higher clock frequency. Second, reduced bit width makes a computation unit smaller and requires less energy, typically linearly in the number of bits. The circuit delay and area are shown in Figure 4 as the mantissa bit widths are varied. As shown in the figure, scaling the length of the mantissa provides substantial opportunity because it defines the size of the internal addition unit. Similar trends follow for bit-widths in other representations. When a unit is smaller, more replicas can fit within the same chip area and power budget, all of which can operate in parallel. Hence, for computations like those in DNNs, where ample parallelism is available, area reductions translate into proportional performance improvements.

This trend of bit width versus speed, power, and area is applicable to every computation unit in hardware DNN implementations.
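To make the fixed-point and floating-point encodings defined in Section 2.2 concrete, the sketch below decodes bit patterns under exactly those two formulas. It is a minimal illustration, not the authors' implementation: the bit-list interface and the bias value of 15 are assumptions, and IEEE-754-style special encodings (zero, infinity) are omitted.

```python
def decode_fixed(bits, l):
    """Fixed point: value = 2**-l * sum_i 2**i * x_i, with bits given MSB
    first, so x_i indexes the bit array from the right."""
    as_int = sum(b << i for i, b in enumerate(reversed(bits)))
    return as_int * 2.0 ** -l

def decode_float(sign, exp_bits, man_bits, bias):
    """Custom float: (-1)**sign * 2**(E - bias) * (1 + sum_i 2**-i * m_i).
    The leading mantissa bit is implicit; special values are ignored."""
    e = sum(b << i for i, b in enumerate(reversed(exp_bits)))
    frac = 1.0 + sum(b * 2.0 ** -(i + 1) for i, b in enumerate(man_bits))
    return (-1) ** sign * 2.0 ** (e - bias) * frac

# Figure 1's bit pattern 11001.01110 (radix point at bit 5):
print(decode_fixed([1, 1, 0, 0, 1, 0, 1, 1, 1, 0], l=5))          # 25.4375
# Figure 2's fields (mantissa 1.01101, exponent 10011), assumed bias of 15:
print(decode_float(0, [1, 0, 0, 1, 1], [0, 1, 1, 0, 1], bias=15))  # 22.5
```

Varying `l`, `bias`, and the bit counts reproduces the range-versus-precision trade-offs discussed above.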
Thus, in designing hardware that uses customized representations there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Our goal is to use precision that delivers sufficient accuracy while attaining large improvements in power, area, and speed over standard floating-point designs.

[Figure 4: Delay and area implications of mantissa width, normalized to a 32-bit single-precision MAC with 23 mantissa bits.]
[Figure 5: Speedup calculation with a fixed area budget (e.g., delay 10τ → 4τ and parallelism 1v → 4v yields a 10× speedup). The speedup exploits the improved function delay and parallelism.]

3 METHODOLOGY

We describe the methodology we use to evaluate the customized precision design space, using image classification tasks of varying complexity as a proxy for computer vision applications. We evaluate DNN implementations using several metrics: classification accuracy, speedup, and energy savings relative to a baseline custom hardware design that uses single-precision floating-point representations. Using the results of this analysis, we propose and validate a search technique to efficiently determine the correct customized precision design point.

3.1 ACCURACY

We evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to perform calculations with arbitrary fixed-point and floating-point formats. We continue to store values as C floats in Caffe, but truncate the mantissa and exponent to the desired format after each arithmetic operation. Accuracy, using a set of test inputs disjoint from the training input set, is then measured by running the forward pass of a DNN model with the customized format and comparing the outputs with the ground truth. We use the standard accuracy metrics that accompany the dataset for each DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy, and for ImageNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percent of inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracy represents the percent of inputs that the DNN predicts correctly within five attempts.

3.2 EFFICIENCY

We quantify the efficiency advantages of customized floating-point representations by designing a floating-point MAC unit in each candidate precision and determining its silicon area and delay characteristics. We then report speedup and energy savings relative to a baseline custom hardware implementation of a DNN that uses standard single-precision floating-point computations. We design each variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industry-standard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The tools report the power, delay, and area characteristics of each precision variant. As shown in Figure 5, we compute speedups and energy savings relative to the standardized IEEE-754 floating-point representation considering both the clock frequency advantage and the improved parallelism due to the area reduction of the narrower bit-width MAC units. This allows customized precision designs to yield a quadratic improvement in total system throughput.

3.3 EFFICIENT CUSTOMIZED PRECISION SEARCH

To exploit the benefits of customized precision, a mechanism to select the correct configuration must be introduced.
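Before turning to that mechanism, the evaluation methodology of Sections 3.1 and 3.2 can be summarized in a short sketch. This is a simplified model under stated assumptions, not the modified-Caffe implementation: `quantize` truncates a value to $N_m$ mantissa and $N_e$ exponent bits after each arithmetic operation (flush-to-zero underflow and saturating overflow are assumptions), and `speedup` applies the fixed-area-budget model of Figure 5.

```python
import math

def quantize(x, n_man, n_exp, bias):
    """Truncate x to a custom float with n_man mantissa bits and n_exp
    exponent bits (Section 3.1). Underflow flushes to zero; overflow
    saturates to the largest representable magnitude."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    e = math.floor(math.log2(abs(x)))
    e_max = (2 ** n_exp - 1) - bias
    if e < -bias:
        return 0.0
    if e > e_max:
        return sign * (2.0 - 2.0 ** -n_man) * 2.0 ** e_max
    frac = abs(x) / 2.0 ** e                          # in [1, 2)
    frac = math.floor(frac * 2 ** n_man) / 2 ** n_man  # drop mantissa bits
    return sign * frac * 2.0 ** e

def mac(acc, w, x, fmt):
    """Multiply-accumulate with truncation after each arithmetic op."""
    return quantize(acc + quantize(w * x, *fmt), *fmt)

def speedup(delay_baseline, delay_custom, area_baseline, area_custom):
    """Figure 5: frequency gain times parallelism gain for a fixed area."""
    return (delay_baseline / delay_custom) * (area_baseline / area_custom)

print(speedup(10, 4, 4, 1))  # the 10x example from Figure 5
```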
There are hundreds of designs among floating-point and fixed-point formats, since designs vary by the total bit width and the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to selecting the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. This strategy requires substantial computational resources that are proportional to the size of the network and the variety of output classifications. We describe our technique that significantly reduces the time required to search for the correct configuration in order to facilitate the use of customized precision.

The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden by evaluating only the NN's final accuracy metric. Thus, instead of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layers of two configurations by calculating the linear coefficient of determination between the last-layer activations.

[Figure 6: The inference accuracy versus speedup design space for each of the neural networks (GoogLeNet, VGG, AlexNet, CIFARNET, LeNet-5; custom floating point, custom fixed point, and IEEE 754 single precision), showing substantial computational performance improvements for minimal accuracy degradation when customized precision floating-point formats are used.]

A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy for a subset of configurations is evaluated. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format.

4 EXPERIMENTS

In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware.
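As a reference point for these experiments, the search procedure of Section 3.3 condenses to the sketch below. All names and the interface are hypothetical, the linear-model coefficients `a` and `b` stand in for a fit obtained on other networks, and the ±1-bit refinement loop is reduced to a single accuracy check; this illustrates the technique rather than reproducing the authors' code.

```python
import numpy as np

def r_squared(ref_acts, test_acts):
    """Coefficient of determination between last-layer activations of the
    exact network and a custom-precision variant (a handful of inputs)."""
    r = np.corrcoef(np.ravel(ref_acts), np.ravel(test_acts))[0, 1]
    return 0.0 if np.isnan(r) else r ** 2

def select_format(configs, last_layer, accuracy, a, b, target):
    """configs: (format, speedup) pairs sorted by decreasing speedup.
    last_layer(fmt): last-layer activations under format fmt (None = float32).
    accuracy(fmt): measured inference accuracy on a small validation subset."""
    ref = last_layer(None)
    for fmt, sp in configs:
        predicted = a * r_squared(ref, last_layer(fmt)) + b
        if predicted >= target and accuracy(fmt) >= target:
            return fmt, sp          # fastest format meeting the threshold
    return None
```

In the paper's terms, measuring accuracy for only one or two candidate formats this way replaces an exhaustive sweep over hundreds of designs.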
We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.

4.1 EXPERIMENTAL SETUP

We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet Szegedy et al. (2015), VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), CIFARNET Krizhevsky & Hinton (2009), and LeNet-5 LeCun et al. (1998). The implementations and pre-trained weights for these DNNs were taken from Caffe Jia et al. (2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space. In these cases we use a randomly selected 1% of the validation set to make the experiments tractable.

4.2 ACCURACY VERSUS EFFICIENCY TRADE-OFFS

To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade-off is shown in Figure 6. This figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single-precision representation (i.e., the original accuracy with 1× speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively. For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single-precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.

[Figure 7: The speedup and energy savings as the two format parameters (mantissa and exponent bits for floating point; integer and fraction bits for fixed point) are adjusted for the custom floating-point and fixed-point representations.
The marked area denotes configurations where the total loss in AlexNet accuracy is less than 1%.]

[Figure 8: The accumulation of weighted neuron inputs for a specific neuron with various customized precision DNNs as well as the IEEE 754 single-precision floating-point configuration for reference ([1] IEEE 754 Single Prec., [2] Custom FL M=8/E=6, [3] Custom FL M=2/E=14, [4] Custom FL M=10/E=4, [5] Custom FI L=8/R=6). FL and FI abbreviate floating point and fixed point, respectively. The format parameters are as follows: M=mantissa, E=exponent, L=bits left of radix point, R=bits right of radix point.]

[Figure 9: The linear fit from the correlation between normalized accuracy and last-layer activations of the exact and customized precision DNNs.]

By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impact the customized precision flexibility of the network. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.

The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7. This figure shows the speedup and energy improvements over the single-precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no less than a 1% degradation in accuracy from the IEEE 754 single-precision accuracy on classification in AlexNet).

The fastest and most energy-efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits is used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2× speedup and a 3.4× savings in energy over the single-precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, say 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7× speedup and 3.0× energy savings.

4.3 SOURCES OF ACCUMULATION ERROR

In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced-precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for the customized precision settings to match.
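The saturating behavior visible in Figure 8 can be reproduced with a few lines of simulation. The sketch below is illustrative only: the random stand-in inputs, the truncation rule, and the symmetric clipping are assumptions, with the format matching the figure's 16-bit fixed-point case (8 bits on each side of the radix point); the failure modes it exhibits are analyzed next.

```python
import numpy as np

def fixed_add(acc, x, int_bits=8, frac_bits=8):
    """Saturating fixed-point add: values stay on a 2**-frac_bits grid
    and clip just below 2**int_bits (256 here, so sums past 255 clip)."""
    step = 2.0 ** -frac_bits
    hi = 2.0 ** int_bits - step
    s = np.floor((acc + x) / step) * step
    return float(np.clip(s, -hi, hi))

rng = np.random.default_rng(0)
inputs = rng.normal(loc=0.5, scale=5.0, size=3000)  # stand-in weighted inputs
exact = fixed = 0.0
for v in inputs:
    exact += v
    fixed = fixed_add(fixed, v)
# Once the running sum crosses the representable maximum, later positive
# contributions are discarded and `fixed` diverges from `exact`.
print(exact, fixed)
```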
We find two causes of error in the customized precision fixed-point and floating-point representations: saturation and excessive rounding. In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure. After reaching saturation, the positive values are discarded and the final output is unpredictable. Although floating-point representations do not saturate as easily, the floating-point configuration with 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs. Again, the lost information from saturation causes an unpredictable final output.

[Figure 10: The speedup achieved by selecting the customized precision using an exhaustive search (i.e., the ideal design) and prediction using the accuracy model with accuracy evaluated for some number of configurations (model + X samples). The floating-point (FL) and fixed-point (FI) results are shown in the top and bottom rows, respectively. The model with two evaluated designs produces the same configurations, but requires <0.6% of the search time.]

For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration's running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy.

The other main cause of accuracy loss is values that are too small to be encoded as a non-zero value in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted until the precision is reduced low enough for the weight to become zero.

While it may be intuitive, based on these results, to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, the realizable gains of multi-precision configurations present significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network, causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, application-specific hardware design is already an extensive process, and multiple customized precision configurations increase the difficulty of the hardware design and verification process.

4.4 CUSTOMIZED PRECISION SEARCH

Now we evaluate our proposed customized precision search method.
The goal of this method is to significantly reduce the time required to navigate the customized precision design space while still providing an optimal design choice in terms of speedup, limited by an accuracy constraint.

Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last-layer activations and those of the original NN. This model, although built using all of the customized precision configurations from the AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN individually requires as much time as exhaustive search.

[Figure 11: The speedup resulting from searching for the fastest setting with less than 1% inference accuracy degradation. All selected customized precision DNNs meet this accuracy constraint.]

Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via exhaustive search. We present our search with a variable number of refinement iterations, where we evaluate the accuracy of the current design point and adjust the precision if necessary. To verify robustness, the accuracy models were generated using cross-validation, where all configurations in the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFARNET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs, a tiny subset compared to that needed for classification accuracy, some of which are even incorrectly classified by the original neural network. Thus, the cost of prediction using the model is negligible.

We observe that, in all cases, the accuracy model combined with the evaluation of just two customized precision configurations provides the same result as the exhaustive search. Evaluating two designs out of 340 is 170× faster than exhaustively evaluating all designs. When only one configuration is evaluated instead of two (i.e., a further 50% reduction in search time), the selected customized precision setting never violates the target accuracy, but concedes a small amount of performance. Finally, we note that our search mechanism, without evaluating inference accuracy for any of the design points, provides a representative prediction of the optimal customized precision setting. Although occasionally violating the target accuracy (i.e., the cases where the speedup is higher than the exhaustive search), this prediction can be used to gauge the amenability of the NN to customized precision without investing any considerable amount of time in experimentation.

Speedup. We present the final speedup produced by our search method in Figure 11 when the algorithm is configured for a 99% accuracy target and two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order based on size).
VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5.

5 RELATED WORK

To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs of large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 Cavigelli et al. (2015); Chen et al. (2014); Courbariaux et al. (2014); Du et al. (2014); Gupta et al. (2015); Muller & Indiveri (2015). Many of these works focused on fixed-point computation because the fixed-point representation works well on small-scale neural networks. We reach very different conclusions when considering production-ready DNNs. Other recent works have looked at alternative neural network implementations, such as spiking neural networks, for more efficient hardware implementation Conti & Benini (2015); Diehl & Cook (2014). This is a very different computational model that requires redevelopment of standard DNNs, unlike our proposed methodologies. Other works have proposed several approaches to improve performance and reduce energy consumption of deep neural networks by taking advantage of the fact that DNNs usually contain redundancies Chen et al. (2015); Figurnov et al. (2015).

6 CONCLUSION

In this work, we demonstrated the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single-precision floating-point representation in hardware surrenders substantial performance. On the other hand, picking a configuration with lower precision than optimal results in severe accuracy loss. By reconsidering the representation from the ground up when designing custom-precision hardware and by using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6× with less than 1% degradation in inference accuracy. | SyusKkUNx | Can be improved | 5: Marginally below acceptance threshold | The paper provides a first study of customized precision hardware for large convolutional networks, namely AlexNet, VGG, and GoogLeNet. It shows that it is possible to achieve larger speed-ups (up to 7x) using floating-point precision with fewer bits, and that this is better than using fixed-point representations.
The paper also explores predicting custom floating-point precision parameters directly from the neural network activations, avoiding exhaustive search, but I could not follow this part. Only the activations of the last layer are evaluated, but on what data? On the whole validation set? Why would this be faster than computing the classification accuracy?
The results should be useful for hardware manufacturers, but with a catch. All popular convolutional networks now use batch normalization, while none of the evaluated ones do. It may well be that the conclusions of this study will be completely different for batch-normalized networks, and that fixed-point representations are best there, but that remains to be seen. It seems like something worth exploring.
Overall there is not a great deal of novelty here beyond a useful study of numerical precision trade-offs at neural network test time. Training time would also be of interest: there are a lot more researchers trying to train new networks fast than trying to evaluate old ones fast.
I am also no expert in digital logic design, but my educated guess is that this paper is marginally below the acceptance threshold. | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
BJ_MGwqlg | ICLR.cc/2017/conference | 2017 | Rethinking Numerical Representations for Deep Neural Networks | ["Parker Hill", "Babak Zamirai", "Shengshuo Lu", "Yu-Wei Chao", "Michael Laurenzano", "Mehrzad Samadi", "Marios Papaefthymiou", "Scott Mahlke", "Thomas Wenisch", "Jia Deng", "Lingjia Tang", "Jason Mars"] | With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as it relates to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6x with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration. | ["Deep learning"] | ABSTRACTWith ever-increasing computational demand for deep learning, it is critical to in-vestigate the implications of the numeric representation and precision of DNNmodel weights and activations on computational efficiency. In this work, we ex-plore unconventional narrow-precision floating-point representations as it relatesto inference accuracy and efficiency to steer the improved design of future DNNplatforms. We show that inference using these custom numeric representationson production-grade DNNs, including GoogLeNet and VGG, achieves an averagespeedup of 7.6with less than 1% degradation in inference accuracy relative toa state-of-the-art baseline platform representing the most sophisticated hardwareusing single-precision floating point. To facilitate the use of such customized pre-cision, we also present a novel technique that drastically reduces the time requiredto derive the optimal precision configuration.1 I NTRODUCTIONRecently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide arrayof AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannunet al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic inno-vations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behindthese successes are advances in computing infrastructure that enable large-scale deep learning—thetraining and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al.(2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first break-through of deep learning for image classification Krizhevsky et al. (2012). Given the ever growingamount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platformsto support faster, more resource-efficient DNN computation.A set of core design decisions are common to the design of these infrastructures. One such criti-cal choice is the numerical representation and precision used in the implementation of underlyingstorage and computation. 
Several recent works have investigated the numerical representation forDNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). Onerecent work found that substantially lower precision can be used for training when the correct nu-merical rounding method is employed Gupta et al. (2015). Their work resulted in the design of avery energy-efficient DNN platform.This work and other previous numerical representation studies for DNNs have either limited them-selves to a small subset of the customized precision design space or drew conclusions using onlysmall neural networks. For example, the work from Gupta et al. 2015 evaluates 16-bit fixed-pointand wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky& Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numericrepresentations. Exploring a limited customized precision design space inevitably results in designslacking in energy efficiency and computational performance. Evaluating customized precision ac-curacy based on small neural networks requires the assumption that much larger, production-gradeneural networks would operate comparably when subjected to the same customized precision.In this work, we explore the accuracy-efficiency trade-off made available via specialized custom-precision hardware for inference and present a method to efficiently traverse this large design spaceto find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized1Under review as a conference paper at ICLR 2017integer fraction11001.01110||||||||||... ... Figure 1: A fixed-point representation. Hard-ware parameters include the total number of bitsand the position of the radix point.x2mantissa1.01101|||||...exponent10011|||||... - biasFigure 2: A floating-point representation. Hard-ware parameters include the number of mantissaand exponent bits, and the bias.precision settings for fixed-point and floating-point representations on accuracy and computationalperformance. We evaluate these customized precision configurations on large, state-of-the-art neu-ral networks. By evaluating the full computational precision design space on a spectrum of theseproduction-grade DNNs, we find that:1. Precision requirements do not generalize across all neural networks. This prompts designersof future DNN infrastructures to carefully consider the applications that will be executed ontheir platforms, contrary to works that design for large networks and evaluate accuracy on smallnetworks Cavigelli et al. (2015); Chen et al. (2014).2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than pre-viously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al.(2014). For example, we find that GoogLeNet requires on the order of 40 bits when implementedwith fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.3. Floating-point representations are more efficient than fixed-point representations when selectingoptimal precision settings. For example, a 17-bit floating-point representation is acceptable forGoogLeNet, while over 40 bits are required for the fixed-point representation – a more expensivecomputation than the standard single precision floating-point format. Current platform designersshould reconsider the use of the floating-point representations for DNN computations instead ofthe commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Duet al. 
(2014); Muller & Indiveri (2015).To make these conclusions on large-scale customized precision design readily actionable for DNNinfrastructure designers, we propose and validate a novel technique to quickly search the large cus-tomized precision design space. This technique leverages the activations in the last layer to builda model to predict accuracy based on the insight that these activations effectively capture the prop-agation of numerical error from computation. Using this method on deployable DNNs, includingGoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that usingthese recommendations to introduce customized precision into a DNN accelerator fabric results inan average speedup of 7.6 with less than 1% degradation in inference accuracy.2 C USTOMIZED PRECISION HARDWAREWe begin with an overview of the available design choices in the representation of real numbers inbinary and discuss how these choices impact hardware performance.2.1 D ESIGN SPACEWe consider three aspects of customized precision number representations. First, we contrast thehigh-level choice between fixed-point and floating-point representations. Fixed-point binary arith-metic is computationally identical to integer arithmetic, simply changing the interpretation of eachbit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a realnumber separately. Floating-point calculations involve several steps absent in integer arithmetic. Inparticular, addition operations require aligning the mantissas of each operand. As a result, floating-point computation units are substantially larger, slower, and more complex than integer units.In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed accord-ing to the data types supported by the hardware. Thus, the second aspect of precision customizationwe examine is to consider customizing the number of bits used in representing floating-point andfixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and assignmentof bits to the mantissa and exponent in a floating-point value.2.2 C USTOMIZED PRECISION TYPESIn a fixed-point representation, we select the number of bits as well as the position of the radix point,which separates integer and fractional bits, as illustrated in Figure 1. A bit array, x, encoded in fixedpoint with the radix point at bit l(counting from the right) represents the value 2lPN1i=02ixi.2Under review as a conference paper at ICLR 2017Sign Exponent Mantissa Sign Exponent MantissaComparatorSign Exponent Mantissa8 7 6 5 4 3 2 1 0Delay+×FSMControllerAlignmentAlignmentAddition/SubtractionAlignmentIncrement /Decrement8 7 6 5 4 3 2 1 0(a) (b) (c)Figure 3: Floating point multiply-accumulate (MAC) unit with various levels of detail: (a) the highlevel mathematical operation, (b) the modules that form a floating point MAC, and (c) the signalpropagation of the unit.In contrast to floating point, fixed-point representations with a particular number of bits have a fixedlevel of precision. By varying the position of the radix point, we change the representable range.An example floating-point representation is depicted in Figure 2. As shown in the figure, thereare three parameters to select when designing a floating-point representation: the bit-width ofthe mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissaand exponent control precision and dynamic range, respectively. 
The exponent bias adjusts theoffset of the exponent (which is itself represented as an unsigned integer) relative to zero to fa-cilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, afloating-point format with Nmmantissa bits, Neexponent bits, and a bias of b, encodes the value2(PNe1i=02iei)b(1 +PNmi=12imi), where mandeare the segments of a bit array representingthe mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be1and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e. float ) comprises 23 mantissa bits, 8 exponent bits,and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specificvalues, such as zero and infinity.Both fixed-point and floating-point representations have limitations in terms of the precision and thedynamic ranges available given particular representations, manifesting themselves computationallyas rounding and saturation errors. These errors propagate through the deep neural network in a waythat is difficult to estimate holistically, prompting experimentation on the DNN itself.2.3 H ARDWARE IMPLICATIONSThe key hardware building block for implementing DNNs is the multiply-accumulate (MAC) op-eration. The MAC operation implements the sum-of-products operation that is fundamental to theactivation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3(a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations.As seen in the figure, floating-point addition operations involve a number of sub-components thatcompare exponents, align mantissas, perform the addition, and normalize the result. Nearly all ofthe sub-components of the MAC unit scale in speed, power, and area with the bit width.Reducing the floating-point bit width improves hardware performance in two ways. First, reducedbit width makes a computation unit faster. Binary arithmetic computations involve chains of logicoperations that typically grows at least logarithmically, and sometimes linearly (e.g., the propagationof carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces thelength of these chains, allowing the logic to operate at a higher clock frequency. Second, reducedbit width makes a computation unit smaller and require less energy, typically linearly in the numberof bits. The circuit delay and area is shown in Figure 4 when the mantissa bit widths are varied. Asshown in the figure, scaling the length of the mantissa provides substantial opportunity because itdefines the size of the internal addition unit. Similar trends follow for bit-widths in other represen-tations. When a unit is smaller, more replicas can fit within the same chip area and power budget,all of which can operate in parallel. Hence, for computations like those in DNNs, where ampleparallelism is available, area reductions translate into proportional performance improvement.This trend of bit width versus speed, power, and area is applicable to every computation unit inhardware DNN implementations. 
Thus, in designing hardware that uses customized representations3Under review as a conference paper at ICLR 20175 10 15 200.00.20.40.60.81.0Normalized AreaNormalized DelayMantissa BitsFigure 4: Delay and area implications of man-tissa width, normalized to a 32-bit Single Preci-sion MAC with 23 mantissa bits.32-bit MAC11-bitMAC11-bitMAC11-bitMAC11-bitMACDelay: 10τDelay: 4τParallelism: 1v Parallelism: 4v1v / 10τ 4v / 4τ10x speedupFigure 5: Speedup calculation with a fixed areabudget. The speedup exploits the improvedfunction delay and parallelism.there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Ourgoal is to use precision that delivers sufficient accuracy while attaining large improvements in power,area, and speed over standard floating-point designs.3 M ETHODOLOGYWe describe the methodology we use to evaluate the customized precision design space, using imageclassification tasks of varying complexity as a proxy for computer vision applications. We evaluateDNN implementations using several metrics, classification accuracy, speedup, and energy savingsrelative to a baseline custom hardware design that uses single-precision floating-point representa-tions. Using the results of this analysis, we propose and validate a search technique to efficientlydetermine the correct customized precision design point.3.1 A CCURACYWe evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to performcalculations with arbitrary fixed-point and floating-point formats. We continue to store values as Cfloat s in Caffe, but truncate the mantissa and exponent to the desired format after each arithmeticoperation. Accuracy, using a set of test inputs disjoint from the training input set, is then measuredby running the forward pass of a DNN model with the customized format and comparing the out-puts with the ground truth. We use the standard accuracy metrics that accompany the dataset foreach DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy and for Ima-geNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percentof inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracyrepresents the percent of inputs that DNN predicts correctly after five attempts.3.2 E FFICIENCYWe quantify the efficiency advantages of customized floating-point representations by designing afloating-point MAC unit in each candidate precision and determining its silicon area and delay char-acteristics. We then report speedup and energy savings relative to a baseline custom hardware im-plementation of a DNN that uses standard single-precision floating-point computations. We designeach variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industrystandard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The toolsreport the power, delay, and area characteristics of each precision variant. As shown in Figure 5,we compute speedups and energy savings relative to the standardized IEEE-754 floating-point rep-resentation considering both the clock frequency advantage and improved parallelism due to areareduction of the narrower bit-width MAC units. This allows customized precision designs to yield aquadratic improvement in total system throughput.3.3 E FFICIENT CUSTOMIZED PRECISION SEARCHTo exploit the benefits of customized precision, a mechanism to select the correct configurationmust be introduced. 
There are hundreds of designs among floating-point and fixed-point formatsdue to designs varying by the total bit width and the allocation of those bits. This spectrum ofdesigns strains the ability to select an optimal configuration. A straightforward approach to selectthe customized precision design point is to exhaustively compute the accuracy of each design witha large number of neural network inputs. This strategy requires substantial computational resourcesthat are proportional to the size of the network and variety of output classifications. We describe ourtechnique that significantly reduces the time required to search for the correct configuration in orderto facilitate the use of customized precision.The key insight behind our search method is that customized precision impacts the underlying in-ternal computation, which is hidden by evaluating only the NN final accuracy metric. Thus, instead4Under review as a conference paper at ICLR 20170x5x10x15x20x25xSpeedup0%20%40%60%80%100%Accuracy(a) GoogLeNet●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●0x5x10x15x20x25xSpeedup0%20%40%60%80%100%Accuracy(b) VGG●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●0x5x10x15x20x25xSpeedup0%20%40%60%80%100%Accuracy(c) AlexNet●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●0x5x10x15x20x25xSpeedup0%20%40%60%80%100%Accuracy(d) CIFARNET●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●0x5x10x15x20x25xSpeedup0%20%40%60%80%100%Accuracy(e) LeNet−5●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●● ●●Custom Floating Point Custom Fixed Point IEEE 754 Single Prec. Figure 6: The inference accuracy versus speedup design space for each of the neural networks,showing substantial computational performance improvements for minimal accuracy degradationwhen customized precision floating-point formats are used.of comparing the final accuracy generated by networks with different precision configurations, wecompare the original NN activations to the customized precision activations. This circumvents theneed to evaluate the large number of inputs required to produce representative neural network accu-racy. Furthermore, instead of examining all of the activations, we only analyze the last layer, sincethe last layer captures the usable output from the neural network as well as the propagation of lostaccuracy. Our method summarizes the differences between the last layer of two configurations bycalculating the linear coefficient of determination between the last layer activations.A method to translate the coefficient of determination to a more desirable metric, such as end-to-endinference accuracy, is necessary. We find that a linear model provides such a transformation. Thecustomized precision setting with the highest speedup that meets a specified accuracy threshold isthen selected. In order to account for slight inaccuracies in the model, inference accuracy for asubset of configurations is evaluated. If the configuration provided by the accuracy model resultsin insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if theaccuracy threshold is met, then a bit is removed from the customized precision format.4 E XPERIMENTSIn this section, we evaluate five common neural networks spanning a range of sizes and depths in thecontext of customized precision hardware. 
4 EXPERIMENTS

In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.

4.1 EXPERIMENTAL SETUP

We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet (Szegedy et al., 2015), VGG (Simonyan & Zisserman, 2014), AlexNet (Krizhevsky et al., 2012), CIFARNET (Krizhevsky & Hinton, 2009), and LeNet-5 (LeCun et al., 1998). The implementations and pre-trained weights for these DNNs were taken from Caffe (Jia et al., 2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for the GoogLeNet and VGG experiments involving the entire design space; in those cases we use a randomly selected 1% of the validation set to make the experiments tractable.

4.2 ACCURACY VERSUS EFFICIENCY TRADE-OFFS

To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade-off is shown in Figure 6, which plots the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single-precision representation (i.e., the original accuracy with 1x speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively.

For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single-precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.

[Figure 7: The speedup and energy savings as the two format parameters are adjusted, for custom floating point (mantissa bits vs. exponent bits; panels (a) speedup and (c) energy) and custom fixed point (integer bits vs. fraction bits; panels (b) speedup and (d) energy). The marked area denotes configurations where the total loss in AlexNet accuracy is less than 1%.]
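Once the design space has been swept as in Figures 6 and 7, picking the operating point at the bottom-left corner of the acceptable region reduces to a small selection routine; the tuple layout below is our own illustrative representation of the sweep results.

```python
def fastest_acceptable(design_points, baseline_accuracy, max_degradation=0.01):
    """design_points: iterable of (speedup, accuracy, config) tuples from an
    exhaustive sweep. Returns the fastest config whose accuracy stays within
    `max_degradation` of the single-precision baseline, or None."""
    acceptable = [(speedup, cfg) for speedup, acc, cfg in design_points
                  if acc >= baseline_accuracy - max_degradation]
    if not acceptable:
        return None
    return max(acceptable, key=lambda t: t[0])[1]
```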
[Figure 8: The accumulation of weighted neuron inputs for a specific neuron under various customized precision DNNs, with the IEEE 754 single-precision configuration for reference (series: [1] IEEE 754 single precision, [2] custom FL M=8/E=6, [3] custom FL M=2/E=14, [4] custom FL M=10/E=4, [5] custom FI L=8/R=6). FL and FI abbreviate floating point and fixed point; M = mantissa bits, E = exponent bits, L = bits left of the radix point, R = bits right of the radix point.]

[Figure 9: The linear fit from the correlation between normalized accuracy and the last-layer activations of the exact and customized precision DNNs.]

By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impact the network's flexibility with respect to customized precision. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute, as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.

The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7, which shows the speedup and energy improvements over the single-precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) parts are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% of normalized AlexNet accuracy (i.e., no more than a 1% degradation from the IEEE 754 single-precision classification accuracy on AlexNet).

The fastest and most energy-efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits is used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2x speedup and a 3.4x savings in energy over the single-precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, e.g., 0.3% accuracy degradation, the representation with one additional mantissa bit can be used, which achieves a 5.7x speedup and 3.0x energy savings.

4.3 SOURCES OF ACCUMULATION ERROR

In order to understand how customized precision degrades DNN accuracy across numeric representations, we examine the impact of various reduced-precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for the customized precision settings to match.
We find two causes of error in the customized precision fixed-point and floating-point representations: saturation and excessive rounding. In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure.

[Figure 10: The speedup achieved by selecting the customized precision using an exhaustive search (the ideal design) versus prediction using the accuracy model, with accuracy evaluated for some number of configurations (model + X samples); floating-point (FL) results are shown in the top row and fixed-point (FI) results in the bottom row. The model with two evaluated designs produces the same configurations but requires <0.6% of the search time.]

After reaching saturation, the positive values are discarded and the final output is unpredictable. Although floating-point representations do not saturate as easily, the floating-point configuration with 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs. Again, the information lost to saturation causes an unpredictable final output.

For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration's running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy.

The other main cause of accuracy loss is values that are too small to be encoded as non-zero in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted until the precision is reduced low enough for the weight to become zero.

While it may be intuitive, based on these results, to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, the realizable gains of multi-precision configurations present significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network, causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, application-specific hardware design is already an extensive process, and multiple customized precision configurations increase the difficulty of the hardware design and verification process.
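The fixed-point saturation behaviour of Figure 8 is easy to reproduce with a toy saturating accumulator; the signed range and round-to-nearest conversion in this sketch are our assumptions.

```python
def fixed_point_accumulate(values, int_bits=8, frac_bits=8):
    """Accumulate with a saturating fixed-point register (here 16 bits with
    the radix point in the centre). Once the running sum hits the clamp,
    further positive contributions are discarded, as described above."""
    scale = 1 << frac_bits
    hi = (1 << (int_bits + frac_bits - 1)) - 1   # assumed signed raw range
    lo = -(1 << (int_bits + frac_bits - 1))
    acc = 0
    for v in values:
        acc = max(lo, min(hi, acc + int(round(v * scale))))  # saturate
    return acc / scale
```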
4.4 CUSTOMIZED PRECISION SEARCH

Now we evaluate our proposed customized precision search method. The goal of this method is to significantly reduce the time required to navigate the customized precision design space while still providing an optimal design choice in terms of speedup, subject to an accuracy constraint.

Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last-layer activations and those of the original NN. This model, although built using all of the customized precision configurations from the AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN individually would require as much time as exhaustive search.

Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via exhaustive search. We present our search with a variable number of refinement iterations, where we evaluate the accuracy of the current design point and adjust the precision if necessary. To verify robustness, the accuracy models were generated using cross-validation, where all configurations in the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFARNET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs, a tiny subset compared to that needed for classification accuracy, some of which are even incorrectly classified by the original neural network. Thus, the cost of prediction using the model is negligible.

[Figure 11: The speedup resulting from searching for the fastest setting with less than 1% inference accuracy degradation; all selected customized precision DNNs meet this accuracy constraint.]

We observe that, in all cases, the accuracy model combined with the evaluation of just two customized precision configurations provides the same result as the exhaustive search. Evaluating two designs out of 340 is 170x faster than exhaustively evaluating all designs. When only one configuration is evaluated instead of two (i.e., a further 50% reduction in search time), the selected customized precision setting never violates the target accuracy, but concedes a small amount of performance. Finally, we note that our search mechanism, without evaluating inference accuracy for any of the design points, provides a representative prediction of the optimal customized precision setting. Although it occasionally violates the target accuracy (i.e., the cases where the speedup is higher than that of the exhaustive search), this prediction can be used to gauge the amenability of the NN to customized precision without investing any considerable amount of time in experimentation.

Speedup. We present the final speedup produced by our search method in Figure 11, with the algorithm configured for 99% target accuracy and two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order of size).
VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5.

5 RELATED WORK

To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs of large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 (Cavigelli et al., 2015; Chen et al., 2014; Courbariaux et al., 2014; Du et al., 2014; Gupta et al., 2015; Muller & Indiveri, 2015). Many of these works focused on fixed-point computation, since the fixed-point representation works well on small-scale neural networks. We find very different conclusions when considering production-ready DNNs.

Other recent works have looked at alternative neural network implementations, such as spiking neural networks, for more efficient hardware implementation (Conti & Benini, 2015; Diehl & Cook, 2014). This is a very different computational model that requires redevelopment of standard DNNs, unlike our proposed methodologies. Other works have proposed several approaches to improve performance and reduce the energy consumption of deep neural networks by taking advantage of the fact that DNNs usually contain redundancies (Chen et al., 2015; Figurnov et al., 2015).

6 CONCLUSION

In this work, we introduced the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single-precision floating-point representation in hardware results in surrendering substantial performance. On the other hand, picking a configuration with lower precision than optimal results in severe accuracy loss. By reconsidering the representation from the ground up in designing custom precision hardware, and by using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6x with less than 1% degradation in inference accuracy.

| SkO-TExSl | Ignores broader system-level issues, needs to use 16-bit floats as baseline | 5: Marginally below acceptance threshold | This paper explores the performance-area-energy-model accuracy trade-off encountered in designing custom number representations for deep learning inference. Common image-based benchmarks (VGG, GoogLeNet, etc.) are used to demonstrate that fewer than 16 bits in a custom floating-point representation can lead to improvements in runtime performance and energy efficiency with only a small loss in model accuracy.
Questions:
1. Does the custom floating point number representation take into account support for de-normal numbers?
2. Is the custom floating point unit clocked at the same frequency as the baseline 32-bit floating point unit? If not, what are the different frequencies used, and how would this impact the overall system design in terms of feeding data to the floating point units from the memory?
Comments:
1. I would recommend using the IEEE half-precision floating point format (1-bit sign, 5-bit exponent, and 10-bit mantissa) as a baseline for comparison. At this point, it is well known in both the ML and the HW communities that 32-bit floats are overkill for DNN inference, and major HW vendors already include support for IEEE half-precision floats.
2. In my opinion, the claim that switching to custom floating point leads to a YY.ZZx savings in energy is misleading. It may be true that the floating-point unit itself consumes less energy due to the smaller bit-width of the operands; however, a large fraction of the total energy is spent on data movement to/from the memories. As a result, reducing the floating point unit's energy consumption by a certain factor will not translate to the same reduction in the total energy. A reader not familiar with such nuances (for example, a typical member of the ML community) may be misled by such claims.
3. On a similar note to comment 2, the authors should explicitly mention that the claimed speedup is that of the floating point unit only, and that it will not translate to the overall workload speedup. Although the speedup of the compute unit is roughly quadratic in the bit-width, the bandwidth requirements scale linearly with bit-width. As a result, it is possible that these custom floating point units would be starved of memory bandwidth, in which case the claims of speedup and energy savings need to be revisited.
4. The authors should also comment on the complexities and overheads introduced in data accesses and in designing the various system buses/data paths when the number representation is not byte-aligned. Moving to a custom 14-bit number representation (for example) can improve the performance and energy efficiency of the floating point unit, but these gains can be partially eroded by the additional overhead of supporting non-byte-aligned memory accesses.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
r1te3Fqel | ICLR.cc/2017/conference | 2017 | End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension | ["Yang Yu", "Wei Zhang", "Bowen Zhou", "Kazi Hasan", "Mo Yu", "Bing Xiang"] | This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achieve a 66.3% Exact match and 74.7% F1 score on the Stanford Question Answering Dataset. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACT

This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achieve a 66.3% exact match and 74.7% F1 score on the Stanford Question Answering Dataset (Rajpurkar et al., 2016).

1 INTRODUCTION

Reading comprehension-based question answering (RCQA) is the task of answering a question with a chunk of text taken from related document(s). A variety of neural models have been proposed recently, either for extracting a single entity or a single token as an answer from a given text (Hermann et al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016; Sordoni et al., 2016; Cui et al., 2016a), or for selecting the correct answer by ranking a small set of human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, an answer boundary is either easy to determine or already given.

Different from the above two assumptions for RCQA, in the real-world QA scenario, people may ask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid); see Table 1 for examples. In this regard, RCQA has the potential to complement other QA approaches that leverage structured data (e.g., knowledge bases) for both of the above question types. This is because RCQA can exploit the textual evidence to ensure increased answer coverage, which is particularly helpful for non-factoid answers.
However, it is also challenging for RCQA to identify answers at arbitrary positions in the passage and of arbitrary lengths, especially for non-factoid answers, which might be clauses or sentences. As a result, apart from a few exceptions (Rajpurkar et al., 2016; Wang & Jiang, 2016), this research direction has not been fully explored yet.

Compared to the relatively easier RC task of predicting single tokens/entities (state-of-the-art RC models have a decent accuracy of about 70% on the widely used CNN/DailyMail dataset (Hermann et al., 2015)), predicting answers of arbitrary lengths and positions significantly increases the search space complexity: the number of possible candidates to consider is in the order of $O(n^2)$, where n is the number of passage words. In contrast, for previous works in which answers are single tokens/entities or come from candidate lists, the complexity is in $O(n)$ or the size of the candidate list l (usually l <= 5), respectively.

To address the above complexity, Rajpurkar et al. (2016) used a two-step chunk-and-rank approach that employs a rule-based algorithm to extract answer candidates from a passage, followed by a ranking approach with hand-crafted features to select the best answer.

(* Both authors contribute equally.)

Table 1: Example of questions (with answers) which can potentially be answered with RC on a Wikipedia passage. The first question is factoid, asking for an entity. The second and third are non-factoid.

Passage: The United Kingdom (UK) intends to withdraw from the European Union (EU), a process commonly known as Brexit, as a result of a June 2016 referendum in which 51.9% voted to leave the EU. The separation process is complex, causing political and economic changes for the UK and other countries. As of September 2016, neither the timetable nor the terms for withdrawal have been established: in the meantime, the UK remains a full member of the European Union. The term "Brexit" is a portmanteau of the words "British" and "exit".

Q1. Which country withdrew from the EU in 2016?
A1. United Kingdom
Q2. How did the UK decide to leave the European Union?
A2. as a result of a June 2016 referendum in which 51.9% voted to leave the EU
Q3. What has not been finalized for Brexit as of September 2016?
A3. neither the timetable nor the terms for withdrawal

The rule-based chunking approach suffered from low coverage (~70% recall of answer chunks) that cannot be improved during training, and candidate ranking performance depends greatly on the quality of the hand-crafted features. More recently, Wang and Jiang (2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer's two boundary indices, while the other classifies each passage word as answer/not-answer. Both models improved significantly over the method proposed by Rajpurkar et al. (2016).

Our proposed model, called dynamic chunk reader (DCR), not only differs significantly from both of the above systems in the way that answer candidates are generated and ranked, but also shares merits with both works.
First, our model uses deep networks to learn better representations for candidate answer chunks, instead of using fixed feature representations as in Rajpurkar et al. (2016). Second, it represents answer candidates as chunks, as in Rajpurkar et al. (2016), instead of using word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differences among candidates (importantly, overlapping candidates).

The contributions of this paper are three-fold. (1) We propose a novel neural network model for joint candidate answer chunking and ranking, where the candidate answer chunks are dynamically constructed and ranked in an end-to-end manner. (2) We propose a new question-attention mechanism to enhance passage word representations, which are subsequently used to construct chunk representations. (3) We also propose several simple but effective features to strengthen the attention mechanism, which fundamentally improves candidate ranking, with the by-product of higher exact boundary match accuracy.

Experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), which contains a variety of human-generated factoid and non-factoid questions, have shown the effectiveness of the above three contributions.

Our paper is organized as follows. We formally define the RCQA problem first. Next, we describe our baseline with a neural network component. We then present the end-to-end dynamic chunk reader model. Finally, we analyze our experimental results and discuss related work. In the appendix, we show the formal equations and details of the model.

2 PROBLEM DEFINITION

Table 1 shows an example of our RC setting, where the goal is to answer a question $Q_i$, factoid (Q1) or non-factoid (Q2 and Q3), based on a supporting passage $P_i$, by selecting a continuous sequence of text $A_i \subseteq P_i$ as the answer. $Q_i$, $P_i$, and $A_i$ are all word sequences, where each word is drawn from a vocabulary V. The i-th instance in the training set is a triple of the form $(P_i, Q_i, A_i)$, where $P_i = (p_{i1}, \ldots, p_{i|P_i|})$, $Q_i = (q_{i1}, \ldots, q_{i|Q_i|})$, and $A_i = (a_{i1}, \ldots, a_{i|A_i|})$, with $p_{i\cdot}, q_{i\cdot}, a_{i\cdot} \in V$. Owing to disagreement among annotators, there can be more than one correct answer for the same question; the k-th answer to $Q_i$ is denoted by $A_i^k = \{a_{i1}^k, \ldots, a_{i|A_i^k|}^k\}$. An answer candidate for the i-th training example is defined as $c_i^{m,n}$, a sub-sequence in $P_i$ that spans from position m to n ($1 \le m \le n \le |P_i|$). The ground truth answer $A_i$ could be included in the set of all candidates $C_i = \{c_i^{m,n} \mid \forall m, n \in \mathbb{N}^+,\ subj(m, n, P_i)\ \text{and}\ 1 \le m \le n \le |P_i|\}$, where $subj(m, n, P_i)$ is the constraint put on the candidate chunk for $P_i$, such as "$c_i^{m,n}$ can have at most 10 tokens" or "$c_i^{m,n}$ must have a pre-defined POS pattern". To evaluate a system's performance, its top answer to a question is matched against the corresponding gold standard answer(s).

Remark: Categories of RC Tasks. Other, simpler variants of the aforementioned RC task were explored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al., 2013), MovieQA (Tapaswi et al., 2015)) have multiple-choice questions with answer options. Cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016), usually automatically generated, have factoid "question"s created by replacing the answer in a sentence from the text with a blank. For the answer selection task this paper focuses on, several datasets exist, e.g.
TREC-QA for factoid answer extraction from multiple given passages, bAbI (Weston et al., 2014), designed for inference purposes, and the SQuAD dataset (Rajpurkar et al., 2016) used in this paper. To the best of our knowledge, the SQuAD dataset is the only one for both factoid and non-factoid answer extraction with a question distribution close to real-world applications.

3 BASELINE: CHUNK-AND-RANK PIPELINE WITH NEURAL RC

In this section we modify a state-of-the-art RC system for cloze-style tasks for our answer extraction purpose, to see how large a gap there is between the two types of tasks, and to inspire our end-to-end system in the next section. In order to make the cloze-style RC system produce chunk-level decisions, we use the RC model to generate features for chunks, which are further used in a feature-based ranker as in Rajpurkar et al. (2016). As a result, this baseline can be viewed as a deep-learning-based counterpart of the system in Rajpurkar et al. (2016). It has two main components: 1) a standalone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neural RC model, which is used to score each word in a given passage, to be used thereafter for generating chunk scores.

Answer Chunking. To reduce the errors generated by the rule-based chunker in Rajpurkar et al. (2016), we first capture the part-of-speech (POS) patterns of all answer sub-sequences in the training dataset to form a POS pattern trie tree, and then apply the answer POS patterns to passage $P_i$ to acquire a collection of all sub-sequences (chunk candidates) $C_i$ whose POS patterns can be matched to the POS pattern trie. This is equivalent to putting a constraint $subj(m, n, P_i)$ on the candidate answer chunk generation process that only chooses chunks with a POS pattern seen for answers in the training data. The sub-sequences $C_i$ are then used as answer candidates for $P_i$. Note that overlapping chunks can be generated for a passage, and we rely on the ranker to choose the best candidate based on features from the cloze-style RC system. Experiments showed that for >90% of the questions in the development set, the ground truth answer is included in the candidate set constructed in this manner.

Feature Extraction and Ranking. For chunk ranking, we (1) use the neural RCQA model to annotate each $p_{ij}$ in passage $P_i$ to get a score $s_{ij}$; then (2) for every chunk $c_i^{m,n}$ in passage i, collect the scores $(s_{im}, \ldots, s_{in})$ for all the $(p_{im}, \ldots, p_{in})$ contained within $c_i^{m,n}$; and (3) extract features from the sequence of scores $(s_{im}, \ldots, s_{in})$ to characterize its scale and distribution, which serves as the feature representation of $c_i^{m,n}$. In step (1), to acquire $s_{ij}$ we train and apply a word-level single-layer Gated Attention Reader (Dhingra et al., 2016), which has state-of-the-art performance on the CNN/DailyMail cloze-style RC task. (We tried using more than one layer in the Gated Attention Reader, but no improvement was observed.) In step (3), for chunk $c_i^{m,n}$ we designed 5 features, including 4 statistics over $(s_{im}, \ldots, s_{in})$ — maximum, minimum, average, and sum — as well as the count of matched POS patterns within the chunk, which serves as an answer prior. We use these 5 features in a state-of-the-art ranker (Ganjisaffar et al., 2011).
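To make the trie construction and matching concrete, here is a minimal Python sketch of the POS-pattern chunker described above; the dict-based trie, the "$" end-of-pattern marker, and the function names are our own illustrative choices.

```python
def build_pos_trie(answer_pos_patterns):
    """Build a trie over POS-tag sequences of training answers; a terminal
    marker records that a full pattern ends at the current node."""
    trie = {}
    for pattern in answer_pos_patterns:
        node = trie
        for tag in pattern:
            node = node.setdefault(tag, {})
        node["$"] = True
    return trie

def candidate_chunks(passage_pos, trie, max_len=10):
    """Enumerate spans (m, n) whose POS sequence matches a stored pattern;
    overlapping candidates are kept, as the ranker resolves them."""
    chunks = []
    for m in range(len(passage_pos)):
        node = trie
        for n in range(m, min(m + max_len, len(passage_pos))):
            node = node.get(passage_pos[n])
            if node is None:
                break
            if "$" in node:
                chunks.append((m, n))
    return chunks
```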
4 DYNAMIC CHUNK READER

The dynamic chunk reader (DCR) model is presented in Figure 1. Inspired by the baseline we built, DCR is deemed superior to the baseline for three reasons. First, each chunk has a representation constructed dynamically, instead of having a set of pre-defined feature values. Second, each passage word's representation is enhanced by word-by-word attention that evaluates the relevance of the passage word to the question. Third, these components are all within a single, end-to-end model that can be trained in a joint manner.

Figure 1: The main components in the dynamic chunk reader model (from bottom to top) are bi-GRU encoders for the passage and question, a word-by-word attention bi-GRU for the passage, dynamic chunk representations that are transformed from pooled dynamic chunks of hidden states, the question attention on every chunk representation, and the final answer chunk prediction.

DCR works in five steps. First, the encoder layer encodes the passage and question separately, using bidirectional recurrent neural networks (RNNs). Second, the attention layer calculates the relevance of each passage word to the question. Third, the convolution layer generates unigram, bigram, and trigram representations for each word; the bigram and trigram of a word end with that word, and proper padding is applied at the first word so that the output of the CNN layer has the same length as its input. Fourth, the chunk representation layer dynamically extracts the candidate chunks from the given passage and creates a chunk representation that encodes the contextual information of each chunk. Fifth, the ranker layer scores the relevance between the representations of a chunk and the given question, and ranks all candidate chunks using a softmax layer. We describe each step below.

Encoder Layer. We use a bi-directional RNN encoder to encode $P_i$ and $Q_i$ of example i, and get a hidden state for each word position $p_{ij}$ and $q_{ik}$. (We can have separate parameters for the question and passage encoders, but a single shared encoder for both works better in our experiments.) As RNN input, a word is represented by a row vector $x \in \mathbb{R}^n$; x can be the concatenation of a word embedding and word features (see Fig. 1). The word vector for the t-th word is $x_t$. A word sequence is processed using an RNN encoder with gated recurrent units (GRUs) (Cho et al., 2014), which were proved effective in RC and neural machine translation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each position t, the GRU computes $h_t$ with input $x_t$ and previous state $h_{t-1}$ as:

$r_t = \sigma(W^r x_t + U^r h_{t-1})$ (1)
$u_t = \sigma(W^u x_t + U^u h_{t-1})$ (2)
$\tilde{h}_t = \tanh(W x_t + U(r_t \odot h_{t-1}))$ (3)
$h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t$ (4)

where $h_t$, $r_t$, and $u_t \in \mathbb{R}^d$ are the d-dimensional hidden state, reset gate, and update gate, respectively; $W^{\{r,u\}}, W \in \mathbb{R}^{n \times d}$ and $U^{\{r,u\}}, U \in \mathbb{R}^{d \times d}$ are the parameters of the GRU; $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise product. For a word at t, we use the hidden state $\overrightarrow{h_t}$ from the forward RNN as a representation of the preceding context, and $\overleftarrow{h_t}$ from a backward RNN that encodes the text in reverse, to incorporate the context after t. Next, $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$, the bi-directional contextual encoding of $x_t$, is formed, where $[\cdot;\cdot]$ is the concatenation operator. To distinguish hidden states from different sources, we denote the $h_j$ of the j-th word in P and the $h_k$ of the k-th word in Q as $h^p_j$ and $h^q_k$, respectively.

Attention Layer. Attention mechanisms in previous RC tasks (Kadlec et al., 2016; Hermann et al., 2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b) enable question-aware passage representations. We propose a novel attention mechanism inspired by word-by-word style attention methods (Rocktäschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016).
For each $p_j$, a question-attended representation $v_j$ is computed as follows (the example index i is omitted for simplicity):

$\alpha_{jk} = h^p_j \cdot h^q_k$ (5)
$\beta_j = \sum_{k=1}^{|Q|} \alpha_{jk} h^q_k$ (6)
$v_j = [h^p_j; \beta_j]$ (7)

where $h^p_j$ and $h^q_k$ are hidden states from the bi-directional RNN encoders (see Figure 1). An inner product, $\alpha_{jk}$, is calculated between $h^p_j$ and every question word $h^q_k$; it indicates how well the passage word $p_j$ matches each question word $q_k$. $\beta_j$ is a weighted pooling of the $|Q|$ question hidden states, which serves as a $p_j$-aware question representation. The concatenation of $h^p_j$ and $\beta_j$ yields a passage-question joint representation, $v_j \in \mathbb{R}^{4d}$. (We also tried another word-by-word attention method, as in Santos et al. (2016), which feeds a similar passage representation to the question side. However, this did not lead to improvement, due to the confusion caused by long passages in RC. Consequently, we used the proposed simplified version of word-by-word attention on the passage side only.) Next, we apply a second bi-GRU layer taking the $v_j$s as inputs, and obtain forward and backward representations $\overrightarrow{\gamma_j}$ and $\overleftarrow{\gamma_j} \in \mathbb{R}^d$, and in turn their concatenation, $\gamma_j = [\overrightarrow{\gamma_j}; \overleftarrow{\gamma_j}]$.

Convolution Layer. Every word is encoded with the complete passage context through the attention-layer RNN. We would like to model more complex representations of the words by introducing unigram, bigram, and trigram representations. There are two benefits to this enhanced representation: 1) each word is enhanced with local context information, which helps identify the boundary of the answer chunk (using previous words has been a common feature in POS tagging and named entity recognition); and 2) the information the n-gram brings into the word representation can enhance the semantic match between the inside of the answer chunk and the question. Imagine the scenario of a three-word candidate, where the last word's representation includes the two previous words through the convolution layer: matching the last word can then also match the semantics of the chunk's interior. Specifically, we create three representations for every word position j, using n-grams ending with the hidden state $\gamma_j$:

$\tilde{\gamma}_{j}^{1} = \gamma_j W^{c1}$ (8)
$\tilde{\gamma}_{j}^{2} = [\gamma_{j-1}; \gamma_j] W^{c2}$ (9)
$\tilde{\gamma}_{j}^{3} = [\gamma_{j-2}; \gamma_{j-1}; \gamma_j] W^{c3}$ (10)

The details are shown in the equations above; we use three different convolution kernels for the different n-grams.
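The attention and convolution steps above can be summarized in a few lines of numpy. This is a simplified sketch of Eqs. (5)-(10), where the weight shapes (Wc1 of size 2d x 2d, Wc2 of size 4d x 2d, Wc3 of size 6d x 2d) and the left zero-padding are our reading of the text rather than confirmed details.

```python
import numpy as np

def word_by_word_attention(H_p, H_q):
    """Eqs. (5)-(7): H_p is the |P| x 2d passage states, H_q the |Q| x 2d
    question states; returns v_j = [h^p_j; beta_j] for each passage position."""
    alpha = H_p @ H_q.T            # alpha[j, k] = h^p_j . h^q_k       (Eq. 5)
    beta = alpha @ H_q             # beta_j = sum_k alpha_jk h^q_k     (Eq. 6)
    return np.concatenate([H_p, beta], axis=1)                       # (Eq. 7)

def ngram_convolutions(Gamma, Wc1, Wc2, Wc3):
    """Eqs. (8)-(10): 1/2/3-gram outputs ending at each position j, with
    zero-padding so the output length matches the input length."""
    pad = np.zeros_like(Gamma[:1])
    prev1 = np.vstack([pad, Gamma])[:-1]        # gamma_{j-1}
    prev2 = np.vstack([pad, pad, Gamma])[:-2]   # gamma_{j-2}
    g1 = Gamma @ Wc1                                                 # (Eq. 8)
    g2 = np.concatenate([prev1, Gamma], axis=1) @ Wc2                # (Eq. 9)
    g3 = np.concatenate([prev2, prev1, Gamma], axis=1) @ Wc3         # (Eq. 10)
    return g1, g2, g3
```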
Chunk Representation Layer. A candidate answer chunk representation is dynamically created from the convolution layer outputs. We first decide the text boundary of the candidate chunk, and then form a chunk representation using all or part of the $\tilde{\gamma}_j$ outputs inside the chunk. To decide a candidate chunk (boundary), we tried two approaches: (1) adopt the POS-trie-based approach used in our baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2), we create up to N (the max chunk length) chunks starting from any position j in the passage. Approach (1) can generate candidates of arbitrary lengths, but fails to recall candidates whose POS pattern is unseen in the training set, whereas approach (2) considers all possible candidates within a window and is more flexible, but over-generates invalid candidates.

For a candidate answer chunk $c^{m,n}$ spanning from position m to n inclusively, we construct a chunk representation $\hat{\gamma}^l_{m,n} \in \mathbb{R}^{2d}$ using every $\tilde{\gamma}^l_j$ within the range [m, n], with a function $g(\cdot)$ and $l \in \{1, 2, 3\}$. Formally, $\hat{\gamma}^l_{m,n} = g(\tilde{\gamma}^l_m, \ldots, \tilde{\gamma}^l_n)$. Each $\tilde{\gamma}^l_j$ is a convolution output over concatenated forward and backward RNN hidden states from the attention layer, so its first half encodes information from the forward RNN hidden states and its second half encodes information from the backward RNN hidden states. We experimented with several pooling functions (e.g., max, average) for $g(\cdot)$, and found that, instead of pooling, the best $g(\cdot)$ is to concatenate the first half of the convolution output of the chunk's first word and the second half of the convolution output of the chunk's last word. Formally,

$\hat{\gamma}^l_{m,n} = g(\tilde{\gamma}^l_m, \ldots, \tilde{\gamma}^l_n) = [\overrightarrow{\tilde{\gamma}}^l_m; \overleftarrow{\tilde{\gamma}}^l_n]$ (11)

where $\overrightarrow{\tilde{\gamma}}^l_m$ is the half of the l-gram representation's hidden state corresponding to the forward attention-RNN output. We hypothesize that the hidden states at the two ends can represent the chunk's contexts better than the states within the chunk, which is critical for this task. This observation also agrees with Kobayashi et al. (2016).

Ranker Layer. A score $s^l_{m,n}$ for each l-gram chunk representation $\hat{\gamma}^l_{m,n}$, denoting the probability of that chunk being the true answer, is calculated as a dot product with the question representation. The question representation is the concatenation of the last hidden state of the forward RNN and the first hidden state of the backward RNN. Formally, for chunk $c^{m,n}_i$ we have

$s^l(c^{m,n}_i \mid P_i, Q_i) = \hat{\gamma}^l_{m,n} \cdot [\overrightarrow{h}^{Q_i}_{|Q_i|}; \overleftarrow{h}^{Q_i}_1]$ (12)

where $s^l$ denotes the score generated from the l-gram representation, and $\overrightarrow{h}^{Q_i}_k$ and $\overleftarrow{h}^{Q_i}_k$ are the k-th hidden state outputs from question $Q_i$'s forward and backward RNN encoders, respectively. After that, the final score for $c^{m,n}_i$ is evaluated as a linear combination of the three scores, followed by a softmax:

$s(c^{m,n}_i \mid P_i, Q_i) = \mathrm{softmax}(W [s^1; s^2; s^3])$ (13)

where $s^l$ is shorthand for $s^l(c^{m,n}_i \mid P_i, Q_i)$ and $W \in \mathbb{R}^3$. At run time, the chunk with the highest probability is taken as the answer. In training, the following negative log-likelihood is minimized:

$\mathcal{L} = -\sum_{i=1}^{N} \log P(A_i \mid P_i, Q_i)$ (14)

Note that the i-th training instance is used only when $A_i$ is included in the corresponding candidate chunk set $C_i$, i.e., $\exists m, n:\ A_i = c^{m,n}_i$. The softmax in the final layer serves as a list-wise ranking module, similar in spirit to Cao et al. (2007).
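A corresponding numpy sketch of the chunk scoring of Eqs. (11)-(13), again with illustrative names: grams holds the three |P| x 2d n-gram output matrices (forward half first), q_repr is the 2d-dimensional question vector of Eq. (12), and W is the length-3 combination vector.

```python
import numpy as np

def chunk_repr(G_l, m, n, d):
    """Eq. (11): the forward half of the chunk's first word concatenated
    with the backward half of its last word."""
    return np.concatenate([G_l[m, :d], G_l[n, d:]])

def rank_chunks(grams, candidates, q_repr, W, d):
    """Eqs. (12)-(13): score each candidate span (m, n) against the question
    and normalize over all candidates with a softmax."""
    logits = np.array([
        W @ np.array([chunk_repr(G_l, m, n, d) @ q_repr for G_l in grams])
        for m, n in candidates])
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()
```

At run time the candidate with the largest probability is returned; in training the same softmax output feeds the negative log-likelihood of Eq. (14).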
5 EXPERIMENTS

Dataset. We used the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for the experiments. SQuAD came into our sight because it is a mix of factoid and non-factoid questions, real-world (crowd-sourced) data, and of large scale (over 100K question-answer pairs collected from 536 Wikipedia articles). Answers range from single words to long, variable-length phrases/clauses. It relaxes the assumptions of the cloze-style and quiz-style RC datasets discussed in the Problem Definition section.

Table 2: Results on the SQuAD dataset.
Models          | Dev EM | Dev F1 | Test EM | Test F1
Rajpurkar 2016  | 39.8%  | 51.0%  | 40.4%   | 51.0%
Wang 2016       | 59.1%  | 70.0%  | 59.5%   | 70.3%
DCR w/o Conv.   | 62.5%  | 71.2%  | 62.5%   | 71.0%
DCR             | 63.4%  | 72.3%  | -       | -
DCR Ensemble    | 66.3%  | 74.7%  | -       | -

Features. The input vector representation of each word w to the encoder RNNs has six parts, including a pre-trained 300-dimensional GloVe embedding (Pennington et al., 2014) and five features (see Figure 1): (1) a one-hot encoding (46 dimensions) for the part-of-speech (POS) tag of w; (2) a one-hot encoding (14 dimensions) for the named entity (NE) tag of w; (3) a binary value indicating whether w's surface form is the same as any word in the question; (4) a binary value indicating whether the lemma form of w is the same as any word in the question; and (5) a binary value indicating whether w is capitalized. Features (3) and (4) are designed to help the model align the passage text with the question. Note that some types of questions (e.g., "who" and "when" questions) have answers with specific POS/NE tag patterns. For instance, "who" questions mostly have proper nouns/persons as answers, and "when" questions frequently have numbers/dates (e.g., a year) as answers. Thus, we believe that the model can exploit the correlation between question types and answer POS/NE patterns more easily with POS and NE tag features.

Implementation Details. We pre-processed the SQuAD dataset using the Stanford CoreNLP tool (stanfordnlp.github.io/CoreNLP/; Manning et al., 2014) with its default settings to tokenize the text and obtain the POS and NE annotations. To train our model, we used stochastic gradient descent with the ADAM optimizer (Kingma & Ba, 2014), with an initial learning rate of 0.001. All GRU weights were initialized from a uniform distribution between (-0.01, 0.01). The hidden state size, d, was set to 300 for all GRUs. The question bi-GRU shared parameters with the passage bi-GRU, while the attention-based passage bi-GRU had its own parameters. We shuffled all training examples at the beginning of each epoch and adopted a curriculum learning approach (Bengio et al., 2009) by sorting training instances by length in every 10 batches, to let the model start learning from relatively easier instances before moving to harder ones. We also applied dropout at rate 0.2 to the embedding layer of the input bi-GRU encoder, and gradient clipping when the norm of the gradients exceeded 10. We trained in mini-batch style (mini-batch size 180) and applied zero-padding to the passage and question inputs in each batch. We also set the maximum passage length to 300 tokens, pruning all tokens after the 300th token in the training set to save memory and speed up training; this step reduced the training set size by about 1.6%. During testing, we test on the full-length passage, so that we do not prune out potential candidates. We trained the model for at most 30 epochs, and stopped training if the accuracy did not improve for 10 epochs. For the feature-ranking-based system, we used the jforest ranker (Ganjisaffar et al., 2011) with the LambdaMART-RegressionTree algorithm, and the ranking metric was NDCG@10. For the Gated Attention Reader in the baseline system, we replicated the method and used the same configurations as in Dhingra et al. (2016).

Results. Table 2 shows our main results on the SQuAD dataset. Compared to the scores reported in Wang & Jiang (2016), our exact match (EM) and F1 on the development set and our EM score on the test set are better, and our F1 on the test set is comparable. We also studied how each component of our model contributes to the overall performance; Table 3 shows the details as well as the results of the baseline ranker.

Table 3: Detailed system experiments on the SQuAD development set.
Models                             | EM     | F1
Chunk-and-Rank Pipeline Baseline   | 49.7%  | 64.9%
DCR w/o Convolution                | 62.5%  | 71.2%
DCR w/o Word-by-Word Attention     | 57.6%  | 68.7%
DCR w/o POS feature (1)            | 59.2%  | 68.8%
DCR w/o NE feature (2)             | 60.4%  | 70.2%
DCR w/o Question-word feature (3)  | 59.5%  | 69.0%
DCR w/o Question-lemma feature (4) | 61.2%  | 69.9%
DCR w/o Capitalized feature (5)    | 61.5%  | 70.6%
DCR w/o Conv. w/ POS-trie          | 62.1%  | 70.8%

[Figure 2: (a) Variation of DCR performance with ground truth answer length (up to 10) on the development set; the curve with diamond markers also shows the percentage of answers of each length in the development set. (b) Performance comparisons for different question head words.]

As the first row of Table 3 shows, our baseline system improves by 10% (EM) over Rajpurkar et al. (2016) (Table 2, row 1), the feature-based ranking system. However, when compared to our DCR model (Table 3, row 2), the baseline (row 1) is more than 12% (EM) behind
even though it is based on the state-of-the-art model for cloze-style RC tasks. This can be attributed to the advanced model structure and the end-to-end manner of DCR.

We also ran ablation tests on our DCR model. First, replacing the word-by-word attention with Attentive Reader style attention (Hermann et al., 2015) decreases the EM score by about 4.5%, showing the strength of our proposed attention mechanism. Second, we removed the input features one at a time to see the contribution of each; the results show that the POS feature (1) and the question-word feature (3) are the two most important features. Finally, combining the DCR model with the proposed POS-trie constraints yields a score similar to the one obtained using the DCR model with all possible n-gram chunks. This result shows that (1) our chunk representations are powerful enough to differentiate even a huge number of chunks when no constraints are applied, and (2) the proposed POS-trie reduces the search space at the cost of a small drop in performance.

Analysis. To better understand our system, we calculated the accuracy of the attention mechanism of the gated attention reader used in our deep-learning-based baseline. We found that it is 72% accurate, i.e., 72% of the time a word with the highest attention score is inside the correct answer span. This means that, if we could accurately detect the boundary around the word with the highest attention score to form the answer span, we could achieve an accuracy close to 72%. In addition, we checked the answer recall of our candidate chunking approach: with a window size of 10, 92% of the time the ground truth answer is included in the extracted candidate chunk set. Thus the upper bound of the exact match score of our baseline system is around 66% (92% answer recall x 72%). From the results, we see that our DCR system's exact match score is 62%, which shows that DCR is proficient at differentiating answer spans dynamically.

To further analyze the system's performance when predicting answers of different lengths, we show the exact match (EM) and F1 scores for answers of up to 10 tokens in Figure 2(a). From the graph, we can see that, as answer length increases, both EM and F1 drop, but at different speeds, and the gap between F1 and exact match widens. However, the model still yields a decent accuracy when the answer is longer than a single word. Additionally, Figure 2(b) shows that the system is better at "when" and "who" questions, but performs poorly on "why" questions. The large gap between exact match and F1 on "why" questions means that perfectly identifying the span is harder than locating the core of the answer span.

[Figure 3: Development set performance comparisons for different types of "what" questions (considering the types with more than 20 examples in the development set).]

Since "what", "which", and "how" questions cover a broad range of question types, we split them further based on the bigram a question starts with, and Figure 3 shows the breakdown for "what" questions. We can see that "what" questions asking for explanations, such as "what happens" and "what happened", have lower EM and F1 scores.
In contrast, "what" questions asking for years and numbers have much higher scores and, for these questions, exact match scores are close to F1 scores, which means chunking these questions is easier for DCR.

6 RELATED WORK

Attentive Reader was the first neural model for factoid RCQA (Hermann et al., 2015). It uses bidirectional RNNs (Cho et al., 2014; Chung et al., 2014) to encode the document and query, respectively, and uses the query representation to match with every token of the document. Attention Sum Reader (Kadlec et al., 2016) simplifies the model to just predicting the positions of the correct answer in the document, and both training speed and test accuracy improved greatly on the CNN/DailyMail dataset. Chen et al. (2016) also simplified Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N), introduced along with the CBT dataset (Hill et al., 2015), do not use RNN encoders but embed contexts as memory and match questions with the embedded contexts. The mechanism of these models is to learn the match between the answer context and the question/query representation. In contrast, memory-enhanced neural networks like Neural Turing Machines (Graves et al., 2014) and their variants (Zhang et al., 2015; Gulcehre et al., 2016; Zaremba & Sutskever, 2015; Chandar et al., 2016; Grefenstette et al., 2015) were also potential candidates for the task, and Gulcehre et al. (2016) reported results on the bAbI task that were worse than memory networks. Similarly, sequence-to-sequence models were also used (Yu et al., 2015; Hermann et al., 2015), but they did not yield better results either.

Recently, several models have been proposed to enable more complex inference for the RC task. For instance, the gated attention model (Dhingra et al., 2016) employs a multi-layer architecture, where each layer encodes the same document, but the attention is updated from layer to layer. EpiReader (Trischler et al., 2016b) adopted a joint training model with an answer extractor and a reasoner, where the extractor proposes top candidates and the reasoner weighs each candidate by examining the entailment relationship between the question-answer representation and the document. An iterative alternating attention mechanism with gating strategies was proposed in Sordoni et al. (2016) to optimize the attention over several hops. In contrast, Cui et al. (2016a;b) introduced fine-grained document attention from each question word, and then aggregated those attentions for each question token by summation with or without weights; this system achieved the state-of-the-art score on the CNN dataset. These variations all result in roughly 3-5% improvement over Attention Sum Reader, but none could achieve more than that. Other methods include using dynamic entity representations with max-pooling (Kobayashi et al., 2016), which aims to change the entity representation with context, and Weissenborn's (2016) system, which tries to separate the entity from the context and then match the question to the context, scoring an accuracy of around 70% on the CNN dataset.

However, all of those models assume that the answers are single tokens, which limits the types of questions the models can answer. Wang and Jiang (2016) proposed a match-LSTM and achieved good results on SQuAD; however, their approach predicts a chunk boundary or whether a word is part of a chunk or not. In contrast, our approach explicitly constructs the chunk
representations, and similar chunks are compared directly to determine correct answer boundaries.

7 CONCLUSION

In this paper we proposed a novel neural reading comprehension model for question answering. Different from previously proposed models for factoid RCQA, the proposed model, dynamic chunk reader, is not restricted to predicting a single named entity as an answer or selecting an answer from a small, pre-defined candidate list. Instead, it is capable of answering both factoid and non-factoid questions, as it learns to select answer chunks that are suitable for an input question. DCR achieves this goal with a joint deep learning model enhanced with a novel attention mechanism and five simple yet effective features. Error analysis shows that the DCR model achieves good performance, but still needs to improve on predicting longer answers, which are usually non-factoid in nature. | HkfluXx4x | 6: Marginally above acceptance threshold | SUMMARY.
The paper proposes a reading-comprehension question answering system for the recent QA task where the answers to a question can be either single tokens or spans in the given text passage.
The model first encodes the passage and the query using a recurrent neural network.
With an attention mechanism, the model calculates the importance of each word in the passage with respect to each word in the question.
The encoded words in the passage are concatenated with the attention; the resulting vector is re-encoded with a further RNN.
Three convolutional neural networks with different filter sizes (1-, 2-, and 3-gram) are used to further capture local features.
Candidate answers are selected either by matching POS patterns of answers in the training set or by choosing all possible text spans up to a certain length.
Each candidate answer has three representations, one for each n-gram size. The compatibility of these representations with the question representation is then calculated.
The scores are combined linearly and used for calculating the probability of the candidate answer being the right answer for the question.
The method is tested on the SQUAD dataset and outperforms the proposed baselines.
----------
OVERALL JUDGMENT
The method presented in this paper is interesting, but not well motivated in some places.
For example, it is not explained why in the attention mechanism it is beneficial to concatenate the original passage encoding with the attention-weighted ones.
The contributions of the paper are moderately novel, proposing mainly the attention mechanism and the convolutional re-encoding.
In fact, combining questions and passages and scoring their compatibility has become a fairly standard procedure in QA models.
----------
DETAILED COMMENTS
In Equation (13), it should be s, not s^l.
I still do not understand the sentence "the best function is to concatenate the hidden state of the first word in a chunk in the forward RNN and that of the last word in the backward RNN".
The answer the authors gave in the response does not clarify this point. | 3: The reviewer is fairly confident that the evaluation is correct |
r1te3Fqel | ICLR.cc/2017/conference | 2017 | End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension | ["Yang Yu", "Wei Zhang", "Bowen Zhou", "Kazi Hasan", "Mo Yu", "Bing Xiang"] | This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achieve a 66.3% Exact match and 74.7% F1 score on the Stanford Question Answering Dataset. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACT

This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achieve a 66.3% exact match and 74.7% F1 score on the Stanford Question Answering Dataset (Rajpurkar et al., 2016).

1 INTRODUCTION

Reading comprehension-based question answering (RCQA) is the task of answering a question with a chunk of text taken from related document(s). A variety of neural models have been proposed recently, either for extracting a single entity or a single token as an answer from a given text (Hermann et al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016; Sordoni et al., 2016; Cui et al., 2016a), or for selecting the correct answer by ranking a small set of human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, an answer boundary is either easy to determine or already given.

Different from the above two assumptions for RCQA, in the real-world QA scenario, people may ask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid); see Table 1 for examples. In this regard, RCQA has the potential to complement other QA approaches that leverage structured data (e.g., knowledge bases) for both of the above question types. This is because RCQA can exploit the textual evidence to ensure increased answer coverage, which is particularly helpful for non-factoid answers.
However, it is also challenging for RCQA to identify answers at arbitrary positions in the passage and of arbitrary length, especially for non-factoid answers, which might be clauses or sentences. As a result, apart from a few exceptions (Rajpurkar et al., 2016; Wang & Jiang, 2016), this research direction has not been fully explored yet.

Compared to the relatively easier RC task of predicting single tokens/entities (state-of-the-art RC models have a decent accuracy of about 70% on the widely used CNN/DailyMail dataset (Hermann et al., 2015)), predicting answers of arbitrary lengths and positions significantly increases the search space complexity: the number of possible candidates to consider is on the order of $O(n^2)$, where $n$ is the number of passage words. In contrast, for previous works in which answers are single tokens/entities or come from candidate lists, the complexity is $O(n)$ or the size of the candidate list $l$ (usually $l \le 5$), respectively. [Footnote: Both authors contributed equally.]

To address the above complexity, Rajpurkar et al. (2016) used a two-step chunk-and-rank approach that employs a rule-based algorithm to extract answer candidates from a passage, followed by a ranking approach with hand-crafted features to select the best answer. The rule-based chunking approach suffered from low coverage (about 70% recall of answer chunks) that cannot be improved during training, and candidate ranking performance depends greatly on the quality of the hand-crafted features. More recently, Wang and Jiang (2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer's two boundary indices, while the other classifies each passage word as answer/not-answer. Both models improved significantly over the method proposed by Rajpurkar et al. (2016).

Table 1: Examples of questions (with answers) which can potentially be answered with RC on a Wikipedia passage. The first question is factoid, asking for an entity. The second and third are non-factoid.

  Passage: The United Kingdom (UK) intends to withdraw from the European Union (EU), a process commonly known as Brexit, as a result of a June 2016 referendum in which 51.9% voted to leave the EU. The separation process is complex, causing political and economic changes for the UK and other countries. As of September 2016, neither the timetable nor the terms for withdrawal have been established: in the meantime, the UK remains a full member of the European Union. The term "Brexit" is a portmanteau of the words "British" and "exit".
  Q1. Which country withdrew from EU in 2016?  A1. United Kingdom
  Q2. How did UK decide to leave the European Union?  A2. as a result of a June 2016 referendum in which 51.9% voted to leave the EU
  Q3. What has not been finalized for Brexit as of September 2016?  A3. neither the timetable nor the terms for withdrawal

Our proposed model, called dynamic chunk reader (DCR), not only significantly differs from both of the above systems in the way that answer candidates are generated and ranked, but also shares merits with both works.
First, our model uses deep networks to learn better representations for candidate answer chunks, instead of using fixed feature representations as in (Rajpurkar et al., 2016). Second, it represents answer candidates as chunks, as in (Rajpurkar et al., 2016), instead of word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differences among candidates (importantly, overlapping candidates).

The contributions of this paper are three-fold. (1) We propose a novel neural network model for joint candidate answer chunking and ranking, where the candidate answer chunks are dynamically constructed and ranked in an end-to-end manner. (2) We propose a new question-attention mechanism to enhance passage word representations, which are subsequently used to construct chunk representations. (3) We also propose several simple but effective features to strengthen the attention mechanism, which fundamentally improves candidate ranking, with the by-product of higher exact boundary match accuracy.

The experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), which contains a variety of human-generated factoid and non-factoid questions, show the effectiveness of the above three contributions.

Our paper is organized as follows. We formally define the RCQA problem first. Next, we describe our baseline with a neural network component. We then present the end-to-end dynamic chunk reader model. Finally, we analyze our experimental results and discuss related work. In the appendix, we show the formal equations and details of the model.

2 PROBLEM DEFINITION

Table 1 shows an example of our RC setting, where the goal is to answer a question $Q_i$, factoid (Q1) or non-factoid (Q2 and Q3), based on a supporting passage $P_i$, by selecting a continuous sequence of text $A_i \subseteq P_i$ as the answer. $Q_i$, $P_i$, and $A_i$ are all word sequences, where each word is drawn from a vocabulary $V$. The $i$-th instance in the training set is a triple of the form $(P_i, Q_i, A_i)$, where $P_i = (p_{i1}, \dots, p_{i|P_i|})$, $Q_i = (q_{i1}, \dots, q_{i|Q_i|})$, and $A_i = (a_{i1}, \dots, a_{i|A_i|})$, with $p_{i\cdot}, q_{i\cdot}, a_{i\cdot} \in V$. Owing to disagreement among annotators, there can be more than one correct answer for the same question; the $k$-th answer to $Q_i$ is denoted by $A^k_i = \{a^k_{i1}, \dots, a^k_{i|A^k_i|}\}$. An answer candidate for the $i$-th training example is defined as $c^{m,n}_i$, a sub-sequence of $P_i$ that spans from position $m$ to $n$ ($1 \le m \le n \le |P_i|$). The ground truth answer $A_i$ may be included in the set of all candidates $C_i = \{c^{m,n}_i \mid \forall m, n \in \mathbb{N}^+,\ subj(m, n, P_i) \text{ and } 1 \le m \le n \le |P_i|\}$, where $subj(m, n, P_i)$ is a constraint put on the candidate chunks for $P_i$, such as "$c^{m,n}_i$ can have at most 10 tokens" or "$c^{m,n}_i$ must have a pre-defined POS pattern". To evaluate a system's performance, its top answer to a question is matched against the corresponding gold standard answer(s).
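To make the size of this candidate space concrete, here is a minimal Python sketch (ours, not the authors' code) that enumerates all spans up to a maximum length; the `max_len` cap stands in for the generic $subj(m, n, P_i)$ constraint, and the toy passage is for illustration only:

```python
# Enumerate candidate chunks c^{m,n} up to a maximum length, illustrating
# the O(n^2) candidate space described above. Positions are 1-indexed and
# inclusive, matching the paper's (m, n) notation.

def enumerate_candidates(passage_tokens, max_len=10):
    """Return all spans (m, n) with n - m + 1 <= max_len."""
    candidates = []
    n_words = len(passage_tokens)
    for m in range(1, n_words + 1):
        for n in range(m, min(m + max_len - 1, n_words) + 1):
            candidates.append((m, n))
    return candidates

passage = "as a result of a June 2016 referendum".split()
spans = enumerate_candidates(passage, max_len=3)
print(len(spans))                              # 21 spans for 8 tokens, max_len=3
print(passage[spans[0][0] - 1 : spans[0][1]])  # ['as']
```

Without the length cap, the same 8-token passage would already yield 36 spans, which is why the constraint (or the POS-pattern filter below) matters for longer passages.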
Remark: Categories of RC Tasks. Other, simpler variants of the aforementioned RC task were explored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al., 2013), MovieQA (Tapaswi et al., 2015)) have multiple-choice questions with answer options. Cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016), usually automatically generated, have factoid "questions" created by replacing the answer in a sentence from the text with a blank. For the answer selection task this paper focuses on, several datasets exist, e.g., TREC-QA for factoid answer extraction from multiple given passages, bAbI (Weston et al., 2014), designed for inference purposes, and the SQuAD dataset (Rajpurkar et al., 2016) used in this paper. To the best of our knowledge, the SQuAD dataset is the only one for both factoid and non-factoid answer extraction with a question distribution close to real-world applications.

3 BASELINE: CHUNK-AND-RANK PIPELINE WITH NEURAL RC

In this section we modify a state-of-the-art RC system for cloze-style tasks for our answer extraction purpose, to see how large the gap between the two types of tasks is, and to inspire our end-to-end system in the next section. To make the cloze-style RC system produce chunk-level decisions, we use the RC model to generate features for chunks, which are then fed to a feature-based ranker as in (Rajpurkar et al., 2016). As a result, this baseline can be viewed as a deep learning-based counterpart of the system in (Rajpurkar et al., 2016). It has two main components: 1) a stand-alone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neural RC model, which is used to score each word in a given passage, the scores being used thereafter to generate chunk scores.

Answer Chunking. To reduce the errors generated by the rule-based chunker in (Rajpurkar et al., 2016), we first capture the part-of-speech (POS) patterns of all answer sub-sequences in the training dataset to form a POS pattern trie tree, and then apply the answer POS patterns to passage $P_i$ to acquire the collection of all sub-sequences (chunk candidates) $C_i$ whose POS patterns can be matched to the POS pattern trie. This is equivalent to a constraint $subj(m, n, P_i)$ on the candidate answer chunk generation process that only keeps chunks with a POS pattern seen for answers in the training data. The sub-sequences $C_i$ are then used as answer candidates for $P_i$. Note that overlapping chunks can be generated for a passage, and we rely on the ranker to choose the best candidate based on features from the cloze-style RC system. Experiments showed that for more than 90% of the questions on the development set, the ground truth answer is included in the candidate set constructed in this manner.

Feature Extraction and Ranking. For chunk ranking, we (1) use the neural RCQA model to annotate each $p_{ij}$ in passage $P_i$ with a score $s_{ij}$; then (2) for every chunk $c^{m,n}_i$ in passage $i$, collect the scores $(s_{im}, \dots, s_{in})$ for all the $(p_{im}, \dots, p_{in})$ contained within $c^{m,n}_i$; and (3) extract features on the sequence of scores $(s_{im}, \dots, s_{in})$ to characterize its scale and distribution, which serves as the feature representation of $c^{m,n}_i$. In step (1), to acquire $s_{ij}$ we train and apply a word-level single-layer Gated Attention Reader (Dhingra et al., 2016), which has state-of-the-art performance on the CNN/DailyMail cloze-style RC task (we tried using more than one layer in the Gated Attention Reader, but no improvement was observed). In step (3), for chunk $c^{m,n}_i$ we designed 5 features, including 4 statistics on $(s_{im}, \dots, s_{in})$ — maximum, minimum, average, and sum — as well as the count of matched POS patterns within the chunk, which serves as an answer prior. We use these 5 features in a state-of-the-art ranker (Ganjisaffar et al., 2011).
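A rough illustration of the POS pattern trie described above, under our own simplifying assumptions (the tiny tag patterns below are invented for the example; the real trie is built from all answer POS sequences in the SQuAD training set):

```python
# A passage span is kept as a candidate only if its POS-tag sequence
# matches a complete pattern stored in the trie of training-set answers.

class PosTrie:
    def __init__(self):
        self.children = {}
        self.is_pattern = False

    def insert(self, tags):
        node = self
        for tag in tags:
            node = node.children.setdefault(tag, PosTrie())
        node.is_pattern = True

    def contains(self, tags):
        node = self
        for tag in tags:
            if tag not in node.children:
                return False
            node = node.children[tag]
        return node.is_pattern

trie = PosTrie()
for answer_tags in [("NNP",), ("NNP", "NNP"), ("CD", "NN")]:  # "seen in training"
    trie.insert(answer_tags)

passage_tags = ["IN", "NNP", "NNP", "CD", "NN"]
matches = [(m, n) for m in range(len(passage_tags))
           for n in range(m, len(passage_tags))
           if trie.contains(tuple(passage_tags[m:n + 1]))]
print(matches)  # [(1, 1), (1, 2), (2, 2), (3, 4)]
```

Spans whose tag sequence is only a prefix of a stored pattern are rejected, which is exactly why this filter cannot recall answers with POS patterns unseen in training.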
4 DYNAMIC CHUNK READER

The dynamic chunk reader (DCR) model is presented in Figure 1. Inspired by the baseline we built, DCR is designed to be superior to the baseline for three reasons. First, each chunk has a representation constructed dynamically, instead of a set of pre-defined feature values. Second, each passage word's representation is enhanced by word-by-word attention that evaluates the relevance of the passage word to the question. Third, these components all live within a single, end-to-end model that can be trained jointly.

[Figure 1: The main components of the dynamic chunk reader model (from bottom to top): bi-GRU encoders for passage and question, a word-by-word attention bi-GRU for the passage, dynamic chunk representations transformed from pooled dynamic chunks of hidden states, question attention on every chunk representation, and final answer chunk prediction.]

DCR works in five steps. First, the encoder layer encodes the passage and the question separately, using bidirectional recurrent neural networks (RNNs). Second, the attention layer calculates the relevance of each passage word to the question. Third, the convolution layer generates unigram, bigram, and trigram representations for each word; the bigram and trigram of a word end with that word, and proper padding is applied before the first word so that the output has the same length as the input to the CNN layer. Fourth, the chunk representation layer dynamically extracts the candidate chunks from the given passage and creates a chunk representation that encodes the contextual information of each chunk. Fifth, the ranker layer scores the relevance between the representation of a chunk and the given question, and ranks all candidate chunks using a softmax layer. We describe each step below.

Encoder Layer. We use a bi-directional RNN encoder to encode $P_i$ and $Q_i$ of example $i$, and obtain a hidden state for each word position $p_{ij}$ and $q_{ik}$ (we can have separate parameters for the question and passage encoders, but a single shared encoder for both works better in our experiments). As RNN input, a word is represented by a row vector $x \in \mathbb{R}^n$; $x$ can be the concatenation of a word embedding and word features (see Fig. 1). The word vector for the $t$-th word is $x_t$. A word sequence is processed using an RNN encoder with gated recurrent units (GRU) (Cho et al., 2014), which has proved effective in RC and neural machine translation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each position $t$, the GRU computes $h_t$ from the input $x_t$ and the previous state $h_{t-1}$ as:

$r_t = \sigma(W_r x_t + U_r h_{t-1})$  (1)
$u_t = \sigma(W_u x_t + U_u h_{t-1})$  (2)
$\tilde{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1}))$  (3)
$h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t$  (4)

where $h_t$, $r_t$, and $u_t \in \mathbb{R}^d$ are the $d$-dimensional hidden state, reset gate, and update gate, respectively; $W_{\{r,u\}}, W \in \mathbb{R}^{n \times d}$ and $U_{\{r,u\}}, U \in \mathbb{R}^{d \times d}$ are the parameters of the GRU; $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise product. For a word at $t$, we use the hidden state $\overrightarrow{h}_t$ from the forward RNN as a representation of the preceding context, and $\overleftarrow{h}_t$ from a backward RNN that encodes the text in reverse, to incorporate the context after $t$. Next, $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$, the bi-directional contextual encoding of $x_t$, is formed, where $[\cdot;\cdot]$ is the concatenation operator. To distinguish hidden states from different sources, we denote the $h_j$ of the $j$-th word in $P$ and the $h_k$ of the $k$-th word in $Q$ as $h^p_j$ and $h^q_k$, respectively.
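For readers who prefer code to equations, the following numpy sketch implements one GRU step exactly as written in Eqs. (1)–(4); the toy dimensions, the column-vector convention, and the weight initialization are our choices for illustration, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d, n_in = 4, 3                      # toy hidden and input sizes
rng = np.random.default_rng(0)
init = lambda *shape: rng.uniform(-0.01, 0.01, size=shape)
W_r, W_u, W = init(d, n_in), init(d, n_in), init(d, n_in)
U_r, U_u, U = init(d, d), init(d, d), init(d, d)

def gru_step(x_t, h_prev):
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)          # Eq. (1), reset gate
    u_t = sigmoid(W_u @ x_t + U_u @ h_prev)          # Eq. (2), update gate
    h_tilde = np.tanh(W @ x_t + U @ (r_t * h_prev))  # Eq. (3), candidate state
    return (1.0 - u_t) * h_prev + u_t * h_tilde      # Eq. (4), new state

h = np.zeros(d)
for x in rng.normal(size=(5, n_in)):  # a toy 5-word sequence
    h = gru_step(x, h)
print(h.shape)  # (4,)
```

Running the same loop over the reversed sequence gives the backward states, and concatenating the two per position yields the bi-directional encoding $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ described above.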
Attention Layer. Attention mechanisms in previous RC tasks (Kadlec et al., 2016; Hermann et al., 2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b) enable question-aware passage representations. We propose a novel attention mechanism inspired by word-by-word style attention methods (Rocktäschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016). For each $p_j$, a question-attended representation $v_j$ is computed as follows (the example index $i$ is omitted for simplicity):

$\alpha_{jk} = h^p_j \cdot h^q_k$  (5)
$\beta_j = \sum_{k=1}^{|Q|} \alpha_{jk} h^q_k$  (6)
$v_j = [h^p_j; \beta_j]$  (7)

where $h^p_j$ and $h^q_k$ are hidden states from the bi-directional RNN encoders (see Figure 1). An inner product, $\alpha_{jk}$, is calculated between $h^p_j$ and every question word $h^q_k$; it indicates how well the passage word $p_j$ matches each question word $q_k$. $\beta_j$ is a weighted pooling of the $|Q|$ question hidden states, which serves as a $p_j$-aware question representation. The concatenation of $h^p_j$ and $\beta_j$ yields a passage-question joint representation, $v_j \in \mathbb{R}^{4d}$. (We also tried another word-by-word attention method, as in (Santos et al., 2016), which feeds a similar passage representation to the question side; this did not lead to improvement, due to the confusion caused by long passages in RC, so we use the proposed simplified word-by-word attention on the passage side only.) Next, we apply a second bi-GRU layer taking the $v_j$ as inputs, and obtain forward and backward representations $\overrightarrow{\gamma}_j, \overleftarrow{\gamma}_j \in \mathbb{R}^d$, and in turn their concatenation, $\gamma_j = [\overrightarrow{\gamma}_j; \overleftarrow{\gamma}_j]$.

Convolution Layer. After the attention-layer RNN, every word is encoded with the complete passage context. We would like to model a more complex representation of the words by introducing unigram, bigram, and trigram representations. This enhanced representation has two benefits: 1) each word is enriched with local context information that helps identify the boundary of the answer chunk (using previous words is a common feature in POS tagging and named entity recognition); and 2) the information the n-grams bring into the word representation can enhance the semantic match between the interior of the answer chunk and the question. Imagine a three-word candidate, where the last word's representation includes the two previous words through the convolution layer: matching the last word can then also match the semantics of the chunk's interior. Specifically, we create three representations for every word position $j$, using the n-grams ending with the hidden state $\gamma_j$:

$\tilde{\gamma}_{j,1} = \gamma_j W_{c1}$  (8)
$\tilde{\gamma}_{j,2} = [\gamma_{j-1}; \gamma_j] W_{c2}$  (9)
$\tilde{\gamma}_{j,3} = [\gamma_{j-2}; \gamma_{j-1}; \gamma_j] W_{c3}$  (10)

We used three different convolution kernels for the different n-grams.
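The attention and convolution computations above can be summarized in a few lines of numpy. This is an illustrative sketch under toy shapes of our own: `H_p` and `H_q` stand for the bi-GRU outputs $h^p_j$ and $h^q_k$, and `gamma` stands in for the second bi-GRU's outputs rather than computing them:

```python
import numpy as np

d = 3
P_len, Q_len = 6, 4
rng = np.random.default_rng(1)
H_p = rng.normal(size=(P_len, 2 * d))   # passage encodings h^p_j (width 2d)
H_q = rng.normal(size=(Q_len, 2 * d))   # question encodings h^q_k

alpha = H_p @ H_q.T                     # Eq. (5): alpha_{jk} = h^p_j . h^q_k
beta = alpha @ H_q                      # Eq. (6): beta_j = sum_k alpha_{jk} h^q_k
V = np.concatenate([H_p, beta], axis=1) # Eq. (7): v_j = [h^p_j; beta_j], width 4d
print(V.shape)                          # (6, 12)

# Pretend `gamma` is the output of the second bi-GRU over V (width 2d).
gamma = rng.normal(size=(P_len, 2 * d))
W_c1 = rng.normal(size=(2 * d, 2 * d))
W_c2 = rng.normal(size=(4 * d, 2 * d))
W_c3 = rng.normal(size=(6 * d, 2 * d))

pad = np.zeros((2, 2 * d))              # left-pad so every j has two predecessors
g = np.vstack([pad, gamma])
tilde1 = gamma @ W_c1                                             # Eq. (8)
tilde2 = np.concatenate([g[1:-1], g[2:]], axis=1) @ W_c2          # Eq. (9)
tilde3 = np.concatenate([g[:-2], g[1:-1], g[2:]], axis=1) @ W_c3  # Eq. (10)
print(tilde1.shape, tilde2.shape, tilde3.shape)  # all (6, 6)
```

Note how the zero padding realizes the "proper padding on the first word" mentioned earlier, so each n-gram level produces exactly one output per passage position.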
Chunk Representation Layer. A candidate answer chunk representation is dynamically created from the convolution layer output. We first decide the text boundary of the candidate chunk, and then form the chunk representation using all or part of the $\tilde{\gamma}_j$ outputs inside the chunk. To decide candidate chunk boundaries, we tried two ways: (1) adopt the POS-trie-based approach used in our baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2), we create up to $N$ (the max chunk length) chunks starting from each position $j$ in $P_i$. Approach (1) can generate candidates of arbitrary length, but fails to recall candidates whose POS pattern is unseen in the training set, whereas approach (2) considers all possible candidates within a window and is more flexible, but over-generates invalid candidates.

For a candidate answer chunk $c_{m,n}$ spanning positions $m$ to $n$ inclusive, we construct the chunk representation $\gamma^l_{m,n} \in \mathbb{R}^{2d}$ using every $\tilde{\gamma}_{j,l}$ within the range $[m, n]$, with a function $g(\cdot)$ and $l \in \{1, 2, 3\}$: formally, $\gamma^l_{m,n} = g(\tilde{\gamma}_{m,l}, \dots, \tilde{\gamma}_{n,l})$. Each $\tilde{\gamma}_{j,l}$ is a convolution output over concatenated forward and backward RNN hidden states from the attention layer, so the first half of $\tilde{\gamma}_{j,l}$ encodes information from the forward RNN hidden states and the second half from the backward RNN hidden states. We experimented with several pooling functions (e.g., max, average) for $g(\cdot)$, and found that, instead of pooling, the best $g(\cdot)$ is to concatenate the first half of the convolution output of the chunk's first word with the second half of the convolution output of the chunk's last word. Formally,

$\gamma^l_{m,n} = g(\tilde{\gamma}_{m,l}, \dots, \tilde{\gamma}_{n,l}) = [\overrightarrow{\tilde{\gamma}}_{m,l}; \overleftarrow{\tilde{\gamma}}_{n,l}]$  (11)

where $\overrightarrow{\tilde{\gamma}}_{m,l}$ is the half of the hidden state for the $l$-gram word representation corresponding to the forward attention-RNN output. We hypothesize that the hidden states at these two ends better represent the chunk's contexts, which is critical for this task, than the states within the chunk. This observation also agrees with (Kobayashi et al., 2016).

Ranker Layer. A score $s^l_{m,n}$ for each $l$-gram chunk representation $\gamma^l_{m,n}$, denoting the probability of that chunk being the true answer, is calculated as a dot product with the question representation, which is the concatenation of the last hidden state of the forward RNN and the first hidden state of the backward RNN. Formally, for the chunk $c^{m,n}_i$ we have

$s^l(c^{m,n}_i \mid P_i, Q_i) = \gamma^l_{m,n} \cdot [\overrightarrow{h}^{Q_i}_{|Q_i|}; \overleftarrow{h}^{Q_i}_1]$  (12)

where $s^l$ denotes the score generated from the $l$-gram representation, and $\overrightarrow{h}^{Q_i}_k$ / $\overleftarrow{h}^{Q_i}_k$ is the $k$-th hidden state output of question $Q_i$'s forward / backward RNN encoder, respectively. The final score for $c^{m,n}_i$ is then a linear combination of the three scores, followed by a softmax:

$s(c^{m,n}_i \mid P_i, Q_i) = \mathrm{softmax}(W [s^1, s^2, s^3])$  (13)

where $s^l$ is shorthand for $s^l(c^{m,n}_i \mid P_i, Q_i)$ and $W \in \mathbb{R}^3$. At runtime, the chunk with the highest probability is taken as the answer. In training, the following negative log-likelihood is minimized:

$\mathcal{L} = -\sum_{i=1}^{N} \log P(A_i \mid P_i, Q_i)$  (14)

Note that the $i$-th training instance is only used when $A_i$ is included in the corresponding candidate chunk set $C_i$, i.e., $\exists\, m, n$ such that $A_i = c^{m,n}_i$. The softmax in the final layer serves as a list-wise ranking module, similar in spirit to (Cao et al., 2007).
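A toy sketch of how Eqs. (11)–(13) fit together for a single n-gram level (the real model combines three levels with the learned weight vector $W$ of Eq. (13)); every variable name and shape below is our illustration, with `tilde` playing the role of one level's convolution outputs, forward half first and backward half second:

```python
import numpy as np

d = 3
P_len = 6
rng = np.random.default_rng(2)
tilde = rng.normal(size=(P_len, 2 * d))   # \tilde{gamma}_{j,l} for one l
q_fwd_last = rng.normal(size=d)           # forward question RNN, last state
q_bwd_first = rng.normal(size=d)          # backward question RNN, first state
q_rep = np.concatenate([q_fwd_last, q_bwd_first])

def chunk_rep(m, n):
    """Eq. (11): [fwd half of tilde_m ; bwd half of tilde_n], 0-indexed here."""
    return np.concatenate([tilde[m, :d], tilde[n, d:]])

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Eq. (12) per candidate span (here up to length 3), then the softmax of
# Eq. (13) over all candidates, collapsed to a single n-gram level.
spans = [(m, n) for m in range(P_len) for n in range(m, min(m + 3, P_len))]
scores = np.array([chunk_rep(m, n) @ q_rep for m, n in spans])
probs = softmax(scores)
best = spans[int(np.argmax(probs))]
print(best, float(probs.max()))
```

The softmax over the full candidate list is what makes this a list-wise ranker: all overlapping spans compete for the same probability mass, and training pushes that mass onto the gold span.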
5 EXPERIMENTS

Dataset. We used the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for our experiments. SQuAD came to our attention because it is a mix of factoid and non-factoid questions, is real-world (crowd-sourced) data, and is of large scale (over 100K question-answer pairs collected from 536 Wikipedia articles). Answers range from single words to long, variable-length phrases/clauses. It relaxes the assumptions of the cloze-style and quiz-style RC datasets discussed in the Problem Definition section.

Table 2: Results on the SQuAD dataset.

                      Dev              Test
  Models              EM      F1       EM      F1
  Rajpurkar 2016      39.8%   51.0%    40.4%   51.0%
  Wang 2016           59.1%   70.0%    59.5%   70.3%
  DCR w/o Conv.       62.5%   71.2%    62.5%   71.0%
  DCR                 63.4%   72.3%    -       -
  DCR Ensemble        66.3%   74.7%    -       -

Features. The input vector representation of each word $w$ to the encoder RNNs has six parts: a pre-trained 300-dimensional GloVe embedding (Pennington et al., 2014) and five features (see Figure 1): (1) a one-hot encoding (46 dimensions) of the part-of-speech (POS) tag of $w$; (2) a one-hot encoding (14 dimensions) of the named entity (NE) tag of $w$; (3) a binary value indicating whether $w$'s surface form is the same as any word in the question; (4) whether the lemma of $w$ is the same as any word in the question; and (5) whether $w$ is capitalized. Features (3) and (4) are designed to help the model align the passage text with the question. Note that some types of questions (e.g., "who", "when" questions) have answers with specific POS/NE tag patterns. For instance, "who" questions mostly have proper nouns/persons as answers, and "when" questions frequently have numbers/dates (e.g., a year) as answers. Thus, we believe the model can more easily exploit the correlation between question types and answer POS/NE patterns given the POS and NE tag features.

Implementation Details. We pre-processed the SQuAD dataset using the Stanford CoreNLP tool (stanfordnlp.github.io/CoreNLP/) (Manning et al., 2014) with its default settings to tokenize the text and obtain the POS and NE annotations. To train our model, we used stochastic gradient descent with the ADAM optimizer (Kingma & Ba, 2014), with an initial learning rate of 0.001. All GRU weights were initialized from a uniform distribution between (-0.01, 0.01). The hidden state size $d$ was set to 300 for all GRUs. The question bi-GRU shared parameters with the passage bi-GRU, while the attention-based passage bi-GRU had its own parameters. We shuffled all training examples at the beginning of each epoch and adopted a curriculum learning approach (Bengio et al., 2009), sorting training instances by length within every 10 batches, to let the model start learning from relatively easier instances before moving on to harder ones. We also applied dropout with rate 0.2 to the embedding layer of the input bi-GRU encoder, and gradient clipping when the norm of the gradients exceeded 10. We trained in mini-batches (mini-batch size 180) and applied zero-padding to the passage and question inputs in each batch. We also set the maximum passage length to 300 tokens, pruning all tokens after the 300th token in the training set to save memory and speed up training; this step reduced the training set size by about 1.6%. At test time, we use the full-length passage, so that we do not prune out potential candidates. We trained the model for at most 30 epochs, and stopped training if the accuracy did not improve for 10 epochs. For the feature-ranking-based system, we used the jforest ranker (Ganjisaffar et al., 2011) with the LambdaMART-RegressionTree algorithm, and the ranking metric was NDCG@10. For the Gated Attention Reader in the baseline system, we replicated the method and used the same configuration as in (Dhingra et al., 2016).
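The curriculum batching described above ("sorting training instances by length in every 10 batches") might be implemented along the following lines. This is our reading of that sentence, not released code; `train_step` is a hypothetical placeholder for the actual model update:

```python
import random

def curriculum_batches(examples, batch_size=180, window=10, length_key=len):
    """Shuffle each epoch, then sort by length within every `window` batches."""
    random.shuffle(examples)
    span = batch_size * window
    for start in range(0, len(examples), span):
        chunk = sorted(examples[start:start + span], key=length_key)
        for b in range(0, len(chunk), batch_size):
            yield chunk[b:b + batch_size]

toy_data = [[0] * random.randint(5, 300) for _ in range(1000)]  # fake passages
for epoch in range(2):
    for batch in curriculum_batches(toy_data, batch_size=180, window=10):
        pass  # train_step(batch): ADAM with lr 1e-3, gradient norm clipped at 10
```

Sorting within a window (rather than globally) keeps similarly-sized passages together for efficient zero-padding while still presenting shorter, easier instances earlier within each window.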
Results. Table 2 shows our main results on the SQuAD dataset. Compared to the scores reported in (Wang & Jiang, 2016), our exact match (EM) and F1 on the development set and our EM score on the test set are better, and our F1 on the test set is comparable. We also studied how each component of our model contributes to the overall performance; Table 3 shows the details as well as the results of the baseline ranker.

Table 3: Detailed system experiments on the SQuAD development set.

  Models                                EM      F1
  Chunk-and-Rank Pipeline Baseline      49.7%   64.9%
  DCR w/o Convolution                   62.5%   71.2%
  DCR w/o Word-by-Word Attention        57.6%   68.7%
  DCR w/o POS feature (1)               59.2%   68.8%
  DCR w/o NE feature (2)                60.4%   70.2%
  DCR w/o Question-word feature (3)     59.5%   69.0%
  DCR w/o Question-lemma feature (4)    61.2%   69.9%
  DCR w/o Capitalized feature (5)       61.5%   70.6%
  DCR w/o Conv. w POS-trie              62.1%   70.8%

As the first row of Table 3 shows, our baseline system improves 10% (EM) over Rajpurkar et al. (2016) (Table 2, row 1), the feature-based ranking system. However, when compared to our DCR model (Table 3, row 2), the baseline (row 1) is more than 12% (EM) behind, even though it is based on the state-of-the-art model for cloze-style RC tasks. This can be attributed to the advanced model structure and end-to-end training of DCR.

We also ran ablation tests on our DCR model. First, replacing the word-by-word attention with Attentive Reader style attention (Hermann et al., 2015) decreases the EM score by about 4.5%, showing the strength of our proposed attention mechanism. Second, we removed the input features one at a time to see the contribution of each; the results show that the POS feature (1) and the question-word feature (3) are the two most important. Finally, combining the DCR model with the proposed POS-trie constraints yields a score similar to the one obtained using the DCR model with all possible n-gram chunks. This shows that (1) our chunk representations are powerful enough to differentiate even a huge number of chunks when no constraints are applied; and (2) the proposed POS-trie reduces the search space at the cost of a small drop in performance.

Analysis. To better understand our system, we calculated the accuracy of the attention mechanism of the Gated Attention Reader used in our deep learning-based baseline. We found that it is 72% accurate, i.e., 72% of the time the word with the highest attention score is inside the correct answer span. This means that, if we could accurately detect the boundary around the word with the highest attention score to form the answer span, we could achieve an accuracy close to 72%. In addition, we checked the answer recall of our candidate chunking approach: with a window size of 10, 92% of the time the ground truth answer is included in the extracted candidate chunk set. Thus the upper bound of the exact match score of our baseline system is around 66% (92% answer recall × 72%). From the results, we see that our DCR system's exact match score is 62%, which shows that DCR is proficient at differentiating answer spans dynamically.

To further analyze the system's performance when predicting answers of different lengths, we show the exact match (EM) and F1 scores for answers with lengths up to 10 tokens in Figure 2(a). [Figure 2: (a) Variation of DCR performance with ground-truth answer length (up to 10) in the development set; the curve with diamond knots also shows the percentage of answers of each length in the development set. (b) Performance comparison for different question head words.] From the graph, we can see that, as answer length increases, both EM and F1 drop, but at different speeds; the gap between F1 and exact match also widens. However, the model still yields decent accuracy when the answer is longer than a single word. Additionally, Figure 2(b) shows that the system is better at "when" and "who" questions, but performs poorly on "why" questions. The large gap between exact match and F1 on "why" questions means that perfectly identifying the span is harder than locating the core of the answer.

Since "what", "which", and "how" questions cover a broad range of question types, we split them further based on the bigram each question starts with; Figure 3 shows the breakdown for "what" questions. [Figure 3: Development set performance comparison for different types of "what" questions (considering types with more than 20 examples in the development set).] We can see that "what" questions asking for explanations, such as "what happens" and "what happened", have lower EM and F1 scores.
In contrast, "what" questions asking for years and numbers have much higher scores, and for these questions exact match scores are close to F1 scores, which means chunking for these questions is easier for DCR.

6 RELATED WORK

Attentive Reader was the first neural model for factoid RCQA (Hermann et al., 2015). It uses bidirectional RNNs (Cho et al., 2014; Chung et al., 2014) to encode the document and query respectively, and uses the query representation to match against every token of the document. Attention Sum Reader (Kadlec et al., 2016) simplifies the model to just predicting positions of the correct answer in the document; both training speed and test accuracy improve greatly on the CNN/Daily Mail dataset. Chen et al. (2016) also simplified the Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N) were introduced along with the CBT dataset (Hill et al., 2015); they do not use RNN encoders, but embed contexts as memory and match questions with the embedded contexts. The mechanism of these models is to learn the match between the answer context and the question/query representation. In contrast, memory-enhanced neural networks like Neural Turing Machines (Graves et al., 2014) and their variants (Zhang et al., 2015; Gulcehre et al., 2016; Zaremba & Sutskever, 2015; Chandar et al., 2016; Grefenstette et al., 2015) were also potential candidates for the task, and Gulcehre et al. (2016) reported results on the bAbI task that are worse than memory networks. Similarly, sequence-to-sequence models were also used (Yu et al., 2015; Hermann et al., 2015), but they did not yield better results either.

Recently, several models have been proposed to enable more complex inference for the RC task. For instance, the gated attention model (Dhingra et al., 2016) employs a multi-layer architecture, where each layer encodes the same document, but the attention is updated from layer to layer. EpiReader (Trischler et al., 2016b) adopted a jointly trained answer extractor and reasoner, where the extractor proposes top candidates and the reasoner weighs each candidate by examining the entailment relationship between the question-answer representation and the document. An iterative alternating attention mechanism and gating strategies were proposed in (Sordoni et al., 2016) to optimize the attention over several hops. In contrast, Cui et al. (2016a;b) introduced fine-grained document attention from each question word and then aggregated those attentions over the question tokens by summation, with or without weights; this system achieved the state-of-the-art score on the CNN dataset. These variations all result in roughly 3-5% improvement over Attention Sum Reader, but none could go beyond that. Other methods include dynamic entity representations with max-pooling (Kobayashi et al., 2016), which aims to adapt entity representations to context, and Weissenborn's (2016) system, which tries to separate the entity from the context and then match the question to the context, scoring an accuracy of around 70% on the CNN dataset.

However, all of those models assume that the answers are single tokens, which limits the types of questions they can answer. Wang and Jiang (2016) proposed a match-LSTM and achieved good results on SQuAD. However, this approach predicts a chunk boundary, or whether a word is part of a chunk or not.
In contrast, our approach explicitly constructs the chunk representations, and similar chunks are compared directly to determine the correct answer boundaries.

7 CONCLUSION

In this paper we proposed a novel neural reading comprehension model for question answering. Different from previously proposed models for factoid RCQA, the proposed model, dynamic chunk reader, is not restricted to predicting a single named entity as an answer or selecting an answer from a small, pre-defined candidate list. Instead, it is capable of answering both factoid and non-factoid questions, as it learns to select answer chunks that are suitable for an input question. DCR achieves this goal with a joint deep learning model enhanced with a novel attention mechanism and five simple yet effective features. Error analysis shows that the DCR model achieves good performance, but still needs to improve on predicting longer answers, which are usually non-factoid in nature. | SJ5ROhbNx | Review | 4: Ok but not good enough - rejection | SYNOPSIS: The paper proposes a new neural network-based model for reading comprehension (reading a passage of text and answering questions based on the passage). It is similar in spirit to several other recent models, with the main exception that it is able to predict answers of different lengths, as opposed to single words/tokens/entities. The authors compare their model on the Stanford Question Answering Dataset (SQuAD), and show improvements over the baselines, while apparently lagging quite far behind the current state of the art reported on the SQuAD leaderboard.
THOUGHTS: The main novelty of the method is its ability to identify phrases of different lengths as possible answers to the question. However, both approaches considered -- using a POS pattern trie tree to filter out word sequences with POS tags matching those of answers in the training set, and brute-force enumeration of all phrases up to length N -- seem somewhat orthogonal to the idea of "learning end-to-end" an answer chunk extraction model. Furthermore, as other reviews have pointed out, it seems that the linguistic features actually contribute a lot to the final accuracy (Table 3). One could argue that these are easy to obtain using standard taggers, but it takes away even more from the idea of an "end-to-end trained" system.
The paper is generally well written, but there are several crucial parts of the model description that were really hard for me to follow. In particular, the attention mechanism seems fairly standard to me in a seq2seq sense (i.e., there is nothing architecturally novel about it, as is for instance the case with the Gated Attentive Reader). I may be missing something, but even after the clarification round I still don't understand how it is novel compared to standard attention used, for instance, in seq2seq models.
Finally, although the method is shown to outperform the baseline method reported in the original paper introducing the SQuAD dataset, it currently seems to be 12th (out of 15 systems) on the leaderboard (https://rajpurkar.github.io/SQuAD-explorer/). Of course, further training and hyperparameter optimization may improve these results.
Therefore, given the lack of model novelty (based on my understanding), and the lack of strong results (based on the leaderboard), I don't feel the paper is ready in its current form to be accepted to the conference.
Note: The GRU citation should be (Cho et al., 2014), not (Bengio et al., 2015). | 3: The reviewer is fairly confident that the evaluation is correct |
r1te3Fqel | ICLR.cc/2017/conference | 2017 | End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension | ["Yang Yu", "Wei Zhang", "Bowen Zhou", "Kazi Hasan", "Mo Yu", "Bing Xiang"] | This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achieve a 66.3% Exact match and 74.7% F1 score on the Stanford Question Answering Dataset. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACTThis paper proposes dynamic chunk reader (DCR ), an end-to-end neural readingcomprehension (RC) model that is able to extract and rank a set of answer candi-dates from a given document to answer questions. DCR is able to predict answersof variable lengths, whereas previous neural RC models primarily focused on pre-dicting single tokens or entities. DCR encodes a document and an input questionwith recurrent neural networks, and then applies a word-by-word attention mech-anism to acquire question-aware representations for the document, followed bythe generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR could achievea 66.3% Exact match and 74.7% F1 score on the Stanford Question AnsweringDataset (Rajpurkar et al., 2016).1 I NTRODUCTIONReading comprehension-based question answering (RCQA) is the task of answering a question witha chunk of text taken from related document(s). A variety of neural models have been proposed re-cently either for extracting a single entity or a single token as an answer from a given text (Hermannet al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016;Sordoni et al., 2016; Cui et al., 2016a); or for selecting the correct answer by ranking a small setof human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, an answerboundary is either easy to determine or already given.Different from the above two assumptions for RCQA, in the real-world QA scenario, people mayask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid) (see Table 1 for examples).In this regard, RCQA has the potential to complement other QA approaches that leverage structureddata (e.g., knowledge bases) for both the above question types. This is because RCQA can exploitthe textual evidences to ensure increased answer coverage, which is particularly helpful for non-factoid answers. 
However, it is also challenging for RCQA to identify answer in arbitrary positionin the passage with arbitrary length, especially for non-factoid answers which might be clauses orsentences.As a result, apart from a few exceptions (Rajpurkar et al., 2016; Wang & Jiang, 2016), this researchdirection has not been fully explored yet.Compared to the relatively easier RC task of predicting single tokens/entities1, predicting answersof arbitrary lengths and positions significantly increase the search space complexity:the number of possible candidates to consider is in the order of O(n2), wherenis the number ofpassage words. In contrast, for previous works in which answers are single tokens/entities or fromcandidate lists, the complexity is in O(n)or the size of candidate lists l(usuallyl5), respectively.To address the above complexity, Rajpurkar et al. (Rajpurkar et al., 2016) used a two-step chunk-and-rank approach that employs a rule-based algorithm to extract answer candidates from a passage,Both authors contribute equally1State-of-the-art RC models have a decent accuracy of 70% on the widely used CNN/DailyMail dataset(Hermann et al., 2015).1Under review as a conference paper at ICLR 2017Table 1: Example of questions (with answers) which can be potentially answered with RC on aWikipedia passage. The first question is factoid, asking for an entity. The second and third arenon-factoid.The United Kingdom (UK) intends to withdraw from the European Union (EU),a process commonly known as Brexit, as a result of a June 2016 referendum inwhich 51.9% voted to leave the EU. The separation process is complex, causingpolitical and economic changes for the UK and other countries. As of September2016, neither the timetable nor the terms for withdrawal have been established: inthe meantime, the UK remains a full member of the European Union. The term”Brexit” is a portmanteau of the words ”British” and ”exit”.Q1. Which country withdrew from EU in 2016?A1. United KingdomQ2. How did UK decide to leave the European Union?A2. as a result of a June 2016 referendum in which 51.9% voted to leave the EUQ3. What has not been finalized for Brexit as of September 2016?A3. neither the timetable nor the terms for withdrawalfollowed by a ranking approach with hand-crafted features to select the best answer. The rule-basedchunking approach suffered from low coverage ( 70% recall of answer chunks) that cannot beimproved during training; and candidate ranking performance depends greatly on the quality of thehand-crafted features. More recently, Wang and Jiang (Wang & Jiang, 2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer’s twoboundary indices and the other classifies each passage word into answer/not-answer. Both modelsimproved significantly over the method proposed by Rajpurkar et al. (Rajpurkar et al., 2016).Our proposed model, called dynamic chunk reader (DCR ), not only significantly differs from boththe above systems in the way that answer candidates are generated and ranked, but also sharesmerits with both works. 
First, our model uses deep networks to learn better representations forcandidate answer chunks, instead of using fixed feature representations as in (Rajpurkar et al., 2016).Second, it represents answer candidates as chunks, as in (Rajpurkar et al., 2016), instead of word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differencesamong candidates (importantly, overlapping candidates).The contributions of this paper are three-fold. (1) We propose a novel neural network model forjoint candidate answer chunking and ranking, where the candidate answer chunks are dynamicallyconstructed and ranked in an end-to-end manner. (2) we propose a new question-attention mecha-nism to enhance passage word representation, which is subsequently used to construct chunk rep-resentations. (3) We also propose several simple but effective features to strengthen the attentionmechanism, which fundamentally improves candidate ranking, with the by-product of higher exactboundary match accuracy.The experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016),which contains a variety of human-generated factoid and non-factoid questions, have shown theeffectiveness of above three contributions.Our paper is organized as follows. We formally define the RCQA problem first. Next, we describeour baseline with a neural network component. We present the end-to-end dynamic chunk readermodel next. Finally, we analyze our experimental results and discuss the related work. In appendix,we show formal equations and details of the model.2 P ROBLEM DEFINITIONTable 1 shows an example of our RC setting where the goal is to answer a question Qi, factoid (Q1)or non-factoid (Q2 and Q3), based on a supporting passage Pi, by selecting a continuous sequenceof textAiPias answer.Qi,Pi, andAiare all word sequences, where each word is drawn froma vocabulary, V. Thei-th instance in the training set is a triple in the form of (Pi;Qi;Ai), wherePi= (pi1;:::;p ijPij),Qi= (qi1;:::;q ijQij), andAi= (ai1;:::;a ijAij)(pi;qi;ai2V). Owingto the disagreement among annotators, there could be more than one correct answer for the samequestion; and the k-th answer to Qiis denoted by Aki=faki1;:::;akijAkijg. An answer candidate forthei-th training example is defined as cm;ni, a sub-sequence in Pi, that spans from position mton(1mnjPij). The ground truth answer Aicould be included in the set of all candidates2Under review as a conference paper at ICLR 2017Ci=fcm;nij8m;n2N+;subj (m;n;P i)and 1mnjPijg, wheresubj(m;n;P i)isthe constraint put on the candidate chunk for Pi, such as, “cm;nican have at most 10 tokens”, or“cm;nimust have a pre-defined POS pattern”. To evaluate a system’s performance, its top answer toa question is matched against the corresponding gold standard answer(s).Remark: Categories of RC Tasks Other simpler variants of the aforementioned RC task wereexplored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al., 2013),MovieQA (Tapaswi et al., 2015)) have multiple-choice questions with answer options. Cloze-styledatesets(Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016), usually automatically generated,have factoid “question”s created by replacing the answer in a sentence from the text with blank. Fortheanswer selection task this paper focuses on, several datasets exist, e.g. 
TREC-QA for factoidanswer extraction from multiple given passages, bAbI (Weston et al., 2014) designed for inferencepurpose, and the SQuAD dataset (Rajpurkar et al., 2016) used in this paper. To the best of ourknowledge, the SQuAD dataset is the only one for both factoid and non-factoid answer extractionwith a question distribution more close to real-world applications.3 B ASELINE : CHUNK -AND -RANK PIPELINE WITH NEURAL RCIn this section we modified a state-of-the-art RC system for cloze-style tasks for our answer extrac-tion purpose, to see how much gap we have for the two type of tasks, and to inspire our end-to-endsystem in the next section. In order to make the cloze-style RC system to make chunk-level deci-sion, we use the RC model to generate features for chunks, which are further used in a feature-basedranker like in (Rajpurkar et al., 2016). As a result, this baseline can be viewed as a deep learningbased counterpart of the system in (Rajpurkar et al., 2016). It has two main components: 1) a stand-alone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neuralRC model, which is used to score each word in a given passage to be used thereafter for generatingchunk scores.Answer Chunking To reduce the errors generated by the rule-based chunker in (Rajpurkar et al.,2016), first, we capture the part-of-speech (POS) pattern of all answer sub-sequences in the trainingdataset to form a POS pattern trie tree , and then apply the answer POS patterns to passage Pitoacquire a collection of all subsequences (chunk candidates) Ciwhose POS patterns can be matchedto the POS pattern trie . This is equivalent to putting an constraint subj(m;n;P i)to candidateanswer chunk generation process that only choose the chunk with a POS pattern seen for answersin the training data. Then the sub-sequences Ciare used as answer candidates for Pi. Note thatoverlapping chunks could be generated for a passage, and we rely on the ranker to choose the bestcandidate based on features from the cloze-style RC system. Experiments showed that for >90%of the questions on the development set, the ground truth answer is included in the candidate setconstructed in such manner.Feature Extraction and Ranking For chunk ranking, we (1) use neural RCQA model to annotateeachpijin passagePito get score sij, then (2) for every chunk cm;niin passagei, collect scores(sim;:::;s in)for all the (pim;:::;p in)contained within cm;ni, and (3) extract features on the se-quence of scores (sim;:::;s in)to characterize its scale and distribution information, which servesas the feature representation of cm;ni. In step (1) to acquire sijwe train and apply a word-levelsingle-layer Gated Attention Reader2(Dhingra et al., 2016), which has state-of-the-art performanceon CNN/DailyMail cloze-style RC task. In step (3) for chunk cm;ni, we designed 5 features, includ-ing 4 statistics on (sim;:::;s in):maximum, minimum, average and sum ; as well as the count ofmatched POS pattern within the chunk, which serves as an answer prior. We use these 5 features ina state-of-the-art ranker (Ganjisaffar et al., 2011).4 D YNAMIC CHUNK READERThe dynamic chunk reader (DCR) model is presented in Figure 1. Inspired by the baseline we built,DCR is deemed to be superior to the baseline for 3 reasons. First, each chunk has a representationconstructed dynamically, instead of having a set of pre-defined feature values. 
Second, each passage2We tried using more than one layers in Gated Attention Reader, but no improvement was observed.3Under review as a conference paper at ICLR 2017Figure 1: The main components in dynamic chunk reader model (from bottom to top) are bi-GRUencoders for passage and question, a word-by-word attention bi-GRU for passage, dynamic chunkrepresentations that are transformed from pooled dynamic chunks of hidden states, the questionattention on every chunk representation and final answer chunk prediction.word’s representation is enhanced by word-by-word attention that evaluates the relevance of thepassage word to the question. Third, these components are all within a single, end-to-end model thatcan be trained in a joint manner.DCR works in four steps. First, the encoder layer encodes passage and question separately, by usingbidirectional recurrent neural networks (RNN).Second, the attention layer calculates the relevance of each passage word to the question.Third, the convolution layer generates unigram, bigram and trigram representation for each word.bigram and trigram of a word ends with the same word, and proper padding is applied on the firstword to make sure the output is the same length as input to CNN layer.Fourth, the chunk representation layer dynamically extracts the candidate chunks from the givenpassage, and create chunk representation that encodes the contextual information of each chunk.Fifth, the ranker layer scores the relevance between the representations of a chunk and the givenquestion, and ranks all candidate chunks using a softmax layer.We describe each step below.Encoder Layer We use bi-directional RNN encoder to encode PiandQiof example i, and gethidden state for each word position pijandqik.3As RNN input, a word is represented by a rowvectorx2Rn.xcan be the concatenation of word embedding and word features (see Fig. 1). Theword vector for the t-th word isxt. A word sequence is processed using an RNN encoder with gatedrecurrent units (GRU) (Cho et al., 2014), which was proved to be effective in RC and neural machinetranslation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each positiont, GRU computes htwith inputxtand previous state ht1, as:3We can have separated parameters for question and passage encoders but a single shared encoder for bothworks better in the experiments.4Under review as a conference paper at ICLR 2017rt=(Wrxt+Urht1) (1)ut=(Wuxt+Uuht1) (2)ht=tanh(Wxt+U(rtht1)) (3)ht= (1ut)ht1+utht (4)whereht,rt, andut2Rdare d-dimensional hidden state, reset gate, and update gate, respectively;Wfr;ug,W2RndandUfr;ug,U2Rddare the parameters of the GRU; is the sigmoidfunction, anddenotes element-wise production. For a word at t, we use the hidden state !htfromthe forward RNN as a representation of the preceding context, and the htfrom a backward RNNthat encodes text reversely, to incorporate the context after t. Next,ht= [ !ht; ht], the bi-directionalcontextual encoding of xt, is formed. [;]is the concatenation operator. To distinguish hidden statesfrom different sources, we denote the hjofj-th word inPand thehkofk-th word inQashpjandhqkrespectively.Attention Layer Attention mechanism in previous RC tasks (Kadlec et al., 2016; Hermann et al.,2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b) enables question-aware passagerepresentations. We propose a novel attention mechanism inspired by word-by-word style attentionmethods (Rockt ̈aschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016). 
For each pj, a question-attended representation vjis computed as follows (example index iis omitted for simplicity):jk=hpjhqk; (5)j=jQjXk=1jkhqk(6)vj= [hpj;j] (7)wherehpjandhqkare hidden states from the bi-directional RNN encoders (see Figure 1). An innerproduct,jk, is calculated between hpjand every question word hqk. It indicates how well thepassage word pjmatches with every question word qk.jis a weighted pooling of jQjquestionhidden states, which serves as a pj-aware question representation. The concatenation of hpjandjleads to a passage-question joint representation, vj2R4d.4Next, we apply a second bi-GRU layertaking thevjs as inputs, and obtain forward and backward representations !jand j2Rd, and inturn their concatenation, j= [ !j; j].Convolution Layer Every word is encoded with complete passage context through attention layerRNN. We would like to model more complex representation of the words, by introducing unigram,bigram and trigram representations. There are two benefits for this enhanced representation: 1)each word could be enhanced with local context information to help identify the boundary of theanswer chunk. Using previous words has been a common feature used in POS tagging and Namedentity recognition; and 2) The information brought in by the ngram into the word representationcould enhance the semantic match between the answer chunk internal and the question. Imaginescenario of a three word candidate, where the last word representation includes the two previouswords through the convolution layer. Matching to the last word could also lead to the match tothe semantics of the internal of the chunk. Specifically, we create for every word position jthreerepresentations, by using ngrams ending with the hidden state j:~j1=jWc1 (8)~j2= [j1;j]Wc2 (9)~j3= [j2;j1;j]Wc3 (10)4We tried another word-by-word attention methods as in (Santos et al., 2016), which has similar passagerepresentation input to question side. However, this does not lead to improvement due to the confusion causedby long passages in RC. Consequently, we used the proposed simplified version of word-by-word attention onpassage side only.5Under review as a conference paper at ICLR 2017The details shown in equations above. We used three different convolution kernels for differentn-grams.Chunk Representation Layer A candidate answer chunk representation is dynamically createdgiven convolution layer output. We first decide the text boundary for the candidate chunk, and thenform a chunk representation using all or part of those joutputs inside the chunk. To decide acandidate chunk (boundary): we tried two ways: (1) adopt the POS trie -based approach used inour baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2),we create up to N(max chunk length) chunks starting from any position jinPj. Approach (1) cangenerate candidates with arbitrary lengths, but fails to recall candidates whose POS pattern is unseenin training set; whereas approach (2) considers all possible candidates within a window and is moreflexible, but over-generates invalid candidates.For a candidate answer chunk cm;nspanning from position mtoninclusively, we construct chunkrepresentation lm;n2R2dusing every ~jlwithin range [m;n], with a function g(), andl2f1;2;3g. Formally,lm;n=g(~ml;:::; ~nl)Each ~jlis a convolution output over concatenated forward and backward RNN hidden states fromattention layer. 
So the first half in ~jlencodes information in forward RNN hidden states and thesecond half encodes information in backward RNN hidden states. We experimented with severalpooling functions (e.g., max, average) for g(), and found out that, instead of pooling, the best g()function is to concatenate the first half of convolution output of the chunk’s first word and the secondhalf of convolution output of the chunk’s last word. Formally,lm;n=g(~ml;:::; ~nl) = [!~ml; ~nl] (11)where!~mlis half of the hidden state for l-gram word representation corresponding to forward at-tention RNN output. We hypothesize that the hidden states at that two ends can better represent thechunk’s contexts, which is critical for this task, than the states within the chunk. This observationalso agrees with (Kobayashi et al., 2016).Ranker Layer A scoreslm;nfor eachl-gram chunk representation lm;ndenoting the probabilityof that chunk to be the true answer is calculated by dot product with question representation. Thequestion representation is the concatenation of the last hidden state in forward RNN and the firsthidden state in backward RNN. Formally for the chunk cm;niwe havesl(cm;nijPi;Qi) =lm;n[!hQijQij; hQi1] (12)wheresldenotes the score generated from l-gram representation.!hQikor hQikis thek-th hidden stateoutput from question Qi’s forward and backward RNN encoder, respectively.After that, the final score for cm;niis evaluated as the linear combination of three scores, followedby a softmax:s(cm;nijPi;Qi) =softmax (W[s1;s2;s3]) (13)whereslis the shorthand notation for sl(cm;nijPi;Qi);W2R3. In runtime, the chunk with thehighest probability is taken as the answer. In training, the following negative log likelihood isminimized:L=NXi=1logP(AijPi;Qi) (14)Note that the i-th training instance is only used when Aiis included in the corresponding candidatechunk setCi, i.e.9m;nAi=cm;ni. The softmax in the final layer serves as the list-wise rankingmodule similar in spirit to (Cao et al., 2007).5 E XPERIMENTSDataset We used the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016)for the experiment. SQuAD came into our sight because it is a mix of factoid and non-factoid6Under review as a conference paper at ICLR 2017Table 2: Results on the SQuAD dataset.Dev TestModels EM F1 EM F1Rajpurkar 2016 39.8% 51.0% 40.4% 51.0%Wang 2016 59.1% 70.0% 59.5% 70.3%DCR w/o Conv. 62.5% 71.2% 62.5% 71.0%DCR 63.4% 72.3% - -DCR Ensemble 66.3% 74.7% - -questions, a real-world data (crowd-sourced), and of large scale (over 100K question-answer pairscollected from 536 Wikipedia articles). Answers range from single words to long, variable-lengthphrase/clauses. It is a relaxation of assumptions by the cloze-style and quiz-style RC datasets in theProblem Definition section.Features The input vector representation of each word wto encoder RNNs has six parts including apre-trained 300-dimensional GloVe embedding (Pennington et al., 2014) and five features (see Fig-ure 1): (1) a one-hot encoding (46 dimensions) for the part-of-speech (POS) tag of w; (2) a one-hotencoding (14 dimensions) for named entity (NE) tag of w; (3) a binary value indicating whether w’ssurface form is the same to any word in the quesiton; (4) if the lemma form of wis the same to anyword in the question; and (5) if wis caplitalized. Feature (3) and (4) are designed to help the modelalign the passage text with question. Note that some types of questions (e.g., “who”, “when” ques-tions) have answers that have a specific POS/NE tag pattern. 
For instance, "who" questions mostly have proper nouns/persons as answers, and "when" questions frequently have numbers/dates (e.g., a year) as answers. We therefore believe the model can more easily exploit the correlation between question types and answer POS/NE patterns when given the POS and NE tag features.

Implementation Details. We pre-processed the SQuAD dataset using the Stanford CoreNLP tool⁵ (Manning et al., 2014) with its default settings to tokenize the text and obtain the POS and NE annotations. To train our model, we used stochastic gradient descent with the ADAM optimizer (Kingma & Ba, 2014), with an initial learning rate of 0.001. All GRU weights were initialized from a uniform distribution between (-0.01, 0.01). The hidden state size, $d$, was set to 300 for all GRUs. The question bi-GRU shared parameters with the passage bi-GRU, while the attention-based passage bi-GRU had its own parameters. We shuffled all training examples at the beginning of each epoch and adopted a curriculum learning approach (Bengio et al., 2009), sorting training instances by length within every 10 batches, so that the model starts learning from relatively easy instances before moving on to harder ones. We also applied dropout of rate 0.2 to the embedding layer of the input bi-GRU encoder, and gradient clipping when the norm of the gradients exceeded 10. We trained in mini-batch style (mini-batch size 180) and applied zero-padding to the passage and question inputs in each batch. We also set the maximum passage length to 300 tokens, and pruned all tokens after the 300th token in the training set to save memory and speed up training; this step reduced the training set size by about 1.6%. At test time, we use the full-length passages, so that we do not prune out potential candidates. We trained the model for at most 30 epochs, and stopped training if the accuracy did not improve for 10 epochs.

For the feature-based ranking system, we used the jforest ranker (Ganjisaffar et al., 2011) with the LambdaMART-RegressionTree algorithm, with NDCG@10 as the ranking metric. For the Gated Attention Reader in the baseline system, we replicated the method and used the same configuration as in (Dhingra et al., 2016).

⁵ stanfordnlp.github.io/CoreNLP/

Results. Table 2 shows our main results on the SQuAD dataset. Compared to the scores reported in (Wang & Jiang, 2016), our exact match (EM) and F1 on the development set and our EM score on the test set are better, and our F1 on the test set is comparable. We also studied how each component of our model contributes to the overall performance. Table 3 shows the details as well as the results of the baseline ranker. As the first row of Table 3 shows, our baseline system improves by 10% (EM) over Rajpurkar et al. (2016) (Table 2, row 1), the feature-based ranking system. However, compared to our DCR model (Table 3, row 2), the baseline (row 1) is more than 12% (EM) behind, even though it is based on the state-of-the-art model for cloze-style RC tasks. This can be attributed to the more advanced model structure and the end-to-end training of DCR.

Table 3: Detailed system experiments on the SQuAD development set.

  Models                                EM      F1
  Chunk-and-Rank Pipeline Baseline      49.7%   64.9%
  DCR w/o Convolution                   62.5%   71.2%
  DCR w/o Word-by-Word Attention        57.6%   68.7%
  DCR w/o POS feature (1)               59.2%   68.8%
  DCR w/o NE feature (2)                60.4%   70.2%
  DCR w/o Question-word feature (3)     59.5%   69.0%
  DCR w/o Question-lemma feature (4)    61.2%   69.9%
  DCR w/o Capitalized feature (5)       61.5%   70.6%
  DCR w/o Conv. w POS-trie              62.1%   70.8%
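As a recap of the model pieces being ablated in Table 3, the following sketch ties together the chunk enumeration and scoring of Eqs. (11)-(13). It is a simplified reconstruction under our own assumptions (boundary approach (2), enumerating all spans up to max_len), not the released implementation.

```python
import numpy as np

def score_chunks(g_tilde, q_fwd_last, q_bwd_first, W, max_len=10):
    """g_tilde: list of three T x 2d arrays (uni/bi/tri-gram outputs).
    A chunk (m, n) is represented by the forward half at its first word and
    the backward half at its last word (Eq. 11), dot-producted with the
    question vector (Eq. 12); the three l-gram scores are mixed by W in R^3
    before a softmax over all candidate chunks (Eq. 13)."""
    T, d2 = g_tilde[0].shape
    d = d2 // 2
    q = np.concatenate([q_fwd_last, q_bwd_first])   # question representation in R^{2d}
    spans, scores = [], []
    for m in range(T):
        for n in range(m, min(m + max_len, T)):
            s = [np.concatenate([g[m, :d], g[n, d:]]) @ q for g in g_tilde]
            spans.append((m, n))
            scores.append(np.asarray(s) @ W)
    scores = np.asarray(scores)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                            # list-wise softmax over chunks
    return spans[int(np.argmax(probs))], probs
```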
Figure 2: (a) Variation of DCR performance with ground-truth answer length (up to 10) on the development set; the curve with diamond markers also shows the percentage of answers of each length in the development set. (b) Performance comparison across question head words.

We also ran ablation tests on our DCR model. First, replacing the word-by-word attention with Attentive Reader style attention (Hermann et al., 2015) decreases the EM score by about 4.5%, showing the strength of our proposed attention mechanism.

Second, we removed the input features one at a time to measure the contribution of each. The results show that the POS feature (1) and the question-word feature (3) are the two most important features.

Finally, combining the DCR model with the proposed POS-trie constraints yields a score similar to the one obtained using the DCR model with all possible n-gram chunks. This shows that (1) our chunk representations are powerful enough to differentiate even a huge number of chunks when no constraints are applied; and (2) the proposed POS-trie reduces the search space at the cost of a small drop in performance.

Analysis. To better understand our system, we calculated the accuracy of the attention mechanism of the gated attention reader used in our deep-learning-based baseline. We found that it is 72% accurate, i.e., 72% of the time a word with the highest attention score is inside the correct answer span. This means that, if we could accurately detect the boundary around the word with the highest attention score to form the answer span, we could achieve an accuracy close to 72%. In addition, we checked the answer recall of our candidate chunking approach: with a window size of 10, the ground-truth answer is included in the extracted candidate chunk set 92% of the time. The upper bound of the exact match score of our baseline system is therefore around 66% (92% answer recall x 72% attention accuracy). Our DCR system's exact match score is 62%, which shows that DCR is proficient at differentiating answer spans dynamically.

To further analyze the system's performance when predicting answers of different lengths, we show the exact match (EM) and F1 scores for answers of up to 10 tokens in Figure 2(a). With increasing answer length, both EM and F1 drop, but at different speeds, and the gap between F1 and exact match widens as answer length increases. However, the model still yields decent accuracy when the answer is longer than a single word. Additionally, Figure 2(b) shows that the system is better at "when" and "who" questions, but performs poorly on "why" questions. The large gap between exact match and F1 on "why" questions means that perfectly identifying the span is harder than locating the core of the answer.

Since "what", "which", and "how" questions cover a broad range of question types, we split them further by the bigram each question starts with; Figure 3 shows the breakdown for "what" questions.

Figure 3: Development set performance comparison across different types of "what" questions (considering the types with more than 20 examples in the development set).

We can see that "what" questions asking for explanations, such as "what happens" and "what happened", have lower EM and F1 scores.
In contrast, "what" questions asking for years and numbers have much higher scores, and for these questions the exact match scores are close to the F1 scores, which means chunking for these questions is easier for DCR.

6 RELATED WORK

The Attentive Reader was the first neural model for factoid RCQA (Hermann et al., 2015). It uses bidirectional RNNs (Cho et al., 2014; Chung et al., 2014) to encode the document and the query respectively, and uses the query representation to match against every token of the document. The Attention Sum Reader (Kadlec et al., 2016) simplifies the model to just predicting the positions of the correct answer in the document, and both training speed and test accuracy improve greatly on the CNN/Daily Mail dataset. (Chen et al., 2016) also simplified the Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N) were introduced along with the CBT dataset (Hill et al., 2015); they do not use RNN encoders, but embed contexts as memory and match questions against the embedded contexts. The mechanism of these models is to learn the match between the answer context and the question/query representation. In contrast, memory-enhanced neural networks like Neural Turing Machines (Graves et al., 2014) and their variants (Zhang et al., 2015; Gulcehre et al., 2016; Zaremba & Sutskever, 2015; Chandar et al., 2016; Grefenstette et al., 2015) were also potential candidates for the task, and Gulcehre et al. (2016) reported results on the bAbI task that are worse than memory networks. Similarly, sequence-to-sequence models were also used (Yu et al., 2015; Hermann et al., 2015), but they did not yield better results either.

Recently, several models have been proposed to enable more complex inference for the RC task. For instance, the gated attention model (Dhingra et al., 2016) employs a multi-layer architecture, where each layer encodes the same document, but the attention is updated from layer to layer. EpiReader (Trischler et al., 2016b) adopted a jointly trained answer extractor and reasoner, where the extractor proposes top candidates and the reasoner weighs each candidate by examining the entailment relationship between the question-answer representation and the document. An iterative alternating attention mechanism with gating strategies was proposed in (Sordoni et al., 2016) to optimize the attention over several hops. In contrast, Cui et al. (2016a;b) introduced fine-grained document attention from each question word and then aggregated those attentions over the question tokens by summation, with or without weights; this system achieved the state-of-the-art score on the CNN dataset. These variations all result in roughly 3-5% improvement over the Attention Sum Reader, but none of them could go higher than that. Other methods include dynamic entity representation with max-pooling (Kobayashi et al., 2016), which aims to change the entity representation with context, and Weissenborn's (2016) system, which tries to separate the entity from the context and then match the question to the context, scoring an accuracy of around 70% on the CNN dataset.

However, all of these models assume that answers are single tokens, which limits the types of questions they can answer. Wang and Jiang (2016) proposed a match-LSTM and achieved good results on SQuAD. However, their approach predicts a chunk boundary, or whether a word is part of a chunk or not.
In contrast, our approach explicitly constructs chunk representations, and similar chunks are compared directly to determine the correct answer boundaries.

7 CONCLUSION

In this paper we proposed a novel neural reading comprehension model for question answering. Unlike previously proposed models for factoid RCQA, the proposed model, the Dynamic Chunk Reader, is not restricted to predicting a single named entity as an answer or selecting an answer from a small, pre-defined candidate list. Instead, it is capable of answering both factoid and non-factoid questions, as it learns to select answer chunks that suit an input question. DCR achieves this goal with a joint deep learning model enhanced with a novel attention mechanism and five simple yet effective features. Error analysis shows that the DCR model achieves good performance, but still needs to improve at predicting longer answers, which are usually non-factoid in nature. | SkoowPWEe | Review | 5: Marginally below acceptance threshold | The paper proposes an end-to-end machine learning model, the Dynamic Chunk Reader, for the machine reading comprehension task. Compared to earlier systems, the proposed model is able to extract and rank a set of answer candidates from a given document.
Many recent models focus on building good question answering systems by extracting phrases from a given article. Two aspects seem unique to this work:
1. The use of a convolutional model, and
2. Dynamic chunking
Convolutional networks are often used only for modeling character-based word embeddings, so I am curious about their effectiveness at representing phrases. I therefore wish there were more analysis of how effective the convolution layer is, since the authors do not compare it to alternative approaches such as an LSTM. Such comparisons are important: the authors use unigram, bigram and trigram information in the convolutional network, and it is not clear to me whether trigram information would still be needed in an LSTM model.
The dynamic chunking is a good idea, and a very similar idea is proposed in some recent papers, such as [Kenton et al., 16], which also target the same dataset. However, I would like to see more analysis of the dynamic chunking. Why is this a good approach for representing answer chunks? Given that the chunk representation is constructed from the first and last word representations generated by a convolutional network, I am not sure about its ability to capture long answer phrases.
The authors do not use character-based embeddings, but instead rely on features from previously trained NLP models. It would be interesting if the authors could show the advantages and disadvantages of using linguistic features compared to character embeddings.
In short, there are several good ideas proposed in the paper, but the lack of proper analysis makes it difficult to judge how important the proposed techniques are.
| 3: The reviewer is fairly confident that the evaluation is correct |
B1Igu2ogg | ICLR.cc/2017/conference | 2017 | Efficient Vector Representation for Documents through Corruption | ["Minmin Chen"] | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.
| ["Natural language processing", "Deep learning", "Semi-Supervised Learning"] | ABSTRACTWe present an efficient document representation learning framework, DocumentVector through Corruption (Doc2VecC). Doc2VecC represents each document asa simple average of word embeddings. It ensures a representation generated assuch captures the semantic meanings of the document during learning. A cor-ruption model is included, which introduces a data-dependent regularization thatfavors informative or rare words while forcing the embeddings of common andnon-discriminative ones to be close to zero. Doc2VecC produces significantlybetter word embeddings than Word2Vec. We compare Doc2VecC with severalstate-of-the-art document representation learning algorithms. The simple modelarchitecture introduced by Doc2VecC matches or out-performs the state-of-the-artin generating high-quality document representations for sentiment analysis, doc-ument classification as well as semantic relatedness tasks. The simplicity of themodel enables training on billions of words per hour on a single machine. Atthe same time, the model is very efficient in generating representations of unseendocuments at test time.1 I NTRODUCTIONText understanding starts with the challenge of finding machine-understandable representation thatcaptures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably themost commonly used document representations. Despite its simplicity, BoW works surprisinglywell for many tasks (Wang & Manning, 2012). However, by treating words and phrases as uniqueand discrete symbols, BoW often fails to capture the similarity between words or phrases and alsosuffers from sparsity and high dimensionality.Recent works on using neural networks to learn distributed vector representations of words havegained great popularity. The well celebrated Word2Vec (Mikolov et al., 2013a), by learning topredict the target word using its neighboring words, maps words of similar meanings to nearbypoints in the continuous vector space. The surprisingly simple model has succeeded in generatinghigh-quality word embeddings for tasks such as language modeling, text understanding and machinetranslation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. Itcan be trained on billions of words per hour on a single machine.Paragraph Vectors (Le & Mikolov, 2014) generalize the idea to learn vector representation for docu-ments. A target word is predicted by the word embeddings of its neighbors in together with a uniquedocument vector learned for each document. It outperforms established document representations,such as BoW and Latent Dirichlet Allocation (Blei et al., 2003), on various text understandingtasks (Dai et al., 2015). However, two caveats come with this approach: 1) the number of parame-ters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensiveto generate vector representations for unseen documents at test time.We propose an efficient model architecture, referred to as Document Vector through Corruption(Doc2VecC), to learn vector representations for documents. It is motivated by the observation thatlinear operations on the word embeddings learned by Word2Vec can sustain substantial amountof syntactic and semantic meanings of a phrase or a sentence (Mikolov et al., 2013b). 
For example, vec("Russia") + vec("river") is close to vec("Volga River") (Mikolov & Dean, 2013), and vec("king") - vec("man") + vec("woman") is close to vec("queen") (Mikolov et al., 2013b). In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches, which post-process learned word embeddings to form a document representation (Socher et al., 2013; Mesnil et al., 2014), Doc2VecC enforces that a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.

Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representations generated by Doc2VecC match or beat the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.

2 RELATED WORKS AND NOTATIONS

Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants (Salton & Buckley, 1988), language model based methods (Croft & Lafferty, 2013; Mikolov et al., 2010; Kim et al., 2015), topic models (Deerwester et al., 1990; Blei et al., 2003), and Denoising Autoencoders and their variants (Vincent et al., 2008; Chen et al., 2012), to distributed vector representations (Mesnil et al., 2014; Le & Mikolov, 2014; Kiros et al., 2015). Another prominent line of work learns task-specific document representations with deep neural networks, such as CNN-based (Zhang & LeCun, 2015) or LSTM-based approaches (Tai et al., 2015; Dai & Le, 2015).

In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as the Continuous Bag-of-Words (CBoW) and Skipgram models (Mikolov et al., 2013a). In this work, we focus on CBoW; extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:

$\mathcal{D} = \{D_1, \ldots, D_n\}$: a training corpus of size $n$, in which each document $D_i$ contains a variable-length sequence of words $w_i^1, \ldots, w_i^{T_i}$;
$V$: the vocabulary used in the training corpus, of size $v$;
$x \in \mathbb{R}^{v \times 1}$: the BoW of a document, where $x_j = 1$ iff word $j$ appears in the document;
$c_t \in \mathbb{R}^{v \times 1}$: the BoW of the local context $w^{t-k}, \ldots, w^{t-1}, w^{t+1}, \ldots, w^{t+k}$ at the target position $t$; $c_t^j = 1$ iff word $j$ appears within the sliding window around the target;
$U \in \mathbb{R}^{h \times v}$: the projection matrix from the input space to a hidden space of size $h$. We use $u_w$ to denote the column of $U$ for word $w$, i.e., the "input" vector of word $w$;
$V^\top \in \mathbb{R}^{v \times h}$: the projection matrix from the hidden space to the output. Similarly, we use $v_w$ to denote the column of $V$ for word $w$, i.e., the "output" vector of word $w$.
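A minimal sketch of these bag-of-words definitions, assuming tokens is a list of words and vocab a word-to-index map:

```python
import numpy as np

def bow(tokens, vocab):
    """Binary bag-of-words x in R^v; vocab maps word -> index."""
    x = np.zeros(len(vocab))
    for w in tokens:
        if w in vocab:
            x[vocab[w]] = 1.0
    return x

def context_bow(tokens, t, k, vocab):
    """BoW c_t of the local context w_{t-k},...,w_{t-1},w_{t+1},...,w_{t+k}."""
    window = tokens[max(0, t - k):t] + tokens[t + 1:t + k + 1]
    return bow(window, vocab)
```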
Word2Vec. Word2Vec proposed a neural network architecture with an input layer, a projection layer parameterized by the matrix $U$, and an output layer parameterized by $V^\top$. It defines the probability of observing the target word $w_t$ in a document $D$ given its local context $c_t$ as

$P(w_t \mid c_t) = \dfrac{\exp(v_{w_t}^\top U c_t)}{\sum_{w' \in V} \exp(v_{w'}^\top U c_t)}$

The word vectors are then learned to maximize the log likelihood of observing the target word at each position of the document. Various techniques (Mitchell & Lapata, 2010; Zanzotto et al., 2010; Yessenalina & Cardie, 2011; Grefenstette et al., 2013; Socher et al., 2013; Kusner et al., 2015) have been studied to generate vector representations of documents from word embeddings, among which the simplest approach is a weighted average of word embeddings. Our method likewise forms document representations by averaging the word embeddings of all the words in the document. Differently, as our model encodes the compositionality of words in the learned word embeddings, no heuristic weighting is required at test time.

Paragraph Vectors. Paragraph Vectors, on the other hand, explicitly learns a document vector along with the word embeddings. It introduces another projection matrix $D \in \mathbb{R}^{h \times n}$; each column of $D$ acts as a memory of the global topic of the corresponding document. It then defines the probability of observing the target word $w_t$ in a document $D$ given its local context $c_t$ as

$P(w_t \mid c_t, d) = \dfrac{\exp(v_{w_t}^\top (U c_t + d))}{\sum_{w' \in V} \exp(v_{w'}^\top (U c_t + d))}$

where $d \in D$ is the vector representation of the document. As we can see from this formula, the complexity of Paragraph Vectors grows not only with the size of the vocabulary, but also with the size of the training corpus. While we can reasonably limit the size of a vocabulary to be within a million for most datasets, the size of a training corpus can easily go to billions. What is more concerning is that, in order to produce vector representations for unseen documents, we need to perform an expensive inference, appending more columns to $D$ and running gradient descent on $D$ while fixing the other parameters of the learned model.

3 METHOD

Several works (Mikolov & Dean, 2013; Mikolov et al., 2013b) showcased that the syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. This prompts us to explore the option of simply representing a document as an average of word embeddings. Figure 1 illustrates the new model architecture.

Figure 1: A new framework for learning document vectors.

Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer and an output layer to predict the target word, "ceremony" in this example. The embeddings of neighboring words ("opening", "for", "the") provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document ("performance" at position $p$, "praised" at position $q$, and "brazil" at position $r$).

Huang et al. (2012) also proposed using an average of word embeddings to represent the global context of a document.
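For concreteness, the two scoring functions above can be sketched as follows. This illustrates the math only, not either system's implementation; in practice both models train with hierarchical softmax or negative sampling rather than the full softmax shown here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cbow_probs(U, V_out, c):
    """Word2Vec-CBoW: P(w | c) over the vocabulary, scores v_w^T (U c).
    U: h x v input projection; V_out: v x h matrix whose rows are the
    output vectors v_w; c: the v-dim context BoW."""
    return softmax(V_out @ (U @ c))

def pv_dm_probs(U, V_out, c, d):
    """Paragraph Vectors adds the per-document vector d (in R^h) to the
    hidden state before scoring, i.e., one extra column of D per document."""
    return softmax(V_out @ (U @ c + d))
```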
Different from Huang et al. (2012), we choose to corrupt the original document by randomly removing a significant portion of its words, and represent the document using only the embeddings of the words that remain. This corruption mechanism offers a great speedup during training, as it significantly reduces the number of parameters to update in back propagation. At the same time, as we detail in the next section, it introduces a special form of regularization that brings a notable performance improvement.

Here we describe the stochastic process used to generate a global context at each update. The global context, which we denote as $\tilde{x}$, is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document $x$ with probability $q$. To make the corruption unbiased, we set the uncorrupted dimensions to $1/(1-q)$ times their original value. Formally,

$\tilde{x}_d = \begin{cases} 0, & \text{with probability } q \\ \frac{x_d}{1-q}, & \text{otherwise} \end{cases}$   (1)

Doc2VecC then defines the probability of observing a target word $w_t$ given its local context $c_t$ as well as the global context $\tilde{x}$ as

$P(w_t \mid c_t, \tilde{x}) = \dfrac{\exp\big(v_{w_t}^\top (\overbrace{U c_t}^{\text{local context}} + \overbrace{\tfrac{1}{T} U \tilde{x}}^{\text{global context}})\big)}{\sum_{w' \in V} \exp\big(v_{w'}^\top (U c_t + \tfrac{1}{T} U \tilde{x})\big)}$   (2)

Here $T$ is the length of the document. Exactly computing this probability is impractical; instead we approximate it with negative sampling (Mikolov et al., 2013a):

$f(w, c, \tilde{x}) \triangleq \log P(w_t \mid c_t, \tilde{x}) \approx \log \sigma\big(v_w^\top (U c + \tfrac{1}{T} U \tilde{x})\big) + \sum_{w' \sim P_v} \log \sigma\big(-v_{w'}^\top (U c + \tfrac{1}{T} U \tilde{x})\big)$   (3)

where $P_v$ stands for a uniform distribution over the terms in the vocabulary. The two projection matrices $U$ and $V$ are then learned to minimize the loss

$\ell = -\sum_{i=1}^{n} \sum_{t=1}^{T_i} f(w_t^i, c_t^i, \tilde{x}_t^i)$   (4)

Given the learned projection matrix $U$, we then represent each document simply as the average of the embeddings of the words in the document,

$d = \frac{1}{T} \sum_{w \in D} u_w.$   (5)

We elaborate next on why we corrupt the original document with the corruption model in Eq. (1) during learning, and how this enables us to simply use the average of the word embeddings as the vector representation of a document at test time.

3.1 CORRUPTION AS DATA-DEPENDENT REGULARIZATION

We approximate the log likelihood $f(w, c, \tilde{x})$ of each instance in Eq. (4) by its Taylor expansion with respect to $\tilde{x}$ up to second order (Van Der Maaten et al., 2013; Wager et al., 2013; Chen et al., 2014). Concretely, we choose to expand at the mean of the corruption, $\bar{x} = E_{p(\tilde{x}|x)}[\tilde{x}]$:

$f(w, c, \tilde{x}) \approx f(w, c, \bar{x}) + (\tilde{x} - \bar{x})^\top \nabla_{\tilde{x}} f + \tfrac{1}{2} (\tilde{x} - \bar{x})^\top \nabla^2_{\tilde{x}} f \, (\tilde{x} - \bar{x})$

where $\nabla_{\tilde{x}} f$ and $\nabla^2_{\tilde{x}} f$ are the first-order (i.e., gradient) and second-order (i.e., Hessian) derivatives of the log likelihood with respect to $\tilde{x}$. Expanding at the mean $\bar{x}$ is crucial, as shown in the following steps. Assume that for each instance we sample the global context $\tilde{x}$ infinitely many times, and thus compute the expected log likelihood with respect to the corrupted $\tilde{x}$:

$E_{p(\tilde{x}|x)}[f(w, c, \tilde{x})] \approx f(w, c, \bar{x}) + \tfrac{1}{2} \mathrm{tr}\big(E[(\tilde{x} - \bar{x})(\tilde{x} - \bar{x})^\top] \, \nabla^2_{\tilde{x}} f\big)$

The linear term disappears, as $E_{p(\tilde{x}|x)}[\tilde{x} - \bar{x}] = 0$. We substitute $x$ for the mean $\bar{x}$ of the corrupting distribution (the corruption is unbiased) and write $\Sigma_x = E[(\tilde{x} - \bar{x})(\tilde{x} - \bar{x})^\top]$ for the variance, obtaining

$E_{p(\tilde{x}|x)}[f(w, c, \tilde{x})] \approx f(w, c, x) + \tfrac{1}{2} \mathrm{tr}\big(\Sigma_x \nabla^2_{\tilde{x}} f\big)$   (6)

As each word in a document is corrupted independently of the others, the variance matrix $\Sigma_x$ simplifies to a diagonal matrix whose $j$-th element equals $\frac{q}{1-q} x_j^2$.
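The corruption of Eq. (1) is easy to state in code; a minimal sketch, with the choice of random generator as our only assumption:

```python
import numpy as np

def corrupt(x, q, rng=None):
    """Unbiased mask-out corruption of Eq. (1): each dimension is zeroed
    with probability q; surviving dimensions are rescaled by 1/(1-q),
    so that E[x_tilde] = x."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= q
    return np.where(mask, x / (1.0 - q), 0.0)
```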
As a result, we only need to compute the diagonal terms of the Hessian matrix $\nabla^2_{\tilde{x}} f$. The $j$-th dimension of the Hessian's diagonal, evaluated at the mean $\bar{x}$, is given by

$\dfrac{\partial^2 f}{\partial \bar{x}_j^2} = -\sigma_{w,c,\bar{x}} (1 - \sigma_{w,c,\bar{x}}) \big(\tfrac{1}{T} v_w^\top u_j\big)^2 - \sum_{w' \sim P_v} \sigma_{w',c,\bar{x}} (1 - \sigma_{w',c,\bar{x}}) \big(\tfrac{1}{T} v_{w'}^\top u_j\big)^2$

Plugging the Hessian matrix and the variance matrix back into Eq. (6), and then back into the loss defined in Eq. (4), we can see that Doc2VecC intrinsically minimizes

$\ell = -\sum_{i=1}^{n} \sum_{t=1}^{T_i} f(w_t^i, c_t^i, x_i) + \frac{q}{1-q} \sum_{j=1}^{v} R(u_j)$   (7)

Each $f(w_t^i, c_t^i, x_i)$ in the first term measures the log likelihood of observing the target word $w_t^i$ given its local context $c_t^i$ and the document vector $d_i = \frac{1}{T} U x_i$. As such, Doc2VecC enforces that a document vector generated by averaging word embeddings captures the global semantics of the document and fills in information missed by the local context.

The second term is a data-dependent regularization. The regularization on the embedding $u_j$ of each word $j$ takes the following form:

$R(u_j) \propto \sum_{i=1}^{n} \sum_{t=1}^{T_i} x_{ij}^2 \Big[ \sigma_{w_t^i, c_t^i, x_i} (1 - \sigma_{w_t^i, c_t^i, x_i}) \big(\tfrac{1}{T} v_{w_t^i}^\top u_j\big)^2 + \sum_{w' \sim P_v} \sigma_{w', c_t^i, x_i} (1 - \sigma_{w', c_t^i, x_i}) \big(\tfrac{1}{T} v_{w'}^\top u_j\big)^2 \Big]$

where $\sigma_{w,c,x} = \sigma\big(v_w^\top (U c + \frac{1}{T} U x)\big)$ prescribes the confidence of predicting the target word $w$ given its neighboring context $c$ as well as the document vector $d = \frac{1}{T} U x$.

Closely examining $R(u_j)$ leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more: a word $j$ that frequently appears across the training corpus, i.e., with $x_{ij} = 1$ often, incurs a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by $\sigma_{w,c,x}(1 - \sigma_{w,c,x})$, which is small if $\sigma_{w,c,x} \to 1$ or $0$. In other words, if $u_j$ is critical to a confident prediction $\sigma_{w,c,x}$ when it is active, then the regularization is diminished. A similar effect was observed for dropout training of logistic regression models (Wager et al., 2013) and denoising autoencoders (Chen et al., 2014).

4 EXPERIMENTS

We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, alongside several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017

4.1 BASELINES

We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) (Vincent et al., 2008), a representation learned by reconstructing the original document $x$ from its corrupted version $\tilde{x}$; SDAs have been shown to be state-of-the-art for sentiment analysis tasks (Glorot et al., 2011), and we used Kullback-Leibler divergence as the reconstruction error and an affine encoder; to scale the algorithm to a large vocabulary, we only take into account the non-zero elements of $x$ in the reconstruction error and employ negative sampling for the rest; Word2Vec (Mikolov et al., 2013a) + IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec (Le & Mikolov, 2014); Skip-thought Vectors (Kiros et al., 2015), a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level and has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM (Mikolov et al., 2010), a recurrent neural network based language model, in the comparison.
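Several of the methods above (Word2Vec+IDF, DEA and Doc2VecC) represent a test document by (weighted) averaging of word embeddings; for Doc2VecC this is exactly Eq. (5). A minimal sketch of that inference step:

```python
import numpy as np

def doc_vector(tokens, U, vocab):
    """Eq. (5): a document is represented as the average of the learned
    input embeddings u_w (columns of U) of its words; unlike Paragraph
    Vectors, no gradient-descent inference is needed for unseen documents."""
    cols = [U[:, vocab[w]] for w in tokens if w in vocab]
    return np.mean(cols, axis=0) if cols else np.zeros(U.shape[0])
```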
In the semantic relatedness task, we further compare to the LSTM-based methods (Tai et al., 2015) that have been reported on that dataset.

Table 1: Classification error of a linear classifier trained on various document representations on the IMDB dataset.

  Model                          Error rate % (include test)   Error rate % (exclude test)
  Bag-of-Words (BOW)             12.53                         12.59
  RNN-LM                         13.59                         13.59
  Denoising Autoencoders (DEA)   11.58                         12.54
  Word2Vec + AVG                 12.11                         12.69
  Word2Vec + IDF                 11.28                         11.92
  Paragraph Vectors              10.81                         12.10
  Skip-thought Vectors           -                             17.42
  Doc2VecC                       10.48                         11.70

4.2 SENTIMENT ANALYSIS

For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as positive or negative, and comes with a predefined train/test split (Maas et al., 2011): 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear fewer than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.

Setup. We test the various representation learning algorithms under two settings: one follows the protocol proposed in (Mesnil et al., 2014), where the representation is learned using all the available data, including the test set; in the other, the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM) (Fan et al., 2008) is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model¹ trained on a much bigger book corpus to encode the documents: a vector of 4,800 dimensions, the first 2,400 from the uni-skip model and the last 2,400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.

¹ available at https://github.com/ryankiros/skip-thoughts

Accuracy. Comparing the two columns in Table 1, we can see that all the representation learning algorithms benefit from including the test data during the representation learning phase. Doc2VecC achieves similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BOW representation by 15%. In comparison with Word2Vec+IDF, which post-processes learned word embeddings to form the document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training, which leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure 1; by including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to the other methods. We hypothesize that this is due to the length of the paragraphs in this dataset: the average paragraph length in the IMDB movie review dataset is 296.5, much longer than the ones used for training and testing in the original paper, which are on the order of 10. As noted in (Tai et al., 2015), the performance of LSTM-based methods (and, similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.
Time. Table 2 summarizes the time required by these algorithms to learn and generate the document representations. Word2Vec is the fastest to train; Denoising Autoencoders and Doc2VecC come second. The number of parameters that must be back-propagated in each update grows with the number of surviving words in $\tilde{x}$. We found that both models are not sensitive to the corruption rate $q$ of the noise model; since the learning time decreases with a higher corruption rate, we used $q = 0.9$ throughout the experiments. Paragraph Vectors takes longer to train, as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use a (weighted) average of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of an unseen test document: it takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations of the 25,000 test documents, compared to 7 seconds for the other methods. As we did not re-train the Skip-thought vector model on this dataset, the training time² reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required to encode long paragraphs, it takes fairly long to generate the representations for these documents, and similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.

² As reported in the original paper, training the skip-thought vector model on the book corpus dataset takes around 2 weeks on a GPU.

Table 2: Learning time and representation generation time required by different representation learning algorithms.

  Model                    Learning time   Generation time
  Denoising Autoencoders   3m 23s          7s
  Word2Vec + IDF           2m 33s          7s
  Paragraph Vectors        4m 54s          4m 17s
  Skip-thought             2h              2h
  Doc2VecC                 4m 30s          7s

Data-dependent regularization. As explained in Section 3.1, the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine this effect, using a frequency cutoff of 100. Table 3 lists the words whose embeddings have the smallest $\ell_2$ norm as found by the different algorithms; the number in parentheses after each word is the number of times it appears in the learning set. For Word2Vec and Paragraph Vectors, the least frequent words have embeddings close to zero, despite some of them being indicative of sentiment, such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representations of words that appear frequently in the training set but are uninformative, such as symbols and stop words.

Table 3: Words with embeddings closest to 0 learned by different algorithms.

  Word2Vec:    harp(118) distasteful(115) switzerland(101) shabby(103) fireworks(101) heavens(100) thornton(108) endeavor(100) dense(108) circumstance(119) debacle(103)
  ParaVectors: harp(118) dense(108) reels(115) fireworks(101) its'(103) unnoticed(112) pony(102) fulfilled(107) heavens(100) bliss(110) canned(114) shabby(103) debacle(103)
  Doc2VecC:    ,(1099319) .(1306691) the(1340408) of(581667) and(651119) up(49871) to(537570) that(275240) time(48205) endeavor(100) here(21118) way(31302) own(13456)

Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in (Mikolov & Dean, 2013) to counter the imbalance between frequent and rare words.
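For reference, the subsampling heuristic can be sketched as follows; the threshold value t = 1e-5 is the default commonly used with Word2Vec and is our assumption here, not a number stated in the paper.

```python
import numpy as np

def keep_prob(freq, t=1e-5):
    """Subsampling of frequent words: a token whose relative corpus
    frequency is f is kept with probability sqrt(t / f), capped at 1,
    so common words are discarded more aggressively than rare ones."""
    return min(1.0, np.sqrt(t / freq))
```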
This subsampling is critical to the performance of simple Word2Vec+AVG, as it is the sole remedy for diminishing the contribution of common words to the final document representation: if we remove this step, the error rate of Word2Vec+AVG increases from 12.1% to 13.2%. Doc2VecC, on the other hand, naturally exerts a stronger regularization on the embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.

4.3 WORD ANALOGY

In Table 3, we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words that are common and non-discriminative (stop words). In this experiment, we quantitatively compare the word embeddings generated by Doc2VecC to those generated by Word2Vec and Paragraph Vectors on the word analogy task introduced by Mikolov et al. (2013a). The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by the different methods. Please refer to the original paper for more details on the evaluation protocol.

Figure 2: Accuracy on a subset of the Semantic-Syntactic Word Relationship test set, as a function of the number of paragraphs used for learning (1M to 15M), for Paragraph Vectors, Word2Vec and Doc2VecC; panel (a) uses embedding size h = 50 and panel (b) h = 100. Only questions containing words from the most frequent 30k words are included in the test.

Table 4: Top-1 accuracy on the 5 types of semantic and 9 types of syntactic questions.

  Semantic questions          Word2Vec  Doc2VecC    Syntactic questions           Word2Vec  Doc2VecC
  capital-common-countries    73.59     81.82       gram1-adjective-to-adverb     19.25     20.32
  capital-world               67.94     77.96       gram2-opposite                14.07     25.54
  currency                    17.14     12.86       gram3-comparative             60.21     74.47
  city-in-state               34.49     42.86       gram4-superlative             52.87     55.40
  family                      68.71     64.62       gram5-present-participle      56.34     65.81
                                                    gram6-nationality-adjective   88.71     91.03
                                                    gram7-past-tense              47.05     51.86
                                                    gram8-plural                  50.28     61.27
                                                    gram9-plural-verbs            25.38     39.69

We trained the word embeddings of the different methods on the English news dataset released for the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of the word embeddings trained by the different methods with increasing embedding dimensionality as well as increasing training data.

We observe similar trends as in Mikolov et al. (2013a): increasing the embedding dimensionality as well as the training data size improves performance on this task, but with diminishing returns. Doc2VecC produces word embeddings that perform significantly better than the ones generated by Word2Vec; we observe close to a 20% uplift when training on the full corpus. Paragraph Vectors, on the other hand, performs surprisingly badly on this dataset. Our hypothesis is that, due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning word-level semantic or syntactic similarities.
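The analogy questions are answered with the additive offset rule (often called 3CosAdd); a minimal sketch, assuming unit-normalized embedding rows:

```python
import numpy as np

def analogy(a, b, c, E, vocab, inv_vocab):
    """Answer 'a is to b as c is to ?', e.g. analogy('man', 'king', 'woman', ...):
    returns argmax_w cos(e_w, e_b - e_a + e_c), excluding the query words.
    E: rows are unit-normalized embeddings; inv_vocab: index -> word."""
    target = E[vocab[b]] - E[vocab[a]] + E[vocab[c]]
    target = target / np.linalg.norm(target)
    sims = E @ target                      # cosine similarity to every word
    for w in (a, b, c):
        sims[vocab[w]] = -np.inf           # exclude the three query words
    return inv_vocab[int(np.argmax(sims))]
```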
This reliance also explains why the PV-DBOW model architecture proposed in the original work of Le & Mikolov (2014), which completely removes the word embedding layers, performs comparably to the distributed memory version.

In Table 4, we list a detailed comparison of the performance of the word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embedding size 100. Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.

4.4 DOCUMENT CLASSIFICATION

For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, politics, etc.; examples include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. The body text (the second paragraph) was extracted from each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents are used for training and 900 for testing, and the remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this dataset, we learn the word embeddings and document representations for all the algorithms using all the available data. We apply a frequency cutoff of 10, resulting in a vocabulary of size 107,691.

Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.

  Model      BOW     DEA     Word2Vec + AVG   Word2Vec + IDF   Paragraph Vectors   Doc2VecC
  h = 100    36.03   32.30   33.20            33.16            35.78               31.92
  h = 200    36.03   31.36   32.46            32.48            34.92               30.84
  h = 500    36.03   31.10   32.02            32.13            33.93               30.43
  h = 1000   36.03   31.13   31.78            32.06            33.02               30.24

Table 5 summarizes the classification error of a linear SVM trained on representations of different sizes. Most of the algorithms are not sensitive to the size of the vector representation; Doc2Vec benefits most from an increasing representation size. Across all representation sizes, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.

Figure 3: Visualization of document vectors on the Wikipedia dataset using t-SNE: (a) Doc2Vec, (b) Doc2VecC.

Figure 3 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE (Maaten & Hinton, 2008). Documents from the same category are nicely clustered under the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between the categories, which explains its worse performance reported in Table 5.

Figure 4: Visualization of Wikipedia Doc2VecC vectors using t-SNE.

Figure 4 visualizes the vector representations generated by Doc2VecC with respect to a coarser categorization: we manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. Documents belonging to the same coarse category are grouped together. This subset covers a wide range of sports descriptions, from football and cricket to baseball and cycling, which explains why the athletes category is less concentrated.
In the projection, we can see that documents belonging to the musician category are closer to those of the albums category than to those of the athletes or species categories.

4.5 SEMANTIC RELATEDNESS

We test Doc2VecC on the SemEval 2014 Task 1 semantic relatedness SICK dataset (Marelli et al., 2014). Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores ranging from 1 to 5: a score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.

We compare Doc2VecC with several winning solutions of the competition, as well as with several more recent techniques reported on this dataset, including bi-directional LSTMs and Tree-LSTMs³ trained from scratch on this dataset, and Skip-thought vectors learned on a large book corpus⁴ (Zhu et al., 2015), which produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as for skip-thought vectors and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in (Kiros et al., 2015) to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset as follows: we use the pre-trained word embeddings as an initialization, and fine-tune the word and sentence representations on the SICK dataset. Note that the fine-tuning is done for sentence representation learning only; we do not use the relatedness scores during this learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we use exactly the same training and testing protocol as (Kiros et al., 2015) to score each pair of sentences: given two sentence embeddings $u_1$ and $u_2$, we concatenate their component-wise product $u_1 \cdot u_2$ and their absolute difference $|u_1 - u_2|$ as the feature representation.

Table 6 summarizes the performance of the various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly outperforms the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, noticeably the dependency-tree RNNs introduced in (Socher et al., 2014), which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than that of the LSTM-based methods and skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset (11.70% error rate vs. 17.42%). As hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length on the order of 10s). We would like to point out that Doc2VecC is much faster to train and test than skip-thought vectors: it takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, compared to the 2 weeks on GPU required by skip-thought vectors.

5 CONCLUSION

We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically ensures that document representations generated by averaging word embeddings capture the semantics of the document during learning.
It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms not only in testing efficiency, but also in the expressiveness of the generated representations.

³ The word representations were initialized using the publicly available 300-dimensional GloVe vectors trained on 840 billion tokens of Common Crawl data.
⁴ The dataset contains 11,038 books with over one billion words.

Table 6: Test set results on the SICK semantic relatedness task. The first group of results is from the submissions to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group contains LSTM-based methods reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015).

  Method                                     Pearson's r   Spearman's rho   MSE
  Illinois-LH                                0.7993        0.7538           0.3692
  UNAL-NLP                                   0.8070        0.7489           0.3550
  Meaning Factory                            0.8268        0.7721           0.3224
  ECNU                                       0.8279        0.7689           0.3250
  Mean vectors (Word2Vec + avg)              0.7577        0.6738           0.4557
  DT-RNN (Socher et al., 2014)               0.7923        0.7319           0.3822
  SDT-RNN (Socher et al., 2014)              0.7900        0.7304           0.3848
  LSTM (Tai et al., 2015)                    0.8528        0.7911           0.2831
  Bidirectional LSTM (Tai et al., 2015)      0.8567        0.7966           0.2736
  Dependency Tree-LSTM (Tai et al., 2015)    0.8676        0.8083           0.2532
  combine-skip (Kiros et al., 2015)          0.8584        0.7916           0.2687
  Doc2VecC                                   0.8381        0.7621           0.3053

| B1vz0k8Ne | Simple idea, nicely composed | 7: Good paper, accept | This paper discusses a method for computing vector representations of documents using a skip-gram style learning mechanism with an added regularizer in the form of a global context vector with various bits of dropout. While none of the individual components proposed in this paper are new, I believe that their combination in this fashion is. Further, I appreciated the detailed analysis of model behaviour in section 3.
The main downside to this submission is its relative weakness on the empirical front. Arguably there are more interesting tasks than sentiment analysis and k-way classification! Likewise, why waste 2/3 of a page on t-SNE projections rather than use that space for further analysis?
While I am a bit disappointed by this reduced evaluation and agree with the other reviewers concerning soft baselines, I think this paper should be accepted: it's an interesting algorithm, nicely composed and very efficient, so it's reasonable to assume that other readers might have use for some of the ideas presented here. | 3: The reviewer is fairly confident that the evaluation is correct |
B1Igu2ogg | ICLR.cc/2017/conference | 2017 | Efficient Vector Representation for Documents through Corruption | ["Minmin Chen"] | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.
| ["Natural language processing", "Deep learning", "Semi-Supervised Learning"] | ABSTRACTWe present an efficient document representation learning framework, DocumentVector through Corruption (Doc2VecC). Doc2VecC represents each document asa simple average of word embeddings. It ensures a representation generated assuch captures the semantic meanings of the document during learning. A cor-ruption model is included, which introduces a data-dependent regularization thatfavors informative or rare words while forcing the embeddings of common andnon-discriminative ones to be close to zero. Doc2VecC produces significantlybetter word embeddings than Word2Vec. We compare Doc2VecC with severalstate-of-the-art document representation learning algorithms. The simple modelarchitecture introduced by Doc2VecC matches or out-performs the state-of-the-artin generating high-quality document representations for sentiment analysis, doc-ument classification as well as semantic relatedness tasks. The simplicity of themodel enables training on billions of words per hour on a single machine. Atthe same time, the model is very efficient in generating representations of unseendocuments at test time.1 I NTRODUCTIONText understanding starts with the challenge of finding machine-understandable representation thatcaptures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably themost commonly used document representations. Despite its simplicity, BoW works surprisinglywell for many tasks (Wang & Manning, 2012). However, by treating words and phrases as uniqueand discrete symbols, BoW often fails to capture the similarity between words or phrases and alsosuffers from sparsity and high dimensionality.Recent works on using neural networks to learn distributed vector representations of words havegained great popularity. The well celebrated Word2Vec (Mikolov et al., 2013a), by learning topredict the target word using its neighboring words, maps words of similar meanings to nearbypoints in the continuous vector space. The surprisingly simple model has succeeded in generatinghigh-quality word embeddings for tasks such as language modeling, text understanding and machinetranslation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. Itcan be trained on billions of words per hour on a single machine.Paragraph Vectors (Le & Mikolov, 2014) generalize the idea to learn vector representation for docu-ments. A target word is predicted by the word embeddings of its neighbors in together with a uniquedocument vector learned for each document. It outperforms established document representations,such as BoW and Latent Dirichlet Allocation (Blei et al., 2003), on various text understandingtasks (Dai et al., 2015). However, two caveats come with this approach: 1) the number of parame-ters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensiveto generate vector representations for unseen documents at test time.We propose an efficient model architecture, referred to as Document Vector through Corruption(Doc2VecC), to learn vector representations for documents. It is motivated by the observation thatlinear operations on the word embeddings learned by Word2Vec can sustain substantial amountof syntactic and semantic meanings of a phrase or a sentence (Mikolov et al., 2013b). 
For example, vec("Russia") + vec("river") is close to vec("Volga River") (Mikolov & Dean, 2013), and vec("king") - vec("man") + vec("woman") is close to vec("queen") (Mikolov et al., 2013b). In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches, which post-process learned word embeddings to form a document representation (Socher et al., 2013; Mesnil et al., 2014), Doc2VecC enforces that a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.

Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representations generated by Doc2VecC match or beat the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.

2 RELATED WORKS AND NOTATIONS

Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants (Salton & Buckley, 1988), language model based methods (Croft & Lafferty, 2013; Mikolov et al., 2010; Kim et al., 2015), topic models (Deerwester et al., 1990; Blei et al., 2003), and Denoising Autoencoders and their variants (Vincent et al., 2008; Chen et al., 2012), to distributed vector representations (Mesnil et al., 2014; Le & Mikolov, 2014; Kiros et al., 2015). Another prominent line of work learns task-specific document representations with deep neural networks, such as CNN-based (Zhang & LeCun, 2015) or LSTM-based approaches (Tai et al., 2015; Dai & Le, 2015).

In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as the Continuous Bag-of-Words (CBoW) and Skipgram models (Mikolov et al., 2013a). In this work, we focus on CBoW; extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:

$\mathcal{D} = \{D_1, \ldots, D_n\}$: a training corpus of size $n$, in which each document $D_i$ contains a variable-length sequence of words $w_i^1, \ldots, w_i^{T_i}$;
$V$: the vocabulary used in the training corpus, of size $v$;
$x \in \mathbb{R}^{v \times 1}$: the BoW of a document, where $x_j = 1$ iff word $j$ appears in the document;
$c_t \in \mathbb{R}^{v \times 1}$: the BoW of the local context $w^{t-k}, \ldots, w^{t-1}, w^{t+1}, \ldots, w^{t+k}$ at the target position $t$; $c_t^j = 1$ iff word $j$ appears within the sliding window around the target;
$U \in \mathbb{R}^{h \times v}$: the projection matrix from the input space to a hidden space of size $h$. We use $u_w$ to denote the column of $U$ for word $w$, i.e., the "input" vector of word $w$;
$V^\top \in \mathbb{R}^{v \times h}$: the projection matrix from the hidden space to the output. Similarly, we use $v_w$ to denote the column of $V$ for word $w$, i.e., the "output" vector of word $w$.
Word2Vec. Word2Vec proposed a neural network architecture consisting of an input layer, a projection layer parameterized by the matrix $U$, and an output layer parameterized by $V^\top$. It defines the probability of observing the target word $w^t$ in a document $D$ given its local context $c^t$ as

$$P(w^t \mid c^t) = \frac{\exp(v_{w^t}^\top U c^t)}{\sum_{w' \in V} \exp(v_{w'}^\top U c^t)}$$

The word vectors are then learned to maximize the log likelihood of observing the target word at each position of the document. Various techniques (Mitchell & Lapata, 2010; Zanzotto et al., 2010; Yessenalina & Cardie, 2011; Grefenstette et al., 2013; Socher et al., 2013; Kusner et al., 2015) have been studied to generate vector representations of documents from word embeddings, among which the simplest approach is to use a weighted average of the word embeddings. Similarly, our method forms a document representation by averaging the word embeddings of all the words in the document. Differently, as our model encodes the compositionality of words in the learned word embeddings, no heuristic weighting is required at test time.

Paragraph Vectors. Paragraph Vectors, on the other hand, explicitly learns a document vector together with the word embeddings. It introduces another projection matrix $D \in \mathbb{R}^{h \times n}$. Each column of $D$ acts as a memory of the global topic of the corresponding document. It then defines the probability of observing the target word $w^t$ in a document $D$ given its local context $c^t$ as

$$P(w^t \mid c^t, d) = \frac{\exp\big(v_{w^t}^\top (U c^t + d)\big)}{\sum_{w' \in V} \exp\big(v_{w'}^\top (U c^t + d)\big)}$$

where $d \in D$ is the vector representation of the document. As we can see from this formula, the complexity of Paragraph Vectors grows not only with the size of the vocabulary, but also with the size of the training corpus. While we can reasonably limit the size of a vocabulary to within a million for most datasets, the size of a training corpus can easily go to billions. What is more concerning is that, in order to come up with the vector representations of unseen documents, we need to perform an expensive inference, appending more columns to $D$ and running gradient descent on $D$ while fixing the other parameters of the learned model.
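As a reference point for the softmax formulas above, the following sketch computes the CBoW probability $P(w^t \mid c^t)$ with randomly initialized projection matrices. In practice $U$ and $V$ are learned; the toy dimensions here are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
v, h = 6, 4                       # toy vocabulary and hidden sizes
U = rng.normal(size=(h, v))       # input projection; column u_w per word
V = rng.normal(size=(h, v))       # output projection; column v_w per word

def p_target_given_context(t, c):
    """Softmax over the vocabulary: exp(v_wt^T U c) / sum_w' exp(v_w'^T U c)."""
    scores = V.T @ (U @ c)        # one score per vocabulary word
    scores -= scores.max()        # subtract max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[t]
```

For instance, `p_target_given_context(vocab["ceremony"], c_t)` with the vectors from the previous sketch scores the target word given its context.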
3 METHOD

Several works (Mikolov & Dean, 2013; Mikolov et al., 2013b) showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. This prompts us to explore the option of simply representing a document as an average of word embeddings. Figure 1 illustrates the new model architecture.

[Figure 1: A new framework for learning document vectors. Word vectors of the local context ("opening", "for", "the") and of words sampled from the document ("performance", "praised", "brazil") are averaged/concatenated to predict the target word "ceremony".]

Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer, as well as an output layer to predict the target word, "ceremony" in this example. The embeddings of the neighboring words ("opening", "for", "the") provide the local context, while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document ("performance" at position p, "praised" at position q, and "brazil" at position r).

Huang et al. (2012) also proposed the idea of using an average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing a significant portion of its words, and represent the document using only the embeddings of the words that remain. This corruption mechanism offers us great speedup during training, as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings a great performance improvement.

Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as $\tilde{x}$, is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document $x$ with probability $q$. To make the corruption unbiased, we set each uncorrupted dimension to $1/(1-q)$ times its original value. Formally,

$$\tilde{x}_d = \begin{cases} 0, & \text{with probability } q \\ \dfrac{x_d}{1-q}, & \text{otherwise} \end{cases} \qquad (1)$$

Doc2VecC then defines the probability of observing a target word $w^t$ given its local context $c^t$ as well as the global context $\tilde{x}$ as

$$P(w^t \mid c^t, \tilde{x}) = \frac{\exp\big(v_{w^t}^\top (U c^t + \tfrac{1}{T} U \tilde{x})\big)}{\sum_{w' \in V} \exp\big(v_{w'}^\top (U c^t + \tfrac{1}{T} U \tilde{x})\big)} \qquad (2)$$

where $U c^t$ is the local context, $\tfrac{1}{T} U \tilde{x}$ is the global context, and $T$ is the length of the document. Exactly computing this probability is impractical; instead, we approximate it with negative sampling (Mikolov et al., 2013a):

$$f(w, c, \tilde{x}) \triangleq \log P(w \mid c, \tilde{x}) \approx \log \sigma\big(v_w^\top (U c + \tfrac{1}{T} U \tilde{x})\big) + \sum_{w' \sim P_v} \log \sigma\big(-v_{w'}^\top (U c + \tfrac{1}{T} U \tilde{x})\big) \qquad (3)$$

Here $\sigma(\cdot)$ is the sigmoid function and $P_v$ stands for a uniform distribution over the terms in the vocabulary. The two projection matrices $U$ and $V$ are then learned to minimize the loss:

$$\ell = -\sum_{i=1}^{n} \sum_{t=1}^{T_i} f(w_i^t, c_i^t, \tilde{x}_i^t) \qquad (4)$$

Given the learned projection matrix $U$, we then represent each document simply as an average of the embeddings of the words in the document,

$$d = \frac{1}{T} \sum_{w \in D} u_w. \qquad (5)$$

We elaborate next on why we choose to corrupt the original document with the corruption model in eq. (1) during learning, and how this enables us to simply use the average word embeddings as the vector representation for documents at test time.
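Before turning to the analysis, here is a minimal numpy sketch of the three ingredients just defined: the unbiased corruption of eq. (1), the averaged document vector of eqs. (2)/(5), and the negative-sampling objective of eq. (3). This is an illustration of the formulas, not the author's released implementation; the handling of negative samples follows standard practice and is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, q):
    """Unbiased mask-out corruption, eq. (1): each dimension is zeroed with
    probability q; survivors are rescaled by 1/(1-q) so that E[x_tilde] = x."""
    mask = rng.random(x.shape) >= q
    return x * mask / (1.0 - q)

def doc_vector(U, x, T):
    """Global context (1/T) U x of eq. (2); with x the full BoW vector this is
    the averaged-embedding document representation of eq. (5)."""
    return U @ x / T

def neg_log_likelihood(U, V, t, c, x_tilde, T, negatives):
    """Negative of the negative-sampling objective f(w, c, x_tilde), eq. (3)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = U @ c + doc_vector(U, x_tilde, T)   # local + global context
    loss = -np.log(sigmoid(V[:, t] @ hidden))    # target word w_t
    for w in negatives:                          # words w' drawn from P_v
        loss -= np.log(sigmoid(-(V[:, w] @ hidden)))
    return loss
```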
3.1 CORRUPTION AS DATA-DEPENDENT REGULARIZATION

We approximate the log likelihood for each instance $f(w, c, \tilde{x})$ in eq. (4) with its Taylor expansion with respect to $\tilde{x}$ up to second order (Van Der Maaten et al., 2013; Wager et al., 2013; Chen et al., 2014). Concretely, we choose to expand at the mean of the corruption $\bar{x} = E_{p(\tilde{x}|x)}[\tilde{x}]$:

$$f(w, c, \tilde{x}) \approx f(w, c, \bar{x}) + (\tilde{x} - \bar{x})^\top \nabla_{\tilde{x}} f + \frac{1}{2} (\tilde{x} - \bar{x})^\top \nabla_{\tilde{x}}^2 f \, (\tilde{x} - \bar{x})$$

where $\nabla_{\tilde{x}} f$ and $\nabla_{\tilde{x}}^2 f$ are the first-order (i.e., gradient) and second-order (i.e., Hessian) derivatives of the log likelihood with respect to $\tilde{x}$. Expansion at the mean $\bar{x}$ is crucial, as shown in the following steps. Let us assume that for each instance we sample the global context $\tilde{x}$ infinitely many times, and thus compute the expected log likelihood with respect to the corrupted $\tilde{x}$:

$$E_{p(\tilde{x}|x)}[f(w, c, \tilde{x})] \approx f(w, c, \bar{x}) + \frac{1}{2} \mathrm{tr}\Big(E[(\tilde{x} - \bar{x})(\tilde{x} - \bar{x})^\top] \, \nabla_{\tilde{x}}^2 f\Big)$$

The linear term disappears, as $E_{p(\tilde{x}|x)}[\tilde{x} - \bar{x}] = 0$. We substitute $x$ for the mean $\bar{x}$ of the corrupting distribution (unbiased corruption) and the matrix $\Sigma_x = E[(\tilde{x} - \bar{x})(\tilde{x} - \bar{x})^\top]$ for the variance, and obtain

$$E_{p(\tilde{x}|x)}[f(w, c, \tilde{x})] \approx f(w, c, x) + \frac{1}{2} \mathrm{tr}\big(\Sigma_x \nabla_{\tilde{x}}^2 f\big) \qquad (6)$$

As each word in a document is corrupted independently of the others, the variance matrix $\Sigma_x$ simplifies to a diagonal matrix whose $j$th element equals $\frac{q}{1-q} x_j^2$. As a result, we only need to compute the diagonal terms of the Hessian matrix $\nabla_{\tilde{x}}^2 f$. The $j$th dimension of the Hessian's diagonal, evaluated at the mean $x$, is given by

$$\frac{\partial^2 f}{\partial x_j^2} = -\sigma_{w,c,x}(1 - \sigma_{w,c,x}) \Big(\frac{1}{T} v_w^\top u_j\Big)^2 - \sum_{w' \sim P_v} \sigma_{w',c,x}(1 - \sigma_{w',c,x}) \Big(\frac{1}{T} v_{w'}^\top u_j\Big)^2$$

Plugging the Hessian matrix and the variance matrix back into eq. (6), and then back into the loss defined in eq. (4), we can see that Doc2VecC intrinsically minimizes

$$\ell = -\sum_{i=1}^{n} \sum_{t=1}^{T_i} f(w_i^t, c_i^t, x_i) + \frac{q}{1-q} \sum_{j=1}^{v} R(u_j) \qquad (7)$$

Each $f(w_i^t, c_i^t, x_i)$ in the first term measures the log likelihood of observing the target word $w_i^t$ given its local context $c_i^t$ and the document vector $d_i = \frac{1}{T} U x_i$. As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document and fill in information missed in the local context.

The second term is a data-dependent regularization. The regularization on the embedding $u_j$ of each word $j$ takes the following form:

$$R(u_j) \propto \sum_{i=1}^{n} \sum_{t=1}^{T_i} x_{ij}^2 \left[ \sigma_{w_i^t, c_i^t, x_i}(1 - \sigma_{w_i^t, c_i^t, x_i}) \Big(\frac{1}{T} v_{w_i^t}^\top u_j\Big)^2 + \sum_{w' \sim P_v} \sigma_{w', c_i^t, x_i}(1 - \sigma_{w', c_i^t, x_i}) \Big(\frac{1}{T} v_{w'}^\top u_j\Big)^2 \right]$$

where $\sigma_{w,c,x} = \sigma\big(v_w^\top (U c + \frac{1}{T} U x)\big)$ prescribes the confidence of predicting the target word $w$ given its neighboring context $c$ as well as the document vector $d = \frac{1}{T} U x$.

Closely examining $R(u_j)$ leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more heavily. A word $j$ that frequently appears across the training corpus, i.e., for which $x_{ij} = 1$ often, will receive a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by $\sigma_{w,c,x}(1 - \sigma_{w,c,x})$, which is small if $\sigma_{w,c,x} \to 1$ or $0$. In other words, if $u_j$ is critical to a confident prediction $\sigma_{w,c,x}$ when it is active, then the regularization is diminished. A similar effect was observed for dropout training of logistic regression models (Wager et al., 2013) and denoising autoencoders (Chen et al., 2014).
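The unbiasedness of the corruption and the variance $\frac{q}{1-q} x_j^2$ used in this derivation are easy to verify empirically. A small Monte Carlo check of eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)
q, x_d, n = 0.9, 2.0, 1_000_000
x_tilde = np.where(rng.random(n) < q, 0.0, x_d / (1.0 - q))

print(x_tilde.mean())  # approx x_d = 2.0: the corruption is unbiased
print(x_tilde.var())   # approx q/(1-q) * x_d**2 = 36.0: the diagonal of Sigma_x
```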
4 EXPERIMENTS

We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017

4.1 BASELINES

We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) (Vincent et al., 2008), a representation learned by reconstructing the original document $x$ from the corrupted one $\tilde{x}$. SDAs have been shown to be the state of the art for sentiment analysis tasks (Glorot et al., 2011). We used the Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale the algorithm up to a large vocabulary, we only take into account the non-zero elements of $x$ in the reconstruction error and employ negative sampling for the remainder; Word2Vec (Mikolov et al., 2013a)+IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec (Le & Mikolov, 2014); Skip-thought Vectors (Kiros et al., 2015), a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM (Mikolov et al., 2010), a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods (Tai et al., 2015) that have been reported on this dataset.

Table 1: Classification error of a linear classifier trained on various document representations on the IMDB dataset.

Model                          Error rate % (include test)   Error rate % (exclude test)
Bag-of-Words (BoW)             12.53                         12.59
RNN-LM                         13.59                         13.59
Denoising Autoencoders (DEA)   11.58                         12.54
Word2Vec + AVG                 12.11                         12.69
Word2Vec + IDF                 11.28                         11.92
Paragraph Vectors              10.81                         12.10
Skip-thought Vectors           -                             17.42
Doc2VecC                       10.48                         11.70

4.2 SENTIMENT ANALYSIS

For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as either positive or negative. It comes with a predefined train/test split (Maas et al., 2011): 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear fewer than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.

Setup. We test the various representation learning algorithms under two settings: one follows the protocol proposed in (Mesnil et al., 2014), where the representation is learned using all the available data, including the test set; in the other, the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM) (Fan et al., 2008) is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model [1] trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.

[1] Available at https://github.com/ryankiros/skip-thoughts

Accuracy. Comparing the two columns in Table 1, we can see that all the representation learning algorithms benefit from including the test data during the representation learning phase. Doc2VecC achieves similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BoW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing to learned word embeddings to form the document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure 1. By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought Vectors perform surprisingly poorly on this dataset compared to the other methods. We hypothesize that this is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is 296.5, much longer than the ones used for training and testing in the original paper, which are on the order of 10. As noted in (Tai et al., 2015), the performance of LSTM-based methods (similarly, the gated RNN used in Skip-thought Vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.
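As a sketch of this evaluation pipeline, not the exact scripts released with the paper, the following averages learned embeddings into document vectors and fits a linear SVM; scikit-learn's LinearSVC wraps the liblinear solver of Fan et al. (2008). The embedding matrix `U`, `vocab`, the token lists `train_docs`/`test_docs` and the labels `y_train`/`y_test` are assumed to exist.

```python
import numpy as np
from sklearn.svm import LinearSVC

def embed(tokens, U, vocab):
    """Document vector: average of the embeddings of the in-vocabulary words."""
    idx = [vocab[w] for w in tokens if w in vocab]
    return U[:, idx].mean(axis=1)

X_train = np.stack([embed(d, U, vocab) for d in train_docs])
X_test = np.stack([embed(d, U, vocab) for d in test_docs])

clf = LinearSVC().fit(X_train, y_train)
print("error rate: %.2f%%" % (100.0 * (1.0 - clf.score(X_test, y_test))))
```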
Time. Table 2 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train; Denoising Autoencoders and Doc2VecC come second. Compared to Word2Vec, the number of parameters that need to be back-propagated in each update is increased by the number of surviving words in $\tilde{x}$. We found that both models are not sensitive to the corruption rate $q$ in the noise model. Since the learning time decreases with a higher corruption rate, we used $q = 0.9$ throughout the experiments. Paragraph Vectors takes longer to train as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought Vector models on this dataset, the training time [2] reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents; similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.

[2] As reported in the original paper, training of the skip-thought vector model on the book corpus dataset takes around 2 weeks on GPU.

Table 2: Learning time and representation generation time required by different representation learning algorithms.

Model                    Learning time   Generation time
Denoising Autoencoders   3m 23s          7s
Word2Vec + IDF           2m 33s          7s
Paragraph Vectors        4m 54s          4m 17s
Skip-thought             2h              2h
Doc2VecC                 4m 30s          7s

Data-dependent regularization. As explained in Section 3.1, the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine this effect. We used a cutoff of 100 in this experiment. Table 3 lists the words whose embeddings have the smallest $\ell_2$ norm, as found by the different algorithms. The number inside the parentheses after each word is the number of times the word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment, such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representations of words that frequently appear in the training set but are uninformative, such as symbols and stop words.

Table 3: Words with embeddings closest to 0 learned by different algorithms.

Word2Vec:    harp(118) distasteful(115) switzerland(101) shabby(103) fireworks(101) heavens(100) thornton(108) endeavor(100) dense(108) circumstance(119) debacle(103)
ParaVectors: harp(118) dense(108) reels(115) fireworks(101) its'(103) unnoticed(112) pony(102) fulfilled(107) heavens(100) bliss(110) canned(114) shabby(103) debacle(103)
Doc2VecC:    ,(1099319) .(1306691) the(1340408) of(581667) and(651119) up(49871) to(537570) that(275240) time(48205) endeavor(100) here(21118) way(31302) own(13456)
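The Table 3 probe is straightforward to reproduce given a learned embedding matrix. A sketch, assuming `U` is the h x v matrix of input vectors and `inv_vocab` maps a column index back to its word:

```python
import numpy as np

norms = np.linalg.norm(U, axis=0)   # l2 norm of each word's embedding
smallest = np.argsort(norms)[:13]   # columns closest to the origin
print([inv_vocab[j] for j in smallest])
```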
Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in (Mikolov & Dean, 2013) to counter the imbalance between frequent and rare words. It is critical to the performance of the simple Word2Vec+AVG baseline, as it is the sole remedy for diminishing the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from 12.1% to 13.2%. Doc2VecC, on the other hand, naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.

4.3 WORD ANALOGY

In Table 3, we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by Mikolov et al. (2013a). The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by the different methods. Please refer to the original paper for more details on the evaluation protocol.
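Concretely, a question "a is to b as c is to ?" is answered by a nearest-neighbor search around $u_b - u_a + u_c$. A minimal sketch of this protocol (the cosine-similarity and question-word-exclusion details are assumptions of the standard setup):

```python
import numpy as np

def answer_analogy(a, b, c, U, vocab, inv_vocab):
    """Return the word whose embedding is closest (by cosine) to u_b - u_a + u_c."""
    Un = U / np.linalg.norm(U, axis=0, keepdims=True)  # unit-length columns
    query = Un[:, vocab[b]] - Un[:, vocab[a]] + Un[:, vocab[c]]
    sims = Un.T @ (query / np.linalg.norm(query))
    for w in (a, b, c):                                # never answer a question word
        sims[vocab[w]] = -np.inf
    return inv_vocab[int(np.argmax(sims))]

# e.g. answer_analogy("man", "king", "woman", U, vocab, inv_vocab) -> ideally "queen"
```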
[Figure 2: Accuracy on a subset of the Semantic-Syntactic Word Relationship test set for Paragraph Vectors, Word2Vec and Doc2VecC, as a function of the number of paragraphs used for learning (1M to 15M); panel (a) h=50, panel (b) h=100. Only questions containing words from the most frequent 30k words are included in the test.]

Table 4: Top-1 accuracy on the 5 types of semantic and 9 types of syntactic questions.

Semantic questions          Word2Vec   Doc2VecC
capital-common-countries    73.59      81.82
capital-world               67.94      77.96
currency                    17.14      12.86
city-in-state               34.49      42.86
family                      68.71      64.62

Syntactic questions           Word2Vec   Doc2VecC
gram1-adjective-to-adverb     19.25      20.32
gram2-opposite                14.07      25.54
gram3-comparative             60.21      74.47
gram4-superlative             52.87      55.40
gram5-present-participle      56.34      65.81
gram6-nationality-adjective   88.71      91.03
gram7-past-tense              47.05      51.86
gram8-plural                  50.28      61.27
gram9-plural-verbs            25.38      39.69

We trained the word embeddings of the different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of the word embeddings trained by the different methods with increasing embedding dimensionality as well as increasing training data.

We observe similar trends as in Mikolov et al. (2013a). Increasing the embedding dimensionality as well as the training data size improves the performance of the word embeddings on this task, although the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe close to a 20% uplift when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly badly on this dataset. Our hypothesis is that, due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW model architecture proposed in the original work (Le & Mikolov, 2014), which completely removes the word embedding layers, performs comparably to the distributed memory version.

In Table 4, we list a detailed comparison of the performance of the word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.

4.4 DOCUMENT CLASSIFICATION

For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted from each page as documents. For each category, we select 1,000 documents with a unique category label; 100 documents are used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this dataset, we learn the word embeddings and document representations for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size 107,691.

Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.

Model      BOW     DEA     Word2Vec + AVG   Word2Vec + IDF   Paragraph Vectors   Doc2VecC
h = 100    36.03   32.30   33.2             33.16            35.78               31.92
h = 200    36.03   31.36   32.46            32.48            34.92               30.84
h = 500    36.03   31.10   32.02            32.13            33.93               30.43
h = 1000   36.03   31.13   31.78            32.06            33.02               30.24

Table 5 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing the representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.

[Figure 3: Visualization of document vectors on the Wikipedia dataset using t-SNE: (a) Doc2Vec, (b) Doc2VecC.]

Figure 3 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE (Maaten & Hinton, 2008). We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table 5.

[Figure 4: Visualization of Wikipedia Doc2VecC vectors using t-SNE.]

Figure 4 visualizes the vector representations generated by Doc2VecC with respect to a coarser categorization. We manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes a wide range of sports descriptions, ranging from football, cricket and baseball to cycling, which explains why the athletes category is less concentrated. In the projection, we can see that documents belonging to the musicians category are closer to those belonging to the albums category than to those of athletes or species.
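Plots like Figures 3 and 4 can be produced with off-the-shelf t-SNE. A sketch, assuming `doc_vecs` is an n x h matrix of document vectors and `labels` holds one (coarse) category name per document:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

coords = TSNE(n_components=2, random_state=0).fit_transform(doc_vecs)
labels = np.asarray(labels)
for cat in np.unique(labels):
    pts = coords[labels == cat]          # 2-D points of one category
    plt.scatter(pts[:, 0], pts[:, 1], s=3, label=cat)
plt.legend(markerscale=4)
plt.show()
```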
4.5 SEMANTIC RELATEDNESS

We test Doc2VecC on the SemEval 2014 Task 1 semantic relatedness (SICK) dataset (Marelli et al., 2014). Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.

We compare Doc2VecC with several winning solutions of the competition, as well as several more recent techniques reported on this dataset, including bi-directional LSTMs and Tree-LSTMs [3] trained from scratch on this dataset, and Skip-thought Vectors learned on a large book corpus [4] (Zhu et al., 2015), which produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in (Kiros et al., 2015) to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embeddings as an initialization, and fine-tune the word and sentence representations on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only; we did not use the relatedness scores in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we use the exact same training and testing protocol as in (Kiros et al., 2015) to score each pair of sentences: with two sentence embeddings $u_1$ and $u_2$, we concatenate their component-wise product, $u_1 \odot u_2$, and their absolute difference, $|u_1 - u_2|$, as the feature representation.

[3] The word representations were initialized using publicly available 300-dimensional GloVe vectors trained on 840 billion tokens of Common Crawl data.
[4] The dataset contains 11,038 books with over one billion words.
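A sketch of the pair featurization just described; following the Kiros et al. (2015) protocol, a separate regressor/classifier is then trained on these features to predict the relatedness score:

```python
import numpy as np

def pair_features(u1, u2):
    """Concatenate the component-wise product and the absolute difference
    of the two sentence embeddings, as in (Kiros et al., 2015)."""
    return np.concatenate([u1 * u2, np.abs(u1 - u2)])

# Given n x h matrices of left/right sentence embeddings (names assumed):
# F = np.stack([pair_features(a, b) for a, b in zip(emb_left, emb_right)])
```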
Table 6 summarizes the performance of the various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly outperforms the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in (Socher et al., 2014), which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than that of the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset (11.70% error rate vs 17.42%). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length on the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.

Table 6: Test set results on the SICK semantic relatedness task. The first group of results are from submissions to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group are LSTM-based methods reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015).

Method                                    Pearson's r   Spearman's rho   MSE
Illinois-LH                               0.7993        0.7538           0.3692
UNAL-NLP                                  0.8070        0.7489           0.3550
Meaning Factory                           0.8268        0.7721           0.3224
ECNU                                      0.8279        0.7689           0.3250
Mean vectors (Word2Vec + avg)             0.7577        0.6738           0.4557
DT-RNN (Socher et al., 2014)              0.7923        0.7319           0.3822
SDT-RNN (Socher et al., 2014)             0.7900        0.7304           0.3848
LSTM (Tai et al., 2015)                   0.8528        0.7911           0.2831
Bidirectional LSTM (Tai et al., 2015)     0.8567        0.7966           0.2736
Dependency Tree-LSTM (Tai et al., 2015)   0.8676        0.8083           0.2532
combine-skip (Kiros et al., 2015)         0.8584        0.7916           0.2687
Doc2VecC                                  0.8381        0.7621           0.3053

5 CONCLUSION

We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that a document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms them not only in testing efficiency, but also in the expressiveness of the generated representations.

| rkslf-rVl | review | 6: Marginally above acceptance threshold | This paper presents a framework for creating document representations.
The main idea is to represent a document as an average of its word embeddings with a data-dependent regularization that favors informative or rare words while forcing common words to be close to 0.
Experiments on sentiment analysis and document classification show that the proposed method achieves the lowest error rates among the compared document embedding baselines.
While I like the motivation of finding the best way to encode a document into a vector, the paper does not offer significant technical contributions.
Most of the techniques are not new, and the main selling point is the simplicity and speed of the proposed method.
For this reason, I would like to see good results on more than two tasks to be convinced that this is the best way to learn document representations.
For RNN-LM, is the LM trained to minimize classification error, or is it trained as a language model? Did you use the final hidden state as the representation, or the average of all hidden states?
One of the most widely used methods to represent documents now is to run a bidirectional LSTM over the document and concatenate the final hidden states as the document representation.
I think it would be useful to know how the proposed method compares to this approach for tasks such as document classification or sentiment analysis. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
B1Igu2ogg | ICLR.cc/2017/conference | 2017 | Efficient Vector Representation for Documents through Corruption | ["Minmin Chen"] | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.
| ["Natural language processing", "Deep learning", "Semi-Supervised Learning"] | ABSTRACTWe present an efficient document representation learning framework, DocumentVector through Corruption (Doc2VecC). Doc2VecC represents each document asa simple average of word embeddings. It ensures a representation generated assuch captures the semantic meanings of the document during learning. A cor-ruption model is included, which introduces a data-dependent regularization thatfavors informative or rare words while forcing the embeddings of common andnon-discriminative ones to be close to zero. Doc2VecC produces significantlybetter word embeddings than Word2Vec. We compare Doc2VecC with severalstate-of-the-art document representation learning algorithms. The simple modelarchitecture introduced by Doc2VecC matches or out-performs the state-of-the-artin generating high-quality document representations for sentiment analysis, doc-ument classification as well as semantic relatedness tasks. The simplicity of themodel enables training on billions of words per hour on a single machine. Atthe same time, the model is very efficient in generating representations of unseendocuments at test time.1 I NTRODUCTIONText understanding starts with the challenge of finding machine-understandable representation thatcaptures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably themost commonly used document representations. Despite its simplicity, BoW works surprisinglywell for many tasks (Wang & Manning, 2012). However, by treating words and phrases as uniqueand discrete symbols, BoW often fails to capture the similarity between words or phrases and alsosuffers from sparsity and high dimensionality.Recent works on using neural networks to learn distributed vector representations of words havegained great popularity. The well celebrated Word2Vec (Mikolov et al., 2013a), by learning topredict the target word using its neighboring words, maps words of similar meanings to nearbypoints in the continuous vector space. The surprisingly simple model has succeeded in generatinghigh-quality word embeddings for tasks such as language modeling, text understanding and machinetranslation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. Itcan be trained on billions of words per hour on a single machine.Paragraph Vectors (Le & Mikolov, 2014) generalize the idea to learn vector representation for docu-ments. A target word is predicted by the word embeddings of its neighbors in together with a uniquedocument vector learned for each document. It outperforms established document representations,such as BoW and Latent Dirichlet Allocation (Blei et al., 2003), on various text understandingtasks (Dai et al., 2015). However, two caveats come with this approach: 1) the number of parame-ters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensiveto generate vector representations for unseen documents at test time.We propose an efficient model architecture, referred to as Document Vector through Corruption(Doc2VecC), to learn vector representations for documents. It is motivated by the observation thatlinear operations on the word embeddings learned by Word2Vec can sustain substantial amountof syntactic and semantic meanings of a phrase or a sentence (Mikolov et al., 2013b). 
For ex-ample, vec(“Russia”) + vec(“river”) is close to vec(“V olga River”) (Mikolov & Dean, 2013), and1Published as a conference paper at ICLR 2017vec(“king”) - vec(“man”) + vec(“women”) is close to vec(“queen”) (Mikolov et al., 2013b). InDoc2VecC, we represent each document as a simple average of the word embeddings of all thewords in the document. In contrast to existing approaches which post-process learned word em-beddings to form document representation (Socher et al., 2013; Mesnil et al., 2014), Doc2VecCenforces a meaningful document representation can be formed by averaging the word embeddingsduring learning . Furthermore, we include a corruption model that randomly remove words from adocument during learning, a mechanism that is critical to the performance and learning speed of ouralgorithm.Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupledfrom the size of the training corpus, depending only on the size of the vocabulary; 2. The modelarchitecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. Thenew framework implicitly introduces a data-dependent regularization, which favors rare or informa-tive words and suppresses words that are common but not discriminative; 4. Vector representationof a document can be generated by simply averaging the learned word embeddings of all the wordsin the document, which significantly boost test efficiency; 5. The vector representation generated byDoc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification aswell as semantic relatedness tasks.2 R ELATED WORKS AND NOTATIONSText representation learning has been extensively studied. Popular representations range from thesimplest BoW and its term-frequency based variants (Salton & Buckley, 1988), language modelbased methods (Croft & Lafferty, 2013; Mikolov et al., 2010; Kim et al., 2015), topic models (Deer-wester et al., 1990; Blei et al., 2003), Denoising Autoencoders and its variants (Vincent et al., 2008;Chen et al., 2012), and distributed vector representations (Mesnil et al., 2014; Le & Mikolov, 2014;Kiros et al., 2015). Another prominent line of work includes learning task-specific document rep-resentation with deep neural networks, such as CNN (Zhang & LeCun, 2015) or LSTM based ap-proaches (Tai et al., 2015; Dai & Le, 2015).In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that aremost similar to ours. There are two well-know model architectures used for both methods, referredto as Continuous Bag-of-Words (CBoW) and Skipgram models (Mikolov et al., 2013a). In thiswork, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we aregoing to use throughout the paper:D=fD1;;Dng: a training corpus of size n, in which each document Dicontains a variable-length sequence of words w1i;;wTii;V: the vocabulary used in the training corpus, of sizes v;x2Rv1: BoW of a document, where xj= 1iff wordjdoes appear in the document.ct2Rv1: BoW of the local context wtk;;wt1;wt+1;;wt+kat the target position t.ctj= 1iff wordjappears within the sliding window of the target;U2Rhv: the projection matrix from the input space to a hidden space of size h. We use uwtodenote the column in Ufor wordw, i.e., the “input“ vector of word w;V>2Rvh: the projection matrix from the hidden space to output. Similarly, we use vwtodenote the column in Vfor wordw, i.e., the “output“ vector of word w.Word2Vec. 
Word2Vec proposed a neural network architecture of an input layer, a projection layerparameterized by the matrix Uand an output layer by V>. It defines the probability of observingthe target word wtin a document Dgiven its local context ctasP(wtjct) =exp(v>wtUct)Pw02Vexp(v>w0Uct)The word vectors are then learned to maximize the log likelihood of observing the target word ateach position of the document. Various techniques (Mitchell & Lapata, 2010; Zanzotto et al., 2010;Yessenalina & Cardie, 2011; Grefenstette et al., 2013; Socher et al., 2013; Kusner et al., 2015)2Published as a conference paper at ICLR 2017have been studied to generate vector representations of documents from word embeddings, amongwhich the simplest approach is to use weighted average of word embeddings. Similarly, our methodforms document representation by averaging word embeddings of all the words in the document.Differently, as our model encodes the compositionality of words in the learned word embeddings,heuristic weighting at test time is not required.Paragraph Vectors. Paragraph Vectors, on the other hands, explicitly learns a document vectorwith the word embeddings. It introduces another projection matrix D2Rhn. Each column of Dacts as a memory of the global topic of the corresponding document. It then defines the probabilityof observing the target word wtin a document Dgiven its local context ctasP(wtjct;d) =exp(v>wt(Uct+d))Pw02Vexp(v>w0(Uct+d))where d2Dis the vector representation of the document. As we can see from this formula, thecomplexity of Paragraph Vectors grows with not only the size of the vocabulary, but also the size ofthe training corpus. While we can reasonably limit the size of a vocabulary to be within a millionfor most datasets, the size of a training corpus can easily go to billions. What is more concerning isthat, in order to come up with the vector representations of unseen documents, we need to performan expensive inference by appending more columns to Dand gradient descent on Dwhile fixingother parameters of the learned model.3 M ETHODSeveral works (Mikolov & Dean, 2013; Mikolov et al., 2013b) showcased that syntactic and seman-tic regularities of phrases and sentences are reasonably well preserved by adding or subtracting wordembeddings learned through Word2Vec. It prompts us to explore the option of simply representinga document as an average of word embeddings. Figure 1 illustrates the new model architecture.wt#1Wt+1Wt+2wpwqwrwtopeningfortheperformancepraisedbrazilceremonyword<vectorsdocument< vectorAverage/ConcatenateFigure 1: A new framework for learning document vectors.Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layeras well as an output layer to predict the target word, “ceremony” in this example. The embeddings ofneighboring words (“opening”, “for”, “the”) provide local context while the vector representation ofthe entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors,which directly learns a unique vector for each document, Doc2VecC represents each document asan average of the embeddings of words randomly sampled from the document (“performance” atpositionp, “praised” at position q, and “brazil” at position r).Huang et al. (2012) also proposed the idea of using average of word embeddings to represent theglobal context of a document. 
Different from their work, we choose to corrupt the original documentby randomly removing significant portion of words, and represent the document using only theembeddings of the words remained. This corruption mechanism offers us great speedup duringtraining as it significantly reduces the number of parameters to update in back propagation. At thesame time, as we are going to detail in the next section, it introduces a special form of regularization,which brings great performance improvement.3Published as a conference paper at ICLR 2017Here we describe the stochastic process we used to generate a global context at each update. Theglobal context, which we denote as ~x, is generated through a unbiased mask-out/drop-out corruption,in which we randomly overwrites each dimension of the original document xwith probability q. Tomake the corruption unbiased, we set the uncorrupted dimensions to 1=(1q)times its originalvalue. Formally,~xd=(0; with probability qxd1q;otherwise(1)Doc2VecC then defines the probability of observing a target word wtgiven its local context ctaswell as the global context ~xasP(wtjct;~x) =exp(v>wt(local contextz}|{Uct+global contextz}|{1TU~x))Pw02Vexp(v>w0Uct+1TU~x) (2)HereTis the length of the document. Exactly computing the probability is impractical, instead weapproximate it with negative sampling (Mikolov et al., 2013a).f(w;c;~x)logP(wtjct;~x)logv>w(Uc+1TU~x)+Xw0Pvlogv>w0(Uc+1TU~x)(3)herePvstands for a uniform distribution over the terms in the vocabulary. The two projectionmatrices UandVare then learned to minimize the loss:`=nXi=1TiXt=1f(wti;cti;~xti) (4)Given the learned projection matrix U, we then represent each document simply as an average ofthe embeddings of the words in the document,d=1TXw2Duw: (5)We are going to elaborate next why we choose to corrupt the original document with the corruptionmodel in eq.(1) during learning, and how it enables us to simply use the average word embeddingsas the vector representation for documents at test time.3.1 C ORRUPTION AS DATA -DEPENDENT REGULARIZATIONWe approximate the log likelihood for each instance f(w;c;~x)in eq.(4) with its Taylor expansionwith respect to ~xup to the second-order (Van Der Maaten et al., 2013; Wager et al., 2013; Chenet al., 2014). Concretely, we choose to expand at the mean of the corruption x=Ep(~xjx)[~x]:f(w;c;~x)f(w;c;x) + (~xx)>r~xf+12(~xx)>r2~xf(~xx)wherer~xfandr2~xfare the first-order (i.e., gradient) and second-order (i.e., Hessian) of the loglikelihood with respect to ~x. Expansion at the mean xis crucial as shown in the following steps.Let us assume that for each instance, we are going to sample the global context ~xinfinitely manytimes, and thus compute the expected log likelihood with respect to the corrupted ~x.Ep(~xjx)[f(w;c;~x)]f(w;c;x) +12trE[(~xx)(~xx)>]r2~xfThe linear term disappears as Ep(~xjx)[~xx] = 0 . We substitute in xfor the mean xof thecorrupting distribution (unbiased corruption) and the matrix x=E[(~xx)(~xx)>]for thevariance, and obtainEp(~xjx)[f(w;c;~x)]f(w;c;x) +12trxr2~xf(6)4Published as a conference paper at ICLR 2017As each word in a document is corrupted independently of others, the variance matrix xis simpli-fied to a diagonal matrix with jthelement equalsq1qx2j. 
As a result, we only need to compute thediagonal terms of the Hessian matrix r2~xf.Thejthdimension of the Hessian’s diagonal evaluated at the mean xis given by@2f@x2j=w;c;x(1w;c;x)(1Tv>wuj)2Xw0Pvw0;c;x(1w0;c;x)(1Tv>w0uj)2Plug the Hessian matrix and the variance matrix back into eq.(6), and then back to the loss definedin eq.(4), we can see that Doc2VecC intrinsically minimizes`=nXi=1TiXt=1f(wti;cti;xi) +q1qvXj=1R(uj) (7)Eachf(wti;cti;xi)in the first term measures the log likelihood of observing the target word wtigiven its local context ctiand the document vector di=1TUxi.As such, Doc2VecC enforces that adocument vector generated by averaging word embeddings can capture the global semantics of thedocument, and fill in information missed in the local context.The second term here is a data-dependent regularization. The regularization on the embedding ujof each word jtakes the following form,R(uj)/nXi=1TiXt=1x2ij"wti;cti;xi(1wti;cti;xi)(1Tv>wtiuj)2+Xw0Pvw0;cti;xi(1w0;cti;xi)(1Tv>w0uj)2#wherew;c;x=(v>w(Uc+1TUx))prescribes the confidence of predicting the target word wgivenits neighboring context cas well as the document vector d=1TUx.Closely examining R(uj)leads to several interesting findings: 1. the regularizer penalizes moreon the embeddings of common words. A word jthat frequently appears across the training corpus,i.e,xij= 1 often, will have a bigger regularization than a rare word; 2. on the other hand, theregularization is modulated by w;c;x(1w;c;x), which is small if w;c;x!1or0. In otherwords, if ujis critical to a confident prediction w;c;xwhen it is active, then the regularization isdiminished. Similar effect was observed for dropout training for logistic regression model (Wageret al., 2013) and denoising autoencoders (Chen et al., 2014).4 E XPERIMENTSWe evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semanticrelatedness task, along with several document representation learning algorithms. All experimentscan be reproduced using the code available at https://github.com/mchen24/iclr20174.1 B ASELINESWe compare against the following document representation baselines: bag-of-words (BoW) ;De-noising Autoencoders (DEA) (Vincent et al., 2008) , a representation learned from reconstructingoriginal document xusing corrupted one ~x. SDAs have been shown to be the state-of-the-art for sen-timent analysis tasks (Glorot et al., 2011). We used Kullback-Liebler divergence as the reconstruc-tion error and an affine encoder. To scale up the algorithm to large vocabulary, we only take into ac-count the non-zero elements of xin the reconstruction error and employed negative sampling for theremainings; Word2Vec (Mikolov et al., 2013a)+IDF , a representation generated through weightedaverage of word vectors learned using Word2Vec; Doc2Vec (Le & Mikolov, 2014) ;Skip-thoughtVectors(Kiros et al., 2015) , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to sentence level. It has been shown to produce highly generic sentence representationsthat apply to various natural language processing tasks. We also include RNNLM (Mikolov et al.,2010) , a recurrent neural network based language model in the comparison. 
In the semantic related-ness task, we further compare to LSTM-based methods (Tai et al., 2015) that have been reportedon this dataset.5Published as a conference paper at ICLR 2017Table 1: Classification error of a linear classifier trained on various document representations on theImdb dataset.Model Error rate % (include test) Error rate % (exclude test)Bag-of-Words (BOW) 12.53 12.59RNN-LM 13.59 13.59Denoising Autoencoders (DEA) 11.58 12.54Word2Vec + A VG 12.11 12.69Word2Vec + IDF 11.28 11.92Paragraph Vectors 10.81 12.10Skip-thought Vectors - 17.42Doc2VecC 10.48 11.704.2 S ENTIMENT ANALYSISFor sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movies reviewscategorized as either positive or negative. It comes with predefined train/test split (Maas et al.,2011): 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. Thetwo classes are balanced in the training and testing sets. We remove words that appear less than 10times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.Setup. We test the various representation learning algorithms under two settings: one follows thesame protocol proposed in (Mesnil et al., 2014), where representation is learned using all the avail-able data, including the test set; another one where the representation is learned using training andunlabeled set only. For both settings, a linear support vector machine (SVM) (Fan et al., 2008)is trained afterwards on the learned representation for classification. For Skip-thought Vectors, weused the generic model1trained on a much bigger book corpus to encode the documents. A vector of4800 dimensions, first 2400 from the uni-skip model, and the last 2400 from the bi-skip model, aregenerated for each document. In comparison, all the other algorithms produce a vector representa-tion of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parametersare tuned on a validation set subsampled from the training set.Accuracy. Comparing the two columns in Table 1, we can see that all the representation learn-ing algorithms benefits from including the testing data during the representation learning phrase.Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methodsoutperforms the other baselines, beating the BOW representation by 15%. In comparison withWord2Vec+IDF, which applies post-processing on learned word embeddings to form document rep-resentation, Doc2VecC naturally enforces document semantics to be captured by averaged wordembeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Au-toencoders (DEA) if the local context words are removed from the paradigm shown in Figure 1. Byincluding the context words, Doc2VecC allows the document vector to focus more on capturing theglobal context. Skip-thought vectors perform surprisingly poor on this dataset comparing to othermethods. We hypothesized that it is due to the length of paragraphs in this dataset. The averagelength of paragraphs in the IMDB movie review dataset is 296:5, much longer than the ones usedfor training and testing in the original paper, which is in the order of 10. As noted in (Tai et al.,2015), the performance of LSTM based method (similarly, the gated RNN used in Skip-thoughtvectors) drops significantly with increasing paragraph length, as it is hard to preserve state over longsequences of words.Time. 
Table 2 summarizes the time required by these algorithms to learn and generate the documentrepresentation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC secondthat. The number of parameters that needs to be back-propagated in each update was increased bythe number of surviving words in ~x. We found that both models are not sensitive to the corruptionrateqin the noise model. Since the learning time decreases with higher corruption rate, we usedq= 0:9throughout the experiments. Paragraph Vectors takes longer time to train as there aremore parameters (linear to the number of document in the learning set) to learn. At test time,Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as document1available at https://github.com/ryankiros/skip-thoughts6Published as a conference paper at ICLR 2017Table 2: Learning time and representation generation time required by different representation learn-ing algorithms.Model Learning time Generation timeDenoising Autoencoders 3m 23s 7sWord2Vec + IDF 2m 33s 7sParagraph Vectors 4m 54s 4m 17sSkip-thought 2h 2hDoc2VecC 4m 30s 7sTable 3: Words with embeddings closest to 0 learned by different algorithms.Word2Vec harp(118) distasteful(115) switzerland(101) shabby(103) fireworks(101) heav-ens(100) thornton(108) endeavor(100) dense(108) circumstance(119) debacle(103)ParaVectors harp(118) dense(108) reels(115) fireworks(101) its’(103) unnoticed(112) pony(102)fulfilled(107) heavens(100) bliss(110) canned(114) shabby(103) debacle(103)Doc2VecC ,(1099319) .(1306691) the(1340408) of(581667) and(651119) up(49871) to(537570)that(275240) time(48205) endeavor(100) here(21118) way(31302) own(13456)representation. Paragraph Vectors, on the other hand, requires another round of inference to producethe vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 secondsfor the other methods. As we did not re-train the Skip-thought vector models on this dataset, thetraining time2reported in the table is the time it takes to generate the embeddings for the 25,000training documents. Due to repeated high-dimensional matrix operations required for encoding longparagraphs, it takes fairly long time to generate the representations for these documents. Similarlyfor testing. The experiments were conducted on a desktop with Intel i7 2.2Ghz cpu.Data dependent regularization. As explained in Section 3.1, the corruption introduced inDoc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent butuninformative words. Here we conduct an experiment to exam the effect. We used a cutoff of 100in this experiment. Table 3 lists the words having the smallest l2norm of embeddings found bydifferent algorithms. The number inside the parenthesis after each word is the number of times thisword appears in the learning set. In word2Vec or Paragraph Vectors, the least frequent words haveembeddings that are close to zero, despite some of them being indicative of sentiment such as deba-cle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of wordsfrequently appear in the training set, but are uninformative, such as symbols and stop words.Subsampling frequent words. Note that for all the numbers reported, we applied the trick ofsubsampling of frequent words introduced in (Mikolov & Dean, 2013) to counter the imbalancebetween frequent and rare words. 
It is critical to the performance of simple Word2Vec+A VG as thesole remedy to diminish the contribution of common words in the final document representation. Ifwe were to remove this step, the error rate of Word2Vec+A VG will increases from 12:1%to13:2%.Doc2VecC on the other hand naturally exerts a stronger regularization toward embeddings of wordsthat are frequent but uninformative, therefore does not rely on this trick.4.3 W ORD ANALOGYIn table 3, we demonstrated that the corruption model introduced in Doc2VecC dampens the embed-dings of words which are common and non-discriminative (stop words). In this experiment, we aregoing to quantatively compare the word embeddings generated by Doc2VecC to the ones generatedby Word2Vec, or Paragraph Vectors on the word analogy task introduced by Mikolov et al. (2013a).The dataset contains five types of semantic questions, and nine types of syntactic questions, with atotal of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simplelinear algebraic operations on the word embeddings generated by different methods. Please refer tothe original paper for more details on the evaluation protocol.2As reported in the original paper, training of the skip-thought vector model on the book corpus datasettakes around 2 weeks on GPU.7Published as a conference paper at ICLR 20171M 2M 4M 8M 15M02040603:86:18:3 9:113:318:726:432:736:138:920:328:136:442:546:7Number of paragraphs used for learningAccuracy (%)ParagraphVectors Word2Vec Doc2VecC(a) h=501M 2M 4M 8M 15M02040605:17:510:9 10:2 10:223:634:742:448:250:724:334:144:152:658:2Number of paragraphs used for learningParagraphVectors Word2Vec Doc2VecC(b) h=100Figure 2: Accuracy on subset of the Semantic-Syntactic Word Relationship test set. Only questionscontaining words from the most frequent 30k words are included in the test.Semantic questions Word2Vec Doc2VecC Syntactic questions Word2Vec Doc2VecCcapital-common-countries 73.59 81.82 gram1-adjective-to-adverb 19.25 20.32capital-world 67.94 77.96 gram2-opposite 14.07 25.54currency 17.14 12.86 gram3-comparative 60.21 74.47city-in-state 34.49 42.86 gram4-superlative 52.87 55.40family 68.71 64.62 gram5-present-participle 56.34 65.81gram6-nationality-adjective 88.71 91.03gram7-past-tense 47.05 51.86gram8-plural 50.28 61.27gram9-plural-verbs 25.38 39.69Table 4: Top 1 accuracy on the 5 type of semantics and 9 types of syntactic questions.We trained the word embeddings of different methods using the English news dataset released underthe ACL workshop on statistical machine translation. The training set includes close to 15M para-graphs with 355M tokens. We compare the performance of word embeddings trained by differentmethods with increasing embedding dimensionality as well as increasing training data.We observe similar trends as in Mikolov et al. (2013a). Increasing embedding dimensionality aswell as training data size improves performance of the word embeddings on this task. However, theimprovement is diminishing. Doc2VecC produces word embeddings which performs significantlybetter than the ones generated by Word2Vec. We observe close to 20% uplift when we train on thefull training corpus. Paragraph vectors on the other hand performs surprisingly bad on this dataset.Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors reliesmostly on the unique document vectors to capture the information in a text document instead oflearning the word semantic or syntactic similarities. 
In Table 4, we list a detailed comparison of the performance of the word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.

4.4 DOCUMENT CLASSIFICATION

For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we selected 1,000 documents with a unique category label; 100 documents were used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this dataset, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size 107,691.

Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.

Model      BOW     DEA     Word2Vec + AVG   Word2Vec + IDF   ParagraphVectors   Doc2VecC
h = 100    36.03   32.30   33.2             33.16            35.78              31.92
h = 200    36.03   31.36   32.46            32.48            34.92              30.84
h = 500    36.03   31.10   32.02            32.13            33.93              30.43
h = 1000   36.03   31.13   31.78            32.06            33.02              30.24

Table 5 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing the representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.

Figure 3: Visualization of document vectors on the Wikipedia dataset using t-SNE; panel (a) Doc2Vec, panel (b) Doc2VecC.

Figure 3 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE (Maaten & Hinton, 2008). We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table 5.

Figure 4: Visualization of Wikipedia Doc2VecC vectors using t-SNE.

Figure 4 visualizes the vector representations generated by Doc2VecC with respect to a coarser categorization: we manually grouped the 100 categories into 7 coarse categories (television, albums, writers, musicians, athletes, species and actors). Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarse category are grouped together. The athletes category covers a wide range of sports descriptions (football, cricket, baseball, cycling, etc.), which explains why it is less concentrated. In the projection, we can see that documents belonging to the musician category are closer to those in the albums category than to those of athletes or species.
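A minimal sketch of how such a projection can be produced, assuming a matrix of Doc2VecC document vectors and coarse labels are available on disk (the file names below are hypothetical):

    import numpy as np
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    doc_vecs = np.load("doc2vecc_wiki_vectors.npy")   # (n_docs, h) document vectors
    labels = np.load("coarse_category_ids.npy")       # integer coarse-category id per doc

    # Project to 2D and color points by coarse category.
    proj = TSNE(n_components=2, init="pca", random_state=0).fit_transform(doc_vecs)
    plt.scatter(proj[:, 0], proj[:, 1], c=labels, s=2, cmap="tab10")
    plt.title("t-SNE of Doc2VecC document vectors")
    plt.show()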
4.5 SEMANTIC RELATEDNESS

We test Doc2VecC on the SemEval 2014 Task 1 semantic relatedness SICK dataset (Marelli et al., 2014). Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.

We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including bi-directional LSTMs and Tree-LSTMs [Footnote 3] trained from scratch on this dataset, and skip-thought vectors learned from a large book corpus [Footnote 4] (Zhu et al., 2015), which produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. In contrast to the vocabulary expansion technique used in (Kiros et al., 2015) to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embeddings as an initialization, and fine-tune the word and sentence representations on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only, and we did not use the relatedness score in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we used the exact same training and testing protocol as in (Kiros et al., 2015) to score each pair of sentences: given two sentence embeddings u1 and u2, we concatenate their component-wise product, u1 · u2, and their absolute difference, |u1 − u2|, as the feature representation.

Table 6 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly outperforms the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in (Socher et al., 2014), which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset (11.70% error rate vs. 17.42%). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length on the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.
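The pair-feature construction described above is simple enough to state directly; a minimal sketch (the function name is ours):

    import numpy as np

    def pair_features(u1, u2):
        """Feature vector for a sentence pair, following Kiros et al. (2015):
        concatenate the component-wise product and the absolute difference."""
        return np.concatenate([u1 * u2, np.abs(u1 - u2)])

A regressor trained on these features then predicts the relatedness score; we use the same predictor setup as in (Kiros et al., 2015).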
5 CONCLUSION

We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically ensures that the document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC is superior not only in testing efficiency, but also in the expressiveness of the generated representations.

[Footnote 3: The word representations were initialized using publicly available 300-dimensional GloVe vectors trained on 840 billion tokens of Common Crawl data.]
[Footnote 4: The dataset contains 11,038 books with over one billion words.]

Table 6: Test set results on the SICK semantic relatedness task. The first group of results are from the submissions to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group are methods based on LSTMs reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015).

Method                                     Pearson's r   Spearman's ρ   MSE
Illinois-LH                                0.7993        0.7538         0.3692
UNAL-NLP                                   0.8070        0.7489         0.3550
Meaning Factory                            0.8268        0.7721         0.3224
ECNU                                       0.8279        0.7689         0.3250
Mean vectors (Word2Vec + avg)              0.7577        0.6738         0.4557
DT-RNN (Socher et al., 2014)               0.7923        0.7319         0.3822
SDT-RNN (Socher et al., 2014)              0.7900        0.7304         0.3848
LSTM (Tai et al., 2015)                    0.8528        0.7911         0.2831
Bidirectional LSTM (Tai et al., 2015)      0.8567        0.7966         0.2736
Dependency Tree-LSTM (Tai et al., 2015)    0.8676        0.8083         0.2532
combine-skip (Kiros et al., 2015)          0.8584        0.7916         0.2687
Doc2VecC                                   0.8381        0.7621         0.3053 | rJBM9YbVg | Interesting corruption mechanism for document representation | 7: Good paper, accept | This paper proposes learning document embeddings as a sum of the constituent word embeddings, which are jointly learned and randomly dropped out ('corrupted') during training. While none of the pieces of this model are particularly novel, the result is an efficient learning algorithm for document representation with good empirical performance.
Joint training of word and document embeddings is not a new idea, nor is the idea of enforcing the document to be represented by the sum of its word embeddings (see, e.g. '“The Sum of Its Parts”: Joint Learning of Word and Phrase Representations with Autoencoders' by Lebret and Collobert). Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. Coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front.
On the other hand, it is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. Moreover, by construction, the embedding captures salient global information about the document -- it captures specifically that information that aids in local-context prediction. For such a simple model, the performance on sentiment analysis and document classification is quite encouraging.
Overall, despite the lack of novelty, the simplicity, efficiency, and performance of this model make it worthy of wider readership and study, and I recommend acceptance. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SyWvgP5el | ICLR.cc/2017/conference | 2017 | EPOpt: Learning Robust Neural Network Policies Using Model Ensembles | ["Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine"] | Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning. | ["Reinforcement Learning", "Applications"] | ABSTRACT

Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation.

1 INTRODUCTION

Reinforcement learning with powerful function approximators like deep neural networks (deep RL) has recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al., 2015; Silver et al., 2016), simulated control problems (Lillicrap et al., 2015; Mordatch et al., 2015b), and graphics (Peng et al., 2016). However, high sample complexity is a major barrier for directly applying model-free deep RL methods to physical control tasks. Model-free algorithms like Q-learning, actor-critic, and policy gradients are known to suffer from long learning times (Kakade, 2003), which is compounded when used in conjunction with expressive function approximators like deep neural networks (DNNs). The challenge of gathering samples from the real world is further exacerbated by issues of safety for the agent and environment, since sampling with partially learned policies could be unstable (García & Fernández, 2015).
Thus, model-free deep RL methods often require a prohibitively large number of potentially dangerous samples for physical control tasks.

Model-based methods, where the real-world target domain is approximated with a simulated source domain, provide an avenue to tackle the above challenges by learning policies using simulated data. The principal challenge with simulated training is the systematic discrepancy between source and target domains, and therefore, methods that compensate for systematic discrepancies (modeling errors) are needed to transfer results from simulations to the real world using RL. We show that the impact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to parametric model errors, as well as to unmodeled effects; and (2) adaptation of the source domain ensemble using data from the target domain to progressively make it a better approximation. This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al., 2015), or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone, 2009). While a number of model-free RL algorithms have been proposed (see, e.g., Duan et al. (2016) for a survey), their high sample complexity demands use of a simulator, effectively making them model-based. We show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing the robustness of DNN policies is particularly important to transfer their success from simulated tasks to physical systems.

In this paper, we propose the Ensemble Policy Optimization (EPOpt) algorithm for finding policies that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. ensemble of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training to learn robust policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric model errors and broadly competent performance for direct-transfer (also referred to as jumpstart, as in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training, we mean that model instances on which the policy performs poorly in the source distribution are sampled more often in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn highly optimized policies for specific model instances but are brittle under model perturbations. In our experiments, we did not observe significant loss in performance by requiring the policy to work on multiple models (for example, through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain.
Such unmodeled effects are a major issue when transferring from simulation to the real world. For the model adaptation step (ii), we present a simple method using approximate Bayesian updates, which progressively makes the source distribution a better approximation of the target domain. We evaluate the proposed methods on the hopper (12-dimensional state space; 3-dimensional action space) and half-cheetah (18-dimensional state space; 6-dimensional action space) benchmarks in MuJoCo. Our experimental results suggest that adversarial training on model ensembles produces robust policies which generalize better than policies trained on a single, maximum-likelihood model (of the source distribution) alone.

2 PROBLEM FORMULATION

We consider parametrized Markov Decision Processes (MDPs), which are tuples of the form: M(p) = ⟨S, A, T_p, R_p, γ, S_{0,p}⟩, where S and A are (continuous) states and actions respectively; T_p, R_p, and S_{0,p} are the state transition function, reward function, and initial state distribution respectively, all parametrized by p; and γ is the discount factor. Thus, we consider a set of MDPs with the same state and action spaces. Each MDP in this set could potentially have different transition functions, rewards, and initial state distributions. We use transition functions of the form S_{t+1} ∼ T_p(s_t, a_t), where T_p is a random process and S_{t+1} is a random variable.

We distinguish between source and target MDPs using M and W respectively. We also refer to M and W as source and target domains respectively, as is common in the transfer learning set-up. Our objective is to learn the optimal policy for W; and to do so, we have access to M(p). We assume that we have a distribution D over the source domains (MDPs) generated by a distribution over the parameters, P ≡ P(p), that captures our subjective belief about the parameters of W. Let P be parametrized by ψ (e.g. mean, standard deviation). For example, M could be a hopping task with reward proportional to hopping velocity, where falling down corresponds to a terminal state. For this task, p could correspond to parameters like torso mass, ground friction, and damping in joints, all of which affect the dynamics. Ideally, we would like the target domain to be in the model class, i.e. {∃ p | M(p) = W}. However, in practice, there are likely to be unmodeled effects, and we analyze this setting in our experiments. We wish to learn a policy π*(s) that performs well for all M ∼ D. Note that this robust policy does not have an explicit dependence on p, and we require it to perform well without knowledge of p.

3 LEARNING PROTOCOL AND EPOPT ALGORITHM

We follow the round-based learning protocol of Bayesian model-based RL. We use the term rounds when interacting with the target domain, and episodes when performing rollouts with the simulator. In each round, we interact with the target domain after computing the robust policy on the current (i.e. posterior) simulated source distribution. Following this, we update the source distribution using data from the target domain collected by executing the robust policy. Thus, in round i, we update two sets of parameters: θ_i, the parameters of the robust policy (neural network); and ψ_i, the parameters of the source distribution. The two key steps in this procedure are finding a robust policy given a source distribution, and updating the source distribution using data from the target domain. In this section, we present our approach for both of these steps.
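As a concrete sketch of the source-distribution sampling in the formulation above (ours, for illustration; the Gaussian parameters follow the hopper values in Table 1, and truncation to the listed ranges is an assumption):

    import numpy as np

    # psi = (mu, sigma) over (torso mass, ground friction, joint damping, armature)
    mu    = np.array([6.0, 2.0, 2.5, 1.0])
    sigma = np.array([1.5, 0.25, 1.0, 0.25])
    low   = np.array([3.0, 1.5, 1.0, 0.5])
    high  = np.array([9.0, 2.5, 4.0, 1.5])

    def sample_model_params(rng):
        """Draw one model instance p ~ P_psi (Gaussian, truncated by rejection)."""
        while True:
            p = rng.normal(mu, sigma)
            if np.all(p >= low) and np.all(p <= high):
                return p

    rng = np.random.default_rng(0)
    p_k = sample_model_params(rng)   # p_k would configure one simulator instance M(p_k)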
3.1 ROBUST POLICY SEARCH

We introduce the EPOpt algorithm for finding a robust policy using the source distribution. EPOpt is a policy gradient based meta-algorithm which uses batch policy optimization methods as a subroutine. Batch policy optimization algorithms (Williams, 1992; Kakade, 2001; Schulman et al., 2015) collect a batch of trajectories by rolling out the current policy, and use the trajectories to make a policy update. The basic structure of EPOpt is to sample a collection of models from the source distribution, sample trajectories from each of these models, and make a gradient update based on a subset of sampled trajectories. We first define evaluation metrics for the parametrized policy π_θ:

η_M(θ, p) = E_τ̃ [ Σ_{t=0}^{T−1} γ^t r_t(s_t, a_t) | p ],   (1)

η_D(θ) = E_{p∼P} [ η_M(θ, p) ] = E_{p∼P} [ E_τ̂ [ Σ_{t=0}^{T−1} γ^t r_t(s_t, a_t) | p ] ] = E_τ [ Σ_{t=0}^{T−1} γ^t r_t(s_t, a_t) ].

In (1), η_M(θ, p) is the evaluation of π_θ on the model M(p), with τ̃ being trajectories generated by M(p) and π_θ: τ̃ = {s_t, a_t, r_t}_{t=0}^{T}, where s_{t+1} ∼ T_p(s_t, a_t), s_0 ∼ S_{0,p}, r_t ∼ R_p(s_t, a_t), and a_t ∼ π_θ(s_t). Similarly, η_D(θ) is the evaluation of π_θ over the source domain distribution. The corresponding expectation is over trajectories τ generated by D and π_θ: τ = {s_t, a_t, r_t}_{t=0}^{T}, where s_{t+1} ∼ T_{p_t}(s_t, a_t), p_{t+1} = p_t, s_0 ∼ S_{0,p_0}, r_t ∼ R_{p_t}(s_t, a_t), a_t ∼ π_θ(s_t), and p_0 ∼ P. With this modified notation of trajectories, batch policy optimization can be invoked for policy search.

Optimizing η_D allows us to learn a policy that performs best in expectation over models in the source domain distribution. However, this does not necessarily lead to a robust policy, since there could be high variability in performance for different models in the distribution. To explicitly seek a robust policy, we use a softer version of the max-min objective suggested in robust control, and optimize for the conditional value at risk (CVaR) (Tamar et al., 2015):

max_{θ, y}  ∫_{F(θ)} η_M(θ, p) P(p) dp   s.t.   P( η_M(θ, P) ≤ y ) = ε,   (2)

where F(θ) = {p | η_M(θ, p) ≤ y} is the set of parameters corresponding to models that produce the worst ε-percentile of returns, and y provides the limit for the integral; η_M(θ, P) is the random variable of returns, which is induced by the distribution over model parameters; and ε is a hyperparameter which governs the level of relaxation from the max-min objective. The interpretation is that (2) maximizes the expected return for the worst ε-percentile of MDPs in the source domain distribution. We adapt the previous policy gradient formulation to approximately optimize the objective in (2). The resulting algorithm, which we call EPOpt-ε, generalizes learning a policy using an ensemble of source MDPs which are sampled from a source domain distribution.

In Algorithm 1, R(τ_k) ≡ Σ_{t=0}^{T−1} γ^t r_{t,k} denotes the discounted return obtained in trajectory sample τ_k. In line 7, we compute the ε-percentile value of returns from the N trajectories. In line 8, we find the subset of sampled trajectories which have returns lower than Q_ε. Line 9 calls one step of an underlying batch policy optimization subroutine on the subset of trajectories from line 8. For the CVaR objective, it is important to use a good baseline for the value function. Tamar et al. (2015) show that without a baseline, the resulting policy gradient is biased and not consistent. We use a linear function as the baseline with a time-varying feature vector to approximate the value function, similar to Duan et al. (2016). The parameters of the baseline are estimated using only the subset of trajectories with return less than Q_ε. We found that this approach led to empirically good results.

For small values of ε, we observed that using the sub-sampling step from the beginning led to unstable learning. Policy gradient methods adjust the parameters of the policy to increase the probability of trajectories with high returns and reduce the probability of poor trajectories; EPOpt, due to the sub-sampling step, emphasizes penalizing poor trajectories more. This might constrain the initial exploration needed to find good trajectories. Thus, we initially use a setting of ε = 1 for a few iterations before setting ε to the desired value. This corresponds to exploring initially to find promising trajectories and then rapidly reducing the probability of trajectories that do not generalize.
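As a concrete illustration (ours, not from the paper), the sub-sampling in lines 7-8 of Algorithm 1 can be sketched in a few lines of Python:

    import numpy as np

    def epsilon_subsample(trajectories, returns, epsilon=0.1):
        """Keep only the worst epsilon-fraction of sampled trajectories,
        approximating the CVaR objective of Eq. (2)."""
        q_eps = np.percentile(returns, 100 * epsilon)
        subset = [traj for traj, R in zip(trajectories, returns) if R <= q_eps]
        return subset, q_eps

With epsilon = 1 this reduces to ordinary ensemble training on all sampled trajectories, which is exactly the warm-up setting described above.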
Algorithm 1: EPOpt-ε for Robust Policy Search
1   Input: ψ, θ_0, n_iter, N, ε
2   for iteration i = 0, 1, 2, ..., n_iter do
3       for k = 1, 2, ..., N do
4           sample model parameters p_k ∼ P_ψ
5           sample a trajectory τ_k = {s_t, a_t, r_t, s_{t+1}}_{t=0}^{T−1} from M(p_k) using policy π(θ_i)
6       end
7       compute Q_ε = ε-percentile of {R(τ_k)}_{k=1}^{N}
8       select sub-set T = {τ_k : R(τ_k) ≤ Q_ε}
9       update policy: θ_{i+1} = BatchPolOpt(θ_i, T)
10  end

3.2 ADAPTING THE SOURCE DOMAIN DISTRIBUTION

In line with model-based Bayesian RL, we can adapt the ensemble distribution after observing trajectory data from the target domain. The Bayesian update can be written as:

P(P | τ_k) = (1/Z) × P(τ_k | P) × P(P) = (1/Z) × Π_{t=0}^{T−1} P(S_{t+1} = s_{t+1}^{(k)} | s_t^{(k)}, a_t^{(k)}, p) × P(P = p),   (3)

where 1/Z is the partition function (normalization) required to make the probabilities sum to 1, S_{t+1} is the random variable representing the next state, and {s_t^{(k)}, a_t^{(k)}, s_{t+1}^{(k)}}_{t=0}^{T} are the data observed along trajectory τ_k. We try to explain the target trajectory using the stochasticity in the state-transition function, which also models sensor errors. This provides the following expression for the likelihood:

P(S_{t+1} | s_t, a_t, p) ∼ T_p(s_t, a_t).   (4)

We follow a sampling based approach to calculate the posterior, by sampling a set of model parameters {p_i} = [p_1, p_2, ..., p_M] from a sampling distribution P_S(p_i). Consequently, using Bayes' rule and importance sampling, we have:

P(p_i | τ_k) ∝ L(τ_k | p_i) × P_P(p_i) / P_S(p_i),   (5)

where P_P(p_i) is the probability of drawing p_i from the prior distribution, and L(τ_k | p_i) is the likelihood of generating the observed trajectory with model parameters p_i. The weighted samples from the posterior can be used to estimate a parametric model, as we do in this paper. Alternatively, one could approximate the continuous probability distribution using discrete weighted samples, as in the case of particle filters. In cases where the prior has very low probability density in certain parts of the parameter space, it might be advantageous to choose a sampling distribution different from the prior. The likelihood can be factored using the Markov property as: L(τ_k | p_i) = Π_t P(S_{t+1} = s_{t+1}^{(k)} | s_t^{(k)}, a_t^{(k)}, p_i). This simple model adaptation rule allows us to illustrate the utility of EPOpt for robust policy search, as well as its integration with model adaptation to learn policies in cases where the target model could be very different from the initially assumed distribution.
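A minimal sketch of Eqs. (4)-(5) in code (ours; mean_step, the Gaussian transition-noise assumption, and the function names are illustrative, not the paper's implementation):

    import numpy as np

    def log_traj_likelihood(traj, mean_step, p, noise_std):
        """log L(tau | p) under the Markov factorization, assuming Gaussian
        noise around the simulator's mean prediction mean_step(s, a, p)."""
        ll = 0.0
        for s, a, s_next in traj:
            resid = (s_next - mean_step(s, a, p)) / noise_std
            ll += -0.5 * float(resid @ resid)
        return ll

    def posterior_weights(log_liks, log_prior, log_sampling):
        """Normalized importance weights of Eq. (5), computed in log space
        for numerical stability."""
        log_w = np.asarray(log_liks) + log_prior - log_sampling
        log_w -= log_w.max()
        w = np.exp(log_w)
        return w / w.sum()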
4 EXPERIMENTS

We evaluated the proposed EPOpt-ε algorithm on the 2D hopper (Erez et al., 2011) and half-cheetah (Wawrzynski, 2009) benchmarks using the MuJoCo physics simulator (Todorov et al., 2012). [Footnote 1: Supplementary video: https://youtu.be/w1YJ9vwaoto] Both tasks involve complex second-order dynamics and direct torque control. Underactuation, high dimensionality, and contact discontinuities make these tasks challenging reinforcement learning benchmarks. These challenges, when coupled with systematic parameter discrepancies, can quickly degrade the performance of policies and make them unstable, as we show in the experiments.

The batch policy optimization sub-routine is implemented using TRPO. We parametrize the stochastic policy using the scheme presented in Schulman et al. (2015). The policy is represented with a Gaussian distribution, the mean of which is parametrized using a neural network with two hidden layers. Each hidden layer has 64 units, with a tanh non-linearity, and the final output layer is made of linear units. Normally distributed independent random variables are added to the output of this neural network, and we also learn the standard deviation of their distributions. Our experiments are aimed at answering the following questions:

1. How does the performance of standard policy search methods (like TRPO) degrade in the presence of systematic physical differences between the training and test domains, as might be the case when training in simulation and testing in the real world?
2. Does training on a distribution of models with EPOpt improve the performance of the policy when tested under various model discrepancies, and how much does ensemble training degrade overall performance (e.g. due to acquiring a more conservative strategy)?
3. How does the robustness of the policy to physical parameter discrepancies change when using the robust EPOpt-ε variant of our method?
4. Can EPOpt learn policies that are robust to unmodeled effects, that is, discrepancies in physical parameters between source and target domains that do not vary in the source domain ensemble?
5. When the initial model ensemble differs substantially from the target domain, can the ensemble be adapted efficiently, and how much data from the target domain is required for this?

In all the comparisons, performance refers to the average undiscounted return per trajectory or episode (we consider finite-horizon episodic problems). In addition to the previously defined performance, we also use the 10th percentile of the return distribution as a proxy for the worst-case return.

4.1 COMPARISON TO STANDARD POLICY SEARCH

In Figure 1, we evaluate the performance of standard TRPO and EPOpt(ε = 0.1) on the hopper task, in the presence of a simple parametric discrepancy in the physics of the system between the training (source) and test (target) domains. The plots show the performance of various policies on test domains with different torso mass. The first three plots show policies that are each trained on a single torso mass in the source domain, while the last plot illustrates the performance of EPOpt, which is trained on a Gaussian mass distribution. The results show that no single torso mass value produces a policy that is successful in all target domains. However, the EPOpt policy succeeds almost uniformly for all tested mass values. Furthermore, the results show that there is almost no degradation in the performance of EPOpt for any mass setting, suggesting that the EPOpt policy does not suffer substantially from adopting a more robust strategy.

Figure 1: Performance of hopper policies when testing on target domains with different torso masses. The first three plots (blue, green, and red) show the performance of policies trained with TRPO on source domains with torso mass 3, 6, and 9, respectively (denoted by m in the legend). The rightmost plot shows the performance of EPOpt(ε = 0.1) trained on a Gaussian source distribution with mean mass μ = 6 and standard deviation σ = 1.5. The shaded regions show the 10th and 90th percentile of the return distribution. Policies trained using traditional approaches on a single mass value are unstable for even slightly different masses, making the hopper fall over when trying to move forward. In contrast, the EPOpt policy is stable and achieves a high level of performance on the entire range of masses considered. Further, the EPOpt policy does not suffer from degradation in performance as a consequence of adopting a more robust policy.
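For concreteness, a sketch of the stochastic policy parametrization described in this section (two tanh hidden layers of 64 units, linear output, and a learned state-independent log standard deviation); this is our own minimal numpy version under those assumptions, not the paper's code:

    import numpy as np

    def init_policy(s_dim, a_dim, h=64, rng=None, scale=0.1):
        if rng is None:
            rng = np.random.default_rng(0)
        return dict(
            W1=scale * rng.standard_normal((s_dim, h)), b1=np.zeros(h),
            W2=scale * rng.standard_normal((h, h)),     b2=np.zeros(h),
            W3=scale * rng.standard_normal((h, a_dim)), b3=np.zeros(a_dim),
            log_std=np.zeros(a_dim))   # state-independent, learned in practice

    def sample_action(params, s, rng):
        """Gaussian policy: tanh MLP mean plus learned-scale Gaussian noise."""
        h1 = np.tanh(s @ params["W1"] + params["b1"])
        h2 = np.tanh(h1 @ params["W2"] + params["b2"])
        mean = h2 @ params["W3"] + params["b3"]   # linear output units
        return mean + np.exp(params["log_std"]) * rng.standard_normal(mean.shape)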
Figure 2: On the left is an illustration of the simulated 2D hopper task studied in this paper. On the right, we depict the performance of policies for various model instances of the hopper task. The performance is depicted as a heat map for various model configurations, the parameters of which are given on the x and y axes. The adversarially trained policy, EPOpt(ε = 0.1), is observed to generalize to a wider range of models and is more robust.

4.2 ANALYSIS OF ROBUSTNESS

Next, we analyze the robustness of policies trained using EPOpt on the hopper domain. For this analysis, we construct a source distribution which varies four different physical parameters: torso mass, ground friction, foot joint damping, and joint inertia (armature). This distribution is presented in Table 1. Using this source distribution, we compare three different methods: (1) standard policy search (TRPO) trained on a single model corresponding to the mean parameters in Table 1; (2) EPOpt(ε = 1) trained on the source distribution; (3) EPOpt(ε = 0.1), i.e. the adversarially trained policy, again trained on the previously described source distribution. The aim of the comparison is to study direct-transfer performance, similar to the robustness evaluations common in robust controller design (Zhou et al., 1996). Hence, we learn a policy using each of the methods, and then test the policies on different model instances (i.e. different combinations of physical parameters) without any adaptation. The results of this comparison are summarized in Figure 2, where we present the performance of the policy for testing conditions corresponding to different torso mass and friction values, which we found to have the most pronounced impact on performance. The results indicate that EPOpt(ε = 0.1) produces highly robust policies. A similar analysis for the 10th percentile of the return distribution (a softer version of worst-case performance), the half-cheetah task, and different ε settings is presented in the appendix.

Table 1: Initial source domain distribution.

Hopper            mean    std     low    high
mass              6.0     1.5     3.0    9.0
ground friction   2.0     0.25    1.5    2.5
joint damping     2.5     1.0     1.0    4.0
armature          1.0     0.25    0.5    1.5

Half-Cheetah      mean    std     low    high
mass              6.0     1.5     3.0    9.0
ground friction   0.5     0.1     0.3    0.7
joint damping     1.5     0.5     0.5    2.5
armature          0.125   0.04    0.05   0.23

Figure 3: Comparison between policies trained on a fixed maximum-likelihood model with mass (6), and an ensemble where all models have the same mass (6) and other parameters varying as described in Table 1.

4.3 ROBUSTNESS TO UNMODELED EFFECTS

To analyze the robustness to unmodeled effects, our next experiment considers the setting where the source domain distribution is obtained by varying friction, damping, and armature as in Table 1, but does not consider a distribution over torso mass. Specifically, all models in the source domain distribution have the same torso mass (value of 6), but we will evaluate the policy trained on this distribution on target domains where the torso mass is different. Figure 3 indicates that the EPOpt(ε = 0.1) policy is robust to a broad range of torso masses even when its variation is not considered. However, as expected, this policy is not as robust as the case when mass is also modeled as part of the source domain distribution.
4.4 MODEL ADAPTATION

The preceding experiments show that EPOpt can find robust policies, but the source distribution in these experiments was chosen to be broad enough such that the target domain is not too far from high-density regions of the distribution. However, for real-world problems, we might not have the domain knowledge to identify a good source distribution in advance. In such settings, model (source) adaptation allows us to change the parameters of the source distribution using data gathered from the target domain. Additionally, model adaptation is helpful when the parameters of the target domain could change over time, for example due to wear and tear in a physical system. To illustrate model adaptation, we performed an experiment where the target domain was very far from the high-density regions of the initial source distribution, as depicted in Figure 4(a). In this experiment, the source distribution varies the torso mass and ground friction. We observe that, progressively, the source distribution becomes a better approximation of the target domain and consequently the performance improves. In this case, since we followed a sampling based approach, we used a uniform sampling distribution, and weighted each sample with the importance weight as described in Section 3.2. Eventually, after 10 iterations, the source domain distribution is able to accurately match the target domain. Figure 4(b) depicts the learning curve, and we see that a robust policy with return more than 2500, which roughly corresponds to a situation where the hopper is able to move forward without falling down for the duration of the episode, can be discovered with just 5 trajectories from the target domain. Subsequently, the policy improves near monotonically, and EPOpt finds a good policy with just 11 episodes worth of data from the target domain. In contrast, to achieve the same level of performance on the target domain, completely model-free methods like TRPO would require more than 2 × 10^4 trajectories when the neural network parameters are initialized randomly.

Figure 4: (a) visualizes the source distribution during model adaptation on the hopper task, where mass and friction coefficient are varied in the source domain. The red cross indicates the unknown parameters of the target domain. The contours in the plot indicate the distribution over models (we assume a Gaussian distribution). Lighter colors and more concentrated contour lines indicate regions of higher density. Each iteration corresponds to one round (episode) of interaction with the target domain. The high-density regions gradually move toward the true model, while maintaining probability mass over a range of parameters which can explain the behavior of the target domain. (b) presents the corresponding learning curve, where the shaded region describes the 10th and 90th percentiles of the performance distribution, and the solid line is the average performance.
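One adaptation round, i.e. refitting the Gaussian source distribution to the importance-weighted parameter samples (as visualized in Figure 4(a)), can be sketched as follows; this moment-matching step is our illustrative reading of the procedure, and the weights are assumed to be normalized as in the posterior_weights sketch above:

    import numpy as np

    def refit_source_distribution(samples, weights):
        """Moment-match a Gaussian P_psi to importance-weighted samples:
        samples is (M, d), weights is (M,) and sums to 1."""
        samples, weights = np.asarray(samples), np.asarray(weights)
        mean = weights @ samples
        centered = samples - mean
        cov = (weights[:, None] * centered).T @ centered
        return mean, cov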
5 RELATED WORK

Robust control is a branch of control theory that formally studies the development of robust policies (Zhou et al., 1996; Nilim & Ghaoui, 2005; Lim et al., 2013). However, typically no distribution over source or target tasks is assumed, and a worst-case analysis is performed. Most results from this field have been concentrated around linear systems or finite MDPs, which often cannot adequately model the complexities of real-world tasks. The set-up of model-based Bayesian RL maintains a belief over models for decision making under uncertainty (Vlassis et al., 2012; Ghavamzadeh et al., 2015). In Bayesian RL, through interaction with the target domain, the uncertainty is reduced to find the correct or closest model. Application of this idea in its full general form is difficult, and requires either restrictive assumptions like finite MDPs (Poupart et al., 2006) or Gaussian dynamics (Ross et al., 2008), or task-specific innovations. Previous methods have also suggested treating uncertain model parameters as unobserved state variables in a continuous POMDP framework, and solving the POMDP to get an optimal exploration-exploitation trade-off (Duff, 2003; Porta et al., 2006). While this approach is general, and allows automatic learning of epistemic actions, extending such methods to large continuous control tasks like those considered in this paper is difficult.

Risk-sensitive RL methods (Delage & Mannor, 2010; Tamar et al., 2015) have been proposed to act as a bridge between robust control and Bayesian RL. These approaches allow for using subjective model belief priors, prevent overly conservative policies, and enjoy some strong guarantees typically associated with robust control. However, their application in high-dimensional continuous control tasks has not been sufficiently explored. We refer readers to García & Fernández (2015) for a survey of related risk-sensitive RL methods in the context of robustness and safety.

Standard model-based control methods typically operate by finding a maximum-likelihood estimate of the target model (Ljung, 1998; Ross & Bagnell, 2012; Deisenroth et al., 2013), followed by policy optimization. Use of model ensembles to produce robust controllers was explored recently in robotics. Mordatch et al. (2015a) use a trajectory optimization approach and an ensemble with a small finite set of models; whereas we follow a sampling based direct policy search approach over a continuous distribution of uncertain parameters, and also show domain adaptation. Sampling based approaches can be applied to complex models and discrete MDPs which cannot be planned through easily. Similarly, Wang et al. (2010) use an ensemble of models, but their goal is to optimize for average-case performance, as opposed to transferring to a target MDP. Wang et al. (2010) use a hand-engineered policy class whose parameters are optimized with CMA-ES. EPOpt, on the other hand, can optimize expressive neural network policies directly.
In addition, we show model adaptation, the effectiveness of the sub-sampling step (the ε < 1 case), and robustness to unmodeled effects, all of which are important for transferring to a target MDP.

Learning of parametrized skills (da Silva et al., 2012) is also concerned with finding policies for a distribution of parametrized tasks. However, this is primarily geared towards situations where task parameters are revealed during test time. Our work is motivated by situations where target task parameters (e.g. friction) are unknown. A number of methods have also been suggested to reduce sample complexity when provided with either a baseline policy (Thomas et al., 2015; Kakade & Langford, 2002), expert demonstrations (Levine & Koltun, 2013; Argall et al., 2009), or an approximate simulator (Tamar et al., 2012; Abbeel et al., 2006). These are complementary to our work, in the sense that our policy, which has good direct-transfer performance, can be used to sample from the target domain, and other off-policy methods could be explored for policy improvement.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we presented the EPOpt-ε algorithm for training robust policies on ensembles of source domains. Our method provides for training of robust policies, and supports an adversarial training regime designed to provide good direct-transfer performance. We also describe how our approach can be combined with Bayesian model adaptation to adapt the source domain ensemble to a target domain using a small amount of target domain experience. Our experimental results demonstrate that the ensemble approach provides for highly robust and generalizable policies in fairly complex simulated robotic tasks. Our experiments also demonstrate that Bayesian model adaptation can produce distributions over models that lead to better policies on the target domain than more standard maximum likelihood estimation, particularly in the presence of unmodeled effects.

Although our method exhibits good generalization performance, the adaptation algorithm we use currently relies on sampling the parameter space, which is computationally intensive as the number of variable physical parameters increases. We observed that (adaptive) sampling from the prior leads to fast and reliable adaptation if the true model does not have very low probability in the prior. However, when this assumption breaks, we require a different sampling distribution which could produce samples from all regions of the parameter space. This is a general drawback of Bayesian adaptation methods. In future work, we plan to explore alternative sampling and parameterization schemes, including non-parametric distributions. An eventual end-goal would be to replace the physics simulator entirely with learned Bayesian neural network models, which could be adapted with limited data from the physical system. These models could be pre-trained using physics based simulators like MuJoCo to get a practical initialization of neural network parameters. Such representations are likely useful when dealing with high-dimensional inputs like simulated vision from rendered images, or tasks with complex dynamics like deformable bodies, which are needed to train highly generalizable policies that can successfully transfer to physical robots acting in the real world.

ACKNOWLEDGMENTS

The authors would like to thank Emo Todorov, Sham Kakade, and students of Emo Todorov's research group for insightful comments about the work. The authors would also like to thank Emo Todorov for the MuJoCo simulator.
Aravind Rajeswaran and Balaraman Ravindran acknowledge financialsupport from ILDS, IIT Madras. | r1tEPyWEg | Review | 7: Good paper, accept | The paper looks at the problem of transferring a policy learned in a simulator to a target real-world system. The proposed approach considers using an ensemble of simulated source domains, along with adversarial training, to learn a robust policy that is able to generalize to several target domains.
Overall, the paper tackles an interesting problem, and provides a reasonable solution. The notion of adversarial training used here does not seem the same as other recent literature (e.g. on GANs). It would be useful to add more details on a few components, as discussed in the question/response round. I also encourage including the results with alternative policy gradient subroutines, even if they don’t perform well (e.g. Reinforce), as well as results with and without the baseline on the value function. Such results are very useful to other researchers.
| |
SyWvgP5el | ICLR.cc/2017/conference | 2017 | EPOpt: Learning Robust Neural Network Policies Using Model Ensembles | ["Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine"] | Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning. | ["Reinforcement Learning", "Applications"] | ABSTRACT

Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation.

1 INTRODUCTION

Reinforcement learning with powerful function approximators like deep neural networks (deep RL) has recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al., 2015; Silver et al., 2016), simulated control problems (Lillicrap et al., 2015; Mordatch et al., 2015b), and graphics (Peng et al., 2016). However, high sample complexity is a major barrier for directly applying model-free deep RL methods to physical control tasks. Model-free algorithms like Q-learning, actor-critic, and policy gradients are known to suffer from long learning times (Kakade, 2003), which is compounded when used in conjunction with expressive function approximators like deep neural networks (DNNs). The challenge of gathering samples from the real world is further exacerbated by issues of safety for the agent and environment, since sampling with partially learned policies could be unstable (García & Fernández, 2015).
Thus, model-free deep RL methods often require a prohibitively large number of potentially dangerous samples for physical control tasks.

Model-based methods, where the real-world target domain is approximated with a simulated source domain, provide an avenue to tackle the above challenges by learning policies using simulated data. The principal challenge with simulated training is the systematic discrepancy between source and target domains, and therefore, methods that compensate for systematic discrepancies (modeling errors) are needed to transfer results from simulations to the real world using RL. We show that the impact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to parametric model errors, as well as to unmodeled effects; and (2) adaptation of the source domain ensemble using data from the target domain to progressively make it a better approximation. This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al., 2015), or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone, 2009). While a number of model-free RL algorithms have been proposed (see, e.g., Duan et al. (2016) for a survey), their high sample complexity demands use of a simulator, effectively making them model-based. We show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing the robustness of DNN policies is particularly important to transfer their success from simulated tasks to physical systems.

In this paper, we propose the Ensemble Policy Optimization (EPOpt) algorithm for finding policies that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. ensemble of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training to learn robust policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric model errors and broadly competent performance for direct-transfer (also referred to as jumpstart, as in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training, we mean that model instances on which the policy performs poorly in the source distribution are sampled more often in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn highly optimized policies for specific model instances but are brittle under model perturbations. In our experiments, we did not observe significant loss in performance by requiring the policy to work on multiple models (for example, through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain.
Such unmodeled effects are a major issue when transferring from simulation to the real world. For the model adaptation step (ii), we present a simple method using approximate Bayesian updates, which progressively makes the source distribution a better approximation of the target domain. We evaluate the proposed methods on the hopper (12-dimensional state space; 3-dimensional action space) and half-cheetah (18-dimensional state space; 6-dimensional action space) benchmarks in MuJoCo. Our experimental results suggest that adversarial training on model ensembles produces robust policies which generalize better than policies trained on a single, maximum-likelihood model (of the source distribution) alone.

2 PROBLEM FORMULATION

We consider parametrized Markov Decision Processes (MDPs), which are tuples of the form: M(p) = ⟨S, A, T_p, R_p, γ, S_{0,p}⟩, where S and A are (continuous) states and actions respectively; T_p, R_p, and S_{0,p} are the state transition function, reward function, and initial state distribution respectively, all parametrized by p; and γ is the discount factor. Thus, we consider a set of MDPs with the same state and action spaces. Each MDP in this set could potentially have different transition functions, rewards, and initial state distributions. We use transition functions of the form S_{t+1} ∼ T_p(s_t, a_t), where T_p is a random process and S_{t+1} is a random variable.

We distinguish between source and target MDPs using M and W respectively. We also refer to M and W as source and target domains respectively, as is common in the transfer learning set-up. Our objective is to learn the optimal policy for W; and to do so, we have access to M(p). We assume that we have a distribution D over the source domains (MDPs) generated by a distribution over the parameters, P ≡ P(p), that captures our subjective belief about the parameters of W. Let P be parametrized by ψ (e.g. mean, standard deviation). For example, M could be a hopping task with reward proportional to hopping velocity, where falling down corresponds to a terminal state. For this task, p could correspond to parameters like torso mass, ground friction, and damping in joints, all of which affect the dynamics. Ideally, we would like the target domain to be in the model class, i.e. {∃ p | M(p) = W}. However, in practice, there are likely to be unmodeled effects, and we analyze this setting in our experiments. We wish to learn a policy π*(s) that performs well for all M ∼ D. Note that this robust policy does not have an explicit dependence on p, and we require it to perform well without knowledge of p.

3 LEARNING PROTOCOL AND EPOPT ALGORITHM

We follow the round-based learning protocol of Bayesian model-based RL. We use the term rounds when interacting with the target domain, and episodes when performing rollouts with the simulator. In each round, we interact with the target domain after computing the robust policy on the current (i.e. posterior) simulated source distribution. Following this, we update the source distribution using data from the target domain collected by executing the robust policy. Thus, in round i, we update two sets of parameters: θ_i, the parameters of the robust policy (neural network); and ψ_i, the parameters of the source distribution. The two key steps in this procedure are finding a robust policy given a source distribution, and updating the source distribution using data from the target domain. In this section, we present our approach for both of these steps.

3.1 ROBUST POLICY SEARCH

We introduce the EPOpt algorithm for finding a robust policy using the source distribution.
EPOpt is a policy gradient based meta-algorithm which uses batch policy optimization methods as a subroutine. Batch policy optimization algorithms (Williams, 1992; Kakade, 2001; Schulman et al., 2015) collect a batch of trajectories by rolling out the current policy, and use the trajectories to make a policy update. The basic structure of EPOpt is to sample a collection of models from the source distribution, sample trajectories from each of these models, and make a gradient update based on a subset of sampled trajectories. We first define evaluation metrics for the parametrized policy π_θ:

η_M(θ, p) = E_τ̃ [ Σ_{t=0}^{T−1} γ^t r_t(s_t, a_t) | p ],   (1)

η_D(θ) = E_{p∼P} [ η_M(θ, p) ] = E_{p∼P} [ E_τ̂ [ Σ_{t=0}^{T−1} γ^t r_t(s_t, a_t) | p ] ] = E_τ [ Σ_{t=0}^{T−1} γ^t r_t(s_t, a_t) ].

In (1), η_M(θ, p) is the evaluation of π_θ on the model M(p), with τ̃ being trajectories generated by M(p) and π_θ: τ̃ = {s_t, a_t, r_t}_{t=0}^{T}, where s_{t+1} ∼ T_p(s_t, a_t), s_0 ∼ S_{0,p}, r_t ∼ R_p(s_t, a_t), and a_t ∼ π_θ(s_t). Similarly, η_D(θ) is the evaluation of π_θ over the source domain distribution. The corresponding expectation is over trajectories τ generated by D and π_θ: τ = {s_t, a_t, r_t}_{t=0}^{T}, where s_{t+1} ∼ T_{p_t}(s_t, a_t), p_{t+1} = p_t, s_0 ∼ S_{0,p_0}, r_t ∼ R_{p_t}(s_t, a_t), a_t ∼ π_θ(s_t), and p_0 ∼ P. With this modified notation of trajectories, batch policy optimization can be invoked for policy search.

Optimizing η_D allows us to learn a policy that performs best in expectation over models in the source domain distribution. However, this does not necessarily lead to a robust policy, since there could be high variability in performance for different models in the distribution. To explicitly seek a robust policy, we use a softer version of the max-min objective suggested in robust control, and optimize for the conditional value at risk (CVaR) (Tamar et al., 2015):

max_{θ, y}  ∫_{F(θ)} η_M(θ, p) P(p) dp   s.t.   P( η_M(θ, P) ≤ y ) = ε,   (2)

where F(θ) = {p | η_M(θ, p) ≤ y} is the set of parameters corresponding to models that produce the worst ε-percentile of returns, and y provides the limit for the integral; η_M(θ, P) is the random variable of returns, which is induced by the distribution over model parameters; and ε is a hyperparameter which governs the level of relaxation from the max-min objective. The interpretation is that (2) maximizes the expected return for the worst ε-percentile of MDPs in the source domain distribution. We adapt the previous policy gradient formulation to approximately optimize the objective in (2). The resulting algorithm, which we call EPOpt-ε, generalizes learning a policy using an ensemble of source MDPs which are sampled from a source domain distribution.

In Algorithm 1, R(τ_k) ≡ Σ_{t=0}^{T−1} γ^t r_{t,k} denotes the discounted return obtained in trajectory sample τ_k. In line 7, we compute the ε-percentile value of returns from the N trajectories. In line 8, we find the subset of sampled trajectories which have returns lower than Q_ε. Line 9 calls one step of an underlying batch policy optimization subroutine on the subset of trajectories from line 8. For the CVaR objective, it is important to use a good baseline for the value function. Tamar et al. (2015) show that without a baseline, the resulting policy gradient is biased and not consistent. We use a linear function as the baseline with a time-varying feature vector to approximate the value function, similar to Duan et al. (2016). The parameters of the baseline are estimated using only the subset of trajectories with return less than Q_ε. We found that this approach led to empirically good results.

For small values of ε, we observed that using the sub-sampling step from the beginning led to unstable learning.
Algorithm 1: EPOpt-$\epsilon$ for Robust Policy Search
1: Input: $\psi$, $\theta_0$, $n_{iter}$, $N$, $\epsilon$
2: for iteration $i = 0, 1, 2, \ldots, n_{iter}$ do
3:   for $k = 1, 2, \ldots, N$ do
4:     sample model parameters $p_k \sim \mathcal{P}_\psi$
5:     sample a trajectory $\tau_k = \{s_t, a_t, r_t, s_{t+1}\}_{t=0}^{T-1}$ from $\mathcal{M}(p_k)$ using policy $\pi(\theta_i)$
6:   end
7:   compute $Q_\epsilon$ = $\epsilon$ percentile of $\{R(\tau_k)\}_{k=1}^{N}$
8:   select the sub-set $\mathbb{T} = \{\tau_k : R(\tau_k) \le Q_\epsilon\}$
9:   update the policy: $\theta_{i+1} = \text{BatchPolOpt}(\theta_i, \mathbb{T})$
10: end

Policy gradient methods adjust the parameters of the policy to increase the probability of trajectories with high returns and reduce the probability of poor trajectories. EPOpt, due to the sub-sampling step, emphasizes penalizing poor trajectories more. This might constrain the initial exploration needed to find good trajectories. Thus, we initially use a setting of $\epsilon = 1$ for a few iterations before setting $\epsilon$ to the desired value. This corresponds to exploring initially to find promising trajectories, and then rapidly reducing the probability of trajectories that do not generalize.

3.2 ADAPTING THE SOURCE DOMAIN DISTRIBUTION

In line with model-based Bayesian RL, we can adapt the ensemble distribution after observing trajectory data from the target domain. The Bayesian update can be written as:

$$\mathbb{P}(\mathcal{P} \mid \tau_k) = \frac{1}{Z}\, \mathbb{P}(\tau_k \mid \mathcal{P})\, \mathbb{P}(\mathcal{P}) = \frac{1}{Z} \prod_{t=0}^{T-1} \mathbb{P}\left(S_{t+1} = s_{t+1}^{(k)} \,\middle|\, s_t^{(k)}, a_t^{(k)}, p\right) \times \mathbb{P}(\mathcal{P} = p), \qquad (3)$$

where $\frac{1}{Z}$ is the partition function (normalization) required to make the probabilities sum to 1, $S_{t+1}$ is the random variable representing the next state, and $\{s_t^{(k)}, a_t^{(k)}, s_{t+1}^{(k)}\}_{t=0}^{T}$ are the data observed along trajectory $\tau_k$. We try to explain the target trajectory using the stochasticity in the state-transition function, which also models sensor errors. This provides the following expression for the likelihood:

$$\mathbb{P}(S_{t+1} \mid s_t, a_t, p) \equiv \mathcal{T}_p(s_t, a_t). \qquad (4)$$

We follow a sampling-based approach to calculate the posterior, by sampling a set of model parameters $p_i = [p_1, p_2, \ldots, p_M]$ from a sampling distribution $\mathbb{P}_S(p_i)$. Consequently, using Bayes rule and importance sampling, we have:

$$\mathbb{P}(p_i \mid \tau_k) \propto \frac{\mathcal{L}(\tau_k \mid p_i)\, \mathbb{P}_\mathcal{P}(p_i)}{\mathbb{P}_S(p_i)}, \qquad (5)$$

where $\mathbb{P}_\mathcal{P}(p_i)$ is the probability of drawing $p_i$ from the prior distribution, and $\mathcal{L}(\tau_k \mid p_i)$ is the likelihood of generating the observed trajectory with model parameters $p_i$. The weighted samples from the posterior can be used to estimate a parametric model, as we do in this paper. Alternatively, one could approximate the continuous probability distribution using discrete weighted samples, as in the case of particle filters. In cases where the prior has very low probability density in certain parts of the parameter space, it might be advantageous to choose a sampling distribution different from the prior. The likelihood can be factored using the Markov property as $\mathcal{L}(\tau_k \mid p_i) = \prod_t \mathbb{P}(S_{t+1} = s_{t+1}^{(k)} \mid s_t^{(k)}, a_t^{(k)}, p_i)$. This simple model adaptation rule allows us to illustrate the utility of EPOpt for robust policy search, as well as its integration with model adaptation to learn policies in cases where the target model could be very different from the initially assumed distribution.
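The following is a minimal sketch of this sampling-based update for a Gaussian source distribution over a single physical parameter, assuming Python/NumPy and a user-supplied `log_likelihood(trajectory, p)` that implements $\log \mathcal{L}(\tau_k \mid p_i)$ via the factored transition model; all names are illustrative, not from the paper's code.

```python
import numpy as np

def adapt_source(mu, sigma, trajectory, log_likelihood, n_samples=1000, seed=None):
    rng = np.random.default_rng(seed)
    # Sampling distribution P_S: uniform over a broad range (its constant
    # density cancels after normalization of the importance weights).
    p = rng.uniform(mu - 4.0 * sigma, mu + 4.0 * sigma, size=n_samples)
    # Unnormalized log importance weights: log L(tau | p_i) + log P_P(p_i).
    log_prior = -0.5 * ((p - mu) / sigma) ** 2 - np.log(sigma)
    log_w = np.array([log_likelihood(trajectory, p_i) for p_i in p]) + log_prior
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Re-fit the parametric (Gaussian) source distribution to the weighted samples.
    new_mu = float(np.sum(w * p))
    new_sigma = float(np.sqrt(np.sum(w * (p - new_mu) ** 2)))
    return new_mu, new_sigma
```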
4 EXPERIMENTS

We evaluated the proposed EPOpt-$\epsilon$ algorithm on the 2D hopper (Erez et al., 2011) and half-cheetah (Wawrzynski, 2009) benchmarks using the MuJoCo physics simulator (Todorov et al., 2012) (supplementary video: https://youtu.be/w1YJ9vwaoto). Both tasks involve complex second-order dynamics and direct torque control. Underactuation, high dimensionality, and contact discontinuities make these tasks challenging reinforcement learning benchmarks. These challenges, when coupled with systematic parameter discrepancies, can quickly degrade the performance of policies and make them unstable, as we show in the experiments.

The batch policy optimization sub-routine is implemented using TRPO. We parametrize the stochastic policy using the scheme presented in Schulman et al. (2015). The policy is represented with a Gaussian distribution, the mean of which is parametrized using a neural network with two hidden layers. Each hidden layer has 64 units, with a tanh non-linearity, and the final output layer is made of linear units. Normally distributed independent random variables are added to the output of this neural network, and we also learn the standard deviation of their distributions.
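For concreteness, here is a minimal PyTorch sketch of this policy class (sizes shown for the hopper task); this is an illustrative reconstruction under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GaussianMLPPolicy(nn.Module):
    """Gaussian policy: mean from a 2-hidden-layer tanh MLP, learned log-std."""
    def __init__(self, obs_dim=12, act_dim=3, hidden=64):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),            # linear output units
        )
        # state-independent standard deviation, learned alongside the network
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mu = self.mean(obs)
        return torch.distributions.Normal(mu, self.log_std.exp())  # sample() for actions
```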
Our experiments are aimed at answering the following questions:

1. How does the performance of standard policy search methods (like TRPO) degrade in the presence of systematic physical differences between the training and test domains, as might be the case when training in simulation and testing in the real world?
2. Does training on a distribution of models with EPOpt improve the performance of the policy when tested under various model discrepancies, and how much does ensemble training degrade overall performance (e.g. due to acquiring a more conservative strategy)?
3. How does the robustness of the policy to physical parameter discrepancies change when using the robust EPOpt-$\epsilon$ variant of our method?
4. Can EPOpt learn policies that are robust to unmodeled effects – that is, discrepancies in physical parameters between source and target domains that do not vary in the source domain ensemble?
5. When the initial model ensemble differs substantially from the target domain, can the ensemble be adapted efficiently, and how much data from the target domain is required for this?

In all the comparisons, performance refers to the average undiscounted return per trajectory or episode (we consider finite-horizon episodic problems). In addition to the previously defined performance, we also use the 10th percentile of the return distribution as a proxy for the worst-case return.

4.1 COMPARISON TO STANDARD POLICY SEARCH

In Figure 1, we evaluate the performance of standard TRPO and EPOpt ($\epsilon = 0.1$) on the hopper task, in the presence of a simple parametric discrepancy in the physics of the system between the training (source) and test (target) domains. The plots show the performance of various policies on test domains with different torso mass. The first three plots show policies that are each trained on a single torso mass in the source domain, while the last plot illustrates the performance of EPOpt, which is trained on a Gaussian mass distribution.

[Figure 1: four panels plotting performance (0–4000) against torso mass (3–9), titled m = 3, m = 6, m = 9, and Ensemble.] Figure 1: Performance of hopper policies when testing on target domains with different torso masses. The first three plots (blue, green, and red) show the performance of policies trained with TRPO on source domains with torso mass 3, 6, and 9, respectively (denoted by $m =$ in the legend). The rightmost plot shows the performance of EPOpt ($\epsilon = 0.1$) trained on a Gaussian source distribution with mean mass $\mu = 6$ and standard deviation $\sigma = 1.5$. The shaded regions show the 10th and 90th percentile of the return distribution. Policies trained using traditional approaches on a single mass value are unstable for even slightly different masses, making the hopper fall over when trying to move forward. In contrast, the EPOpt policy is stable and achieves a high level of performance on the entire range of masses considered. Further, the EPOpt policy does not suffer from degradation in performance as a consequence of adopting a more robust policy.

The results show that no single torso mass value produces a policy that is successful in all target domains. However, the EPOpt policy succeeds almost uniformly for all tested mass values. Furthermore, the results show that there is almost no degradation in the performance of EPOpt for any mass setting, suggesting that the EPOpt policy does not suffer substantially from adopting a more robust strategy.

4.2 ANALYSIS OF ROBUSTNESS

Next, we analyze the robustness of policies trained using EPOpt on the hopper domain. For this analysis, we construct a source distribution which varies four different physical parameters: torso mass, ground friction, foot joint damping, and joint inertia (armature). This distribution is presented in Table 1. Using this source distribution, we compare between three different methods: (1) standard policy search (TRPO) trained on a single model corresponding to the mean parameters in Table 1; (2) EPOpt ($\epsilon = 1$) trained on the source distribution; (3) EPOpt ($\epsilon = 0.1$) – i.e. the adversarially trained policy, again trained on the previously described source distribution. The aim of the comparison is to study direct-transfer performance, similar to the robustness evaluations common in robust controller design (Zhou et al., 1996). Hence, we learn a policy using each of the methods, and then test policies on different model instances (i.e. different combinations of physical parameters) without any adaptation. The results of this comparison are summarized in Figure 2, where we present the performance of the policy for testing conditions corresponding to different torso mass and friction values, which we found to have the most pronounced impact on performance. The results indicate that EPOpt ($\epsilon = 0.1$) produces highly robust policies. A similar analysis for the 10th percentile of the return distribution (a softer version of worst-case performance), the half-cheetah task, and different $\epsilon$ settings are presented in the appendix.

Figure 2: On the left is an illustration of the simulated 2D hopper task studied in this paper. On the right, we depict the performance of policies for various model instances of the hopper task. The performance is depicted as a heat map for various model configurations, parameters of which are given on the x and y axes. The adversarially trained policy, EPOpt ($\epsilon = 0.1$), is observed to generalize to a wider range of models and is more robust.

Table 1: Initial source domain distribution

Hopper            μ      σ     low   high
mass              6.0    1.5   3.0   9.0
ground friction   2.0    0.25  1.5   2.5
joint damping     2.5    1.0   1.0   4.0
armature          1.0    0.25  0.5   1.5

Half-Cheetah      μ      σ     low   high
mass              6.0    1.5   3.0   9.0
ground friction   0.5    0.1   0.3   0.7
joint damping     1.5    0.5   0.5   2.5
armature          0.125  0.04  0.05  0.2

[Figure 3 plots performance (0–4000) against torso mass (3–9) for two policies: Ensemble (unmodeled) and Maximum-Likelihood.] Figure 3: Comparison between policies trained on a fixed maximum-likelihood model with mass (6), and an ensemble where all models have the same mass (6) and other parameters varying as described in Table 1.

4.3 ROBUSTNESS TO UNMODELED EFFECTS

To analyze the robustness to unmodeled effects, our next experiment considers the setting where the source domain distribution is obtained by varying friction, damping, and armature as in Table 1, but does not consider a distribution over torso mass.
Specifically, all models in the source domain distribution have the same torso mass (value of 6), but we will evaluate the policy trained on this distribution on target domains where the torso mass is different. Figure 3 indicates that the EPOpt ($\epsilon = 0.1$) policy is robust to a broad range of torso masses even when its variation is not considered. However, as expected, this policy is not as robust as the case when mass is also modeled as part of the source domain distribution.

4.4 MODEL ADAPTATION

The preceding experiments show that EPOpt can find robust policies, but the source distribution in these experiments was chosen to be broad enough such that the target domain is not too far from high-density regions of the distribution. However, for real-world problems, we might not have the domain knowledge to identify a good source distribution in advance. In such settings, model (source) adaptation allows us to change the parameters of the source distribution using data gathered from the target domain. Additionally, model adaptation is helpful when the parameters of the target domain could change over time, for example due to wear and tear in a physical system. To illustrate model adaptation, we performed an experiment where the target domain was very far from the high-density regions of the initial source distribution, as depicted in Figure 4(a). In this experiment, the source distribution varies the torso mass and ground friction. We observe that, progressively, the source distribution becomes a better approximation of the target domain and consequently the performance improves. In this case, since we followed a sampling-based approach, we used a uniform sampling distribution, and weighted each sample with the importance weight as described in Section 3.2. Eventually, after 10 iterations, the source domain distribution is able to accurately match the target domain. Figure 4(b) depicts the learning curve, and we see that a robust policy with return more than 2500, which roughly corresponds to a situation where the hopper is able to move forward without falling down for the duration of the episode, can be discovered with just 5 trajectories from the target domain. Subsequently, the policy improves near monotonically, and EPOpt finds a good policy with just 11 episodes worth of data from the target domain. In contrast, to achieve the same level of performance on the target domain, completely model-free methods like TRPO would require more than $2 \times 10^4$ trajectories when the neural network parameters are initialized randomly.

[Figure 4(a): contour plots of the source distribution over torso mass (x-axis, 0–20) and friction (y-axis, 1.0–3.0) at iterations 0, 1, 2, and 7; (b): performance (0–3500) against adaptation iterations (0–10).] Figure 4: (a) Visualizes the source distribution during model adaptation on the hopper task, where mass and friction coefficient are varied in the source domain. The red cross indicates the unknown parameters of the target domain. The contours in the plot indicate the distribution over models (we assume a Gaussian distribution). Lighter colors and more concentrated contour lines indicate regions of higher density. Each iteration corresponds to one round (episode) of interaction with the target domain.
The high-density regions gradually move toward the true model, while maintaining probability mass over a range of parameters which can explain the behavior of the target domain. Figure 4(b) presents the corresponding learning curve, where the shaded region describes the 10th and 90th percentiles of the performance distribution, and the solid line is the average performance.

5 RELATED WORK

Robust control is a branch of control theory which formally studies the development of robust policies (Zhou et al., 1996; Nilim & Ghaoui, 2005; Lim et al., 2013). However, typically no distribution over source or target tasks is assumed, and a worst-case analysis is performed. Most results from this field have been concentrated around linear systems or finite MDPs, which often cannot adequately model the complexities of real-world tasks. The set-up of model-based Bayesian RL maintains a belief over models for decision making under uncertainty (Vlassis et al., 2012; Ghavamzadeh et al., 2015). In Bayesian RL, through interaction with the target domain, the uncertainty is reduced to find the correct or closest model. Application of this idea in its full general form is difficult, and requires either restrictive assumptions like finite MDPs (Poupart et al., 2006) or Gaussian dynamics (Ross et al., 2008), or task-specific innovations. Previous methods have also suggested treating uncertain model parameters as unobserved state variables in a continuous POMDP framework, and solving the POMDP to get an optimal exploration-exploitation trade-off (Duff, 2003; Porta et al., 2006). While this approach is general, and allows automatic learning of epistemic actions, extending such methods to large continuous control tasks like those considered in this paper is difficult.

Risk-sensitive RL methods (Delage & Mannor, 2010; Tamar et al., 2015) have been proposed to act as a bridge between robust control and Bayesian RL. These approaches allow for using subjective model belief priors, prevent overly conservative policies, and enjoy some strong guarantees typically associated with robust control. However, their application in high-dimensional continuous control tasks has not been sufficiently explored. We refer readers to García & Fernández (2015) for a survey of related risk-sensitive RL methods in the context of robustness and safety.

Standard model-based control methods typically operate by finding a maximum-likelihood estimate of the target model (Ljung, 1998; Ross & Bagnell, 2012; Deisenroth et al., 2013), followed by policy optimization. The use of model ensembles to produce robust controllers was explored recently in robotics. Mordatch et al. (2015a) use a trajectory optimization approach and an ensemble with a small finite set of models; whereas we follow a sampling-based direct policy search approach over a continuous distribution of uncertain parameters, and also show domain adaptation. Sampling-based approaches can be applied to complex models and discrete MDPs which cannot be planned through easily. Similarly, Wang et al. (2010) use an ensemble of models, but their goal is to optimize for average-case performance as opposed to transferring to a target MDP. Wang et al. (2010) use a hand-engineered policy class whose parameters are optimized with CMA-ES. EPOpt, on the other hand, can optimize expressive neural network policies directly.
In addition, we show model adaptation, the effectiveness of the sub-sampling step (the $\epsilon < 1$ case), and robustness to unmodeled effects, all of which are important for transferring to a target MDP.

Learning of parametrized skills (da Silva et al., 2012) is also concerned with finding policies for a distribution of parametrized tasks. However, this is primarily geared towards situations where task parameters are revealed during test time. Our work is motivated by situations where target task parameters (e.g. friction) are unknown. A number of methods have also been suggested to reduce sample complexity when provided with either a baseline policy (Thomas et al., 2015; Kakade & Langford, 2002), expert demonstration (Levine & Koltun, 2013; Argall et al., 2009), or an approximate simulator (Tamar et al., 2012; Abbeel et al., 2006). These are complementary to our work, in the sense that our policy, which has good direct-transfer performance, can be used to sample from the target domain, and other off-policy methods could be explored for policy improvement.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we presented the EPOpt-$\epsilon$ algorithm for training robust policies on ensembles of source domains. Our method provides for training of robust policies, and supports an adversarial training regime designed to provide good direct-transfer performance. We also describe how our approach can be combined with Bayesian model adaptation to adapt the source domain ensemble to a target domain using a small amount of target domain experience. Our experimental results demonstrate that the ensemble approach provides for highly robust and generalizable policies in fairly complex simulated robotic tasks. Our experiments also demonstrate that Bayesian model adaptation can produce distributions over models that lead to better policies on the target domain than more standard maximum likelihood estimation, particularly in the presence of unmodeled effects.

Although our method exhibits good generalization performance, the adaptation algorithm we use currently relies on sampling the parameter space, which is computationally intensive as the number of variable physical parameters increases. We observed that (adaptive) sampling from the prior leads to fast and reliable adaptation if the true model does not have very low probability in the prior. However, when this assumption breaks, we require a different sampling distribution which could produce samples from all regions of the parameter space. This is a general drawback of Bayesian adaptation methods. In future work, we plan to explore alternative sampling and parameterization schemes, including non-parametric distributions. An eventual end-goal would be to replace the physics simulator entirely with learned Bayesian neural network models, which could be adapted with limited data from the physical system. These models could be pre-trained using physics-based simulators like MuJoCo to get a practical initialization of neural network parameters. Such representations are likely useful when dealing with high-dimensional inputs like simulated vision from rendered images or tasks with complex dynamics like deformable bodies, which are needed to train highly generalizable policies that can successfully transfer to physical robots acting in the real world.

ACKNOWLEDGMENTS

The authors would like to thank Emo Todorov, Sham Kakade, and students of Emo Todorov's research group for insightful comments about the work. The authors would also like to thank Emo Todorov for the MuJoCo simulator.
Aravind Rajeswaran and Balaraman Ravindran acknowledge financial support from ILDS, IIT Madras. | BJwiMAWVe | ICLR 2017 conference review | 7: Good paper, accept | The paper addresses systematic discrepancies between simulated and real-world policy control domains. The proposed method contains two ideas: 1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to errors, and 2) adaptation of the source domain ensemble using data from a (real-world) target domain.
> Significance
Paper addresses an important and significant problem. The approach taken in addressing it is also interesting.
> Clarity
Paper is well written, but does require domain knowledge to understand.
My main concerns were well addressed by the rebuttal and corresponding revisions to the paper. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SyWvgP5el | ICLR.cc/2017/conference | 2017 | EPOpt: Learning Robust Neural Network Policies Using Model Ensembles | ["Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine"] | Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods, where the real-world target domain is approximated using a simulated source domain, provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning. | ["Reinforcement Learning", "Applications"] | ABSTRACT

Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods, where the real-world target domain is approximated using a simulated source domain, provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation.

1 INTRODUCTION

Reinforcement learning with powerful function approximators like deep neural networks (deep RL) has recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al., 2015; Silver et al., 2016), simulated control problems (Lillicrap et al., 2015; Mordatch et al., 2015b), and graphics (Peng et al., 2016). However, high sample complexity is a major barrier for directly applying model-free deep RL methods to physical control tasks. Model-free algorithms like Q-learning, actor-critic, and policy gradients are known to suffer from long learning times (Kakade, 2003), which is compounded when used in conjunction with expressive function approximators like deep neural networks (DNNs). The challenge of gathering samples from the real world is further exacerbated by issues of safety for the agent and environment, since sampling with partially learned policies could be unstable (García & Fernández, 2015).
Thus, model-free deep RL methods often require a prohibitively large number of potentially dangerous samples for physical control tasks.

Model-based methods, where the real-world target domain is approximated with a simulated source domain, provide an avenue to tackle the above challenges by learning policies using simulated data. The principal challenge with simulated training is the systematic discrepancy between source and target domains, and therefore, methods that compensate for systematic discrepancies (modeling errors) are needed to transfer results from simulations to the real world using RL. We show that the impact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to parametric model errors, as well as to unmodeled effects; and (2) adaptation of the source domain ensemble using data from the target domain to progressively make it a better approximation. This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al., 2015), or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone, 2009). While a number of model-free RL algorithms have been proposed (see, e.g., Duan et al. (2016) for a survey), their high sample complexity demands use of a simulator, effectively making them model-based. We show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing the robustness of DNN policies is particularly important to transfer their success from simulated tasks to physical systems.

In this paper, we propose the Ensemble Policy Optimization (EPOpt) algorithm for finding policies that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. ensemble of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training, to learn robust policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric model errors and broadly competent performance for direct transfer (also referred to as jumpstart, as in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training, we mean that model instances on which the policy performs poorly in the source distribution are sampled more often, in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn highly optimized policies for specific model instances, but are brittle under model perturbations. In our experiments, we did not observe significant loss in performance by requiring the policy to work on multiple models (for example, through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain.
Aravind Rajeswaran and Balaraman Ravindran acknowledge financial support from ILDS, IIT Madras. | BkwBVorNl | Ensemble training and transfer, a good submission | 8: Top 50% of accepted papers, clear accept | This paper explores ensemble optimisation in the context of policy-gradient training. Ensemble training has been a low-hanging fruit for many years in this space, and this paper finally touches on this interesting subject. The paper is well written and accessible. In particular, the questions posed in Section 4 are well posed and interesting.
That said, the paper does have some very weak points, most obviously that all of its results are for a very particular choice of domain+parameters. I eagerly look forward to the journal version where these experiments are repeated for all sorts of source domain/target domain/parameter combinations.
<rant
Finally, a stylistic comment that the authors can feel free to ignore. I don't like the trend of every paper coming up with a new, wEiRDLY cAsEd acronym name. Especially here, when the idea is so simple. Why not use words? English words from the dictionary. Instead of "EPOpt" and "EPOpt-e", you can write "ensemble training" and "robust ensemble training". Is that not clearer?
/> | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkUDvt5gg | ICLR.cc/2017/conference | 2017 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | ["Ronan Collobert", "Christian Puhrsch", "Gabriel Synnaeve"] | This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC (Graves et al., 2006) while being simpler. We show competitive results in word error rate on the Librispeech corpus (Panayotov et al., 2015) with MFCC features, and promising results from raw waveform. | ["Deep learning", "Speech", "Structured prediction"] | ABSTRACT

This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC (Graves et al., 2006) while being simpler. We show competitive results in word error rate on the Librispeech corpus (Panayotov et al., 2015) with MFCC features, and promising results from raw waveform.

1 INTRODUCTION

We present an end-to-end system for speech recognition, going from the speech signal (e.g. Mel-Frequency Cepstral Coefficients (MFCC), power spectrum, or raw waveform) to the transcription. The acoustic model is trained using letters (graphemes) directly, which removes the need for an intermediate (human or automatic) phonetic transcription. Indeed, the classical pipeline to build state-of-the-art systems for speech recognition consists in first training an HMM/GMM model to force align the units on which the final acoustic model operates (most often context-dependent phone states). This approach takes its roots in HMM/GMM training (Woodland & Young, 1993). The improvements brought by deep neural networks (DNNs) (Mohamed et al., 2012; Hinton et al., 2012) and convolutional neural networks (CNNs) (Sercu et al., 2015; Soltau et al., 2014) for acoustic modeling only extend this training pipeline.

The current state of the art on Librispeech (the dataset that we used for our evaluations) uses this approach too (Panayotov et al., 2015; Peddinti et al., 2015b), with an additional step of speaker adaptation (Saon et al., 2013; Peddinti et al., 2015a). Recently, Senior et al. (2014) proposed GMM-free training, but the approach still requires generating a force alignment. An approach that cut ties with the HMM/GMM pipeline (and with force alignment) was to train with a recurrent neural network (RNN) (Graves et al., 2013) for phoneme transcription. There are now competitive end-to-end approaches with acoustic models topped with RNN layers, as in (Hannun et al., 2014; Miao et al., 2015; Saon et al., 2015; Amodei et al., 2015), trained with a sequence criterion (Graves et al., 2006). However, these models are computationally expensive, and thus take a long time to train.

Compared to classical approaches that need phonetic annotation (often derived from a phonetic dictionary, rules, and generative training), we propose to train the model end-to-end, using graphemes directly.
Compared to sequence criterion based approaches that train directly from speech signal to graphemes (Miao et al., 2015), we propose a simple(r) architecture (23 million parameters for our best model, vs. 100 million parameters in (Amodei et al., 2015)) based on convolutional networks for the acoustic model, topped with a graph transformer network (Bottou et al., 1997), trained with a simpler sequence criterion. Our word error rate on clean speech is slightly better than (Hannun et al., 2014), and slightly worse than (Amodei et al., 2015), in particular factoring in that they train on 12,000 hours while we only train on the 960 h available in LibriSpeech's train set. Finally, some of our models are also trained on the raw waveform, as in (Palaz et al., 2013; 2015; Sainath et al., 2015). The rest of the paper is structured as follows: the next section presents the convolutional networks used for acoustic modeling, along with the automatic segmentation criterion. The following section shows experimental results comparing different features, the criterion, and our current best word error rates on LibriSpeech.

2 ARCHITECTURE

Our speech recognition system is a standard convolutional neural network (LeCun & Bengio, 1995) fed with various different features, trained through an alternative to the Connectionist Temporal Classification (CTC) (Graves et al., 2006), and coupled with a simple beam-search decoder. In the following sub-sections, we detail each of these components.

2.1 FEATURES

Figure 1: Our neural network architecture for raw wave. The first two layers are convolutions with strides. The last two layers are convolutions with kw = 1, which are equivalent to fully connected layers. Power spectrum and MFCC based networks do not have the first layer.

We consider three types of input features for our model: MFCCs, power spectrum, and raw wave. MFCCs are carefully designed speech-specific features, often found in classical HMM/GMM speech systems (Woodland & Young, 1993) because of their dimensionality compression (13 coefficients are often enough to span speech frequencies). Power-spectrum features are found in most recent deep learning acoustic modeling features (Amodei et al., 2015). Raw wave has been somewhat explored in a few recent works (Palaz et al., 2013; 2015). ConvNets have the advantage of being flexible enough to be used with any of these input feature types. Our acoustic models output letter scores (one score per letter, given a dictionary L).

2.2 CONVNET ACOUSTIC MODEL

The acoustic models we considered in this paper are all based on standard 1D convolutional neural networks (ConvNets). ConvNets interleave convolution operations with pointwise non-linearity operations. Often ConvNets also include pooling layers: this type of layer allows the network to "see" a larger context, without increasing the number of parameters, by locally aggregating the output of the previous convolution operation. Instead, our networks leverage striding convolutions.
Given an input sequence (x_t)_{t=1...T_x} with T_x frames of d_x-dimensional vectors, a convolution with kernel width kw, stride dw and output frame size d_y computes the following:

$$y^i_t = b_i + \sum_{j=1}^{d_x} \sum_{k=1}^{kw} w_{i,j,k}\, x^j_{dw \times (t-1)+k}, \qquad \forall\, 1 \le i \le d_y, \qquad (1)$$

where b ∈ R^{d_y} and w ∈ R^{d_y × d_x × kw} are the parameters of the convolution (to be learned).

Pointwise non-linear layers are added after convolutional layers. In our experience, we surprisingly found that using hyperbolic tangents, their piecewise linear counterpart HardTanh (as in (Palaz et al., 2015)) or ReLU units leads to similar results.

There are some slight variations between the architectures, depending on the input features. MFCC-based networks need less striding, as standard MFCC filters are applied with large strides on the input raw sequence. With power spectrum-based and raw wave-based networks, we observed that the overall stride of the network was more important than where the convolutions with strides were placed. We thus found it preferable to set the strided convolutions near the first input layers of the network, as it leads to the fastest architectures: with power spectrum features or raw wave, the input sequences are very long and the first convolutions are thus the most expensive ones.

The last layer of our convolutional network outputs one score per letter in the letter dictionary (d_y = |L|). Our architecture for raw wave is shown in Figure 1 and is inspired by (Palaz et al., 2015). The architectures for both power spectrum and MFCC features do not include the first layer. The full network can be seen as a non-linear convolution, with a kernel width of size 31280 and stride equal to 320; given the sample rate of our data is 16 kHz, label scores are produced using a window of 1955 ms, with steps of 20 ms.
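To make Eq. (1) concrete, here is a minimal NumPy sketch (ours, for illustration only; the paper's actual implementation is in Torch7/C, and all names below are hypothetical) of the strided 1D convolution described above:

```python
import numpy as np

def conv1d_strided(x, w, b, dw):
    """Strided 1D convolution following Eq. (1).

    x:  input sequence, shape (Tx, dx)
    w:  kernel weights, shape (dy, dx, kw)
    b:  bias, shape (dy,)
    dw: stride
    Returns y of shape (Ty, dy), with Ty = (Tx - kw) // dw + 1.
    """
    Tx, dx = x.shape
    dy, _, kw = w.shape
    Ty = (Tx - kw) // dw + 1
    y = np.empty((Ty, dy))
    for t in range(Ty):
        window = x[dw * t : dw * t + kw]          # the kw input frames seen at step t
        # y_t^i = b_i + sum_{j,k} w_{i,j,k} * x^j_{dw*(t-1)+k}
        y[t] = b + np.einsum('ijk,kj->i', w, window)
    return y
```

Stacking such layers multiplies their strides: the raw-wave model's overall stride of 320 samples at 16 kHz is what yields one vector of letter scores every 20 ms.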
2.3 INFERRING SEGMENTATION WITH AUTOSEGCRITERION

Most large labeled speech databases provide only a text transcription for each audio file. In a classification framework (and given our acoustic model produces letter predictions), one would need the segmentation of each letter in the transcription to train the model properly. Unfortunately, manually labeling the segmentation of each letter would be tedious. Several solutions have been explored in the speech community to alleviate this issue. HMM/GMM models use an iterative EM procedure: (i) during the Estimation step, the best segmentation is inferred, according to the current model, by maximizing the joint probability of the letter (or any sub-word unit) transcription and input sequence; (ii) during the Maximization step, the model is optimized by minimizing a frame-level criterion, based on the (now fixed) inferred segmentation. This approach is also often used to bootstrap the training of neural network-based acoustic models.

Other alternatives have been explored in the context of hybrid HMM/NN systems, such as the MMI criterion (Bahl et al., 1986), which maximizes the mutual information between the acoustic sequence and word sequences, or the Minimum Bayes Risk (MBR) criterion (Gibson & Hain, 2006).

More recently, standalone neural network architectures have been trained using criteria which jointly infer the segmentation of the transcription while increasing the overall score of the right transcription (Graves et al., 2006; Palaz et al., 2014). The most popular one is certainly the Connectionist Temporal Classification (CTC) criterion, which is at the core of Baidu's Deep Speech architecture (Amodei et al., 2015). CTC assumes that the network outputs probability scores, normalized at the frame level. It considers all possible sequences of letters (or any sub-word units) which can lead to a given transcription. CTC also allows a special "blank" state to be optionally inserted between letters. The rationale behind the blank state is two-fold: (i) modeling "garbage" frames which might occur between letters and (ii) identifying the separation between two identical consecutive letters in a transcription. Figure 2a shows an example of the sequences accepted by CTC for a given transcription. In practice, this graph is unfolded as shown in Figure 2b, over the available frames output by the acoustic model.

Figure 2: The CTC criterion graph. (a) Graph which represents all the acceptable sequences of letters (with the blank state denoted "∅"), for the transcription "cat". (b) Shows the same graph unfolded over 5 frames. There are no transition scores. At each time step, nodes are assigned a conditional probability output by the neural network acoustic model.

We denote G_ctc(θ, T) an unfolded graph over T frames for a given transcription θ, and π = π_1, ..., π_T ∈ G_ctc(θ, T) a path in this graph representing a (valid) sequence of letters for this transcription. At each time step t, each node of the graph is assigned the corresponding letter log-probability (that we denote f_{π_t}(x)) output by the acoustic model. CTC aims at maximizing the "overall" score of paths in G_ctc(θ, T); for that purpose, it minimizes the Forward score:

$$CTC(\theta, T) = -\mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{ctc}(\theta, T)} \sum_{t=1}^{T} f_{\pi_t}(x), \qquad (2)$$

where the "logadd" operation, also often called "log-sum-exp", is defined as logadd(a, b) = log(exp(a) + exp(b)). This overall score can be efficiently computed with the Forward algorithm. To put things in perspective, if one were to replace the logadd(·) by a max(·) in (2) (which can then be efficiently computed by the Viterbi algorithm, the counterpart of the Forward algorithm), one would then maximize the score of the best path, according to the model belief. The logadd(·) can be seen as a smooth version of the max(·): paths with similar scores will be attributed the same weight in the overall score (and hence receive the same gradient), and paths with much larger scores will have much more overall weight than paths with low scores. In practice, using the logadd(·) works much better than the max(·). It is also worth noting that maximizing (2) does not diverge, as the acoustic model is assumed to output normalized scores (log-probabilities) f_i(·).
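As a quick numeric illustration of the "smooth max" behavior of logadd (our own toy snippet, not from the paper):

```python
import numpy as np

def logadd(*scores):
    """logadd(a, b, ...) = log(exp(a) + exp(b) + ...), computed stably."""
    s = np.asarray(scores, dtype=float)
    m = s.max()
    return m + np.log(np.exp(s - m).sum())

print(logadd(1.0, 1.0))   # ~1.69: two equally-scored paths share the weight
print(logadd(10.0, 1.0))  # ~10.0001: the dominant path takes almost all of it
print(max(10.0, 1.0))     # 10.0: the hard max ignores the weaker path entirely
```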
In this paper, we explore an alternative to CTC, with three differences: (i) there are no blank labels, (ii) there are un-normalized scores on the nodes (and possibly un-normalized transition scores on the edges), and (iii) there is global normalization instead of per-frame normalization.

The advantage of (i) is that it produces a much simpler graph (see Figure 3a and Figure 3b). We found that in practice there was no advantage of having a blank class to model the possible "garbage" frames between letters. Modeling letter repetitions (which is also an important quality of the blank label in CTC) can easily be replaced by repetition character labels (we used two extra labels for two and three repetitions). For example, "caterpillar" could be written as "caterpil2ar", where "2" is a label to represent the repetition of the previous letter. Not having blank labels also simplifies the decoder.

With (ii) one can easily plug in an external language model, which would insert transition scores on the edges of the graph. This could be particularly useful in future work, if one wanted to model representations more high-level than letters. In that respect, avoiding normalized transitions is important to alleviate the problem of "label bias" (Bottou, 1991; Lafferty et al., 2001). In this work, we limited ourselves to transition scalars, which are learned together with the acoustic model.

The normalization evoked in (iii) is necessary when using un-normalized scores on nodes or edges; it ensures that incorrect transcriptions will have a low confidence.

In the following, we name our criterion "Auto Segmentation Criterion" (ASG). Considering the same notations as for CTC in (2), an unfolded graph G_asg(θ, T) over T frames for a given transcription θ (as in Figure 3b), as well as a fully connected graph G_full(θ, T) over T frames (representing all possible sequences of letters, as in Figure 3c), ASG aims at minimizing:

$$ASG(\theta, T) = -\mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \big(f_{\pi_t}(x) + g_{\pi_{t-1},\pi_t}(x)\big) + \mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{full}(\theta, T)} \sum_{t=1}^{T} \big(f_{\pi_t}(x) + g_{\pi_{t-1},\pi_t}(x)\big), \qquad (3)$$

where g_{i,j}(·) is a transition score model to jump from label i to label j. The left-hand part of Eq. (3) promotes sequences of letters leading to the right transcription, and the right-hand part demotes all sequences of letters. As for CTC, these two parts can be efficiently computed with the Forward algorithm. Derivatives with respect to f_i(·) and g_{i,j}(·) can be obtained (the maths are a bit tedious) by applying the chain rule through the Forward recursion.

Figure 3: The ASG criterion graph. (a) Graph which represents all the acceptable sequences of letters for the transcription "cat". (b) Shows the same graph unfolded over 5 frames. (c) Shows the corresponding fully connected graph, which describes all possible sequences of letters; this graph is used for normalization purposes. Un-normalized transition scores are possible on the edges. At each time step, nodes are assigned a conditional un-normalized score, output by the neural network acoustic model.
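To illustrate how the two logadd terms of Eq. (3) can be computed with the Forward recursion, here is a minimal NumPy sketch (ours; all names are hypothetical, and the real implementation is batched C code with gradients obtained through the recursion; the initial transition from a start state is omitted for simplicity). It also shows the repetition-grapheme encoding used to build the numerator graph:

```python
import numpy as np

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True)), axis=axis)

def encode(word):
    """Replace letter repetitions with the extra '2'/'3' graphemes,
    e.g. "caterpillar" -> [c, a, t, e, r, p, i, l, 2, a, r] (Section 2.3)."""
    out, i = [], 0
    while i < len(word):
        j = i
        while j < len(word) and word[j] == word[i]:
            j += 1
        out.append(word[i])
        if j - i > 1:
            out.append(str(min(j - i, 3)))
        i = j
    return out

def asg_numerator(emissions, trans, labels):
    """Forward score over G_asg: at each frame, stay on the current label
    (score g_{i,i}) or advance to the next one (score g_{prev,next})."""
    labels = np.asarray(labels)
    T, N = emissions.shape[0], len(labels)
    alpha = np.full(N, -np.inf)
    alpha[0] = emissions[0, labels[0]]               # paths start on the first label
    for t in range(1, T):
        stay = alpha + trans[labels, labels]
        adv = np.full(N, -np.inf)
        adv[1:] = alpha[:-1] + trans[labels[:-1], labels[1:]]
        alpha = np.logaddexp(stay, adv) + emissions[t, labels]
    return alpha[-1]                                 # paths end on the last label

def asg_denominator(emissions, trans):
    """Forward score over the fully connected graph G_full (normalization)."""
    alpha = emissions[0].copy()
    for t in range(1, emissions.shape[0]):
        alpha = logsumexp(alpha[:, None] + trans, axis=0) + emissions[t]
    return logsumexp(alpha, axis=0)

def asg_loss(emissions, trans, labels):
    # emissions: (T, L) un-normalized f_i(x); trans: (L, L) scalars g_{i,j}
    return -asg_numerator(emissions, trans, labels) + asg_denominator(emissions, trans)
```

Here `labels` are integer indices of the encoded graphemes in the 30-symbol dictionary; mapping the output of `encode(...)` to indices is omitted.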
2.4 BEAM-SEARCH DECODER

We wrote our own one-pass decoder, which performs a simple beam search with beam thresholding, histogram pruning and language model smearing (Steinbiss et al., 1994). We kept the decoder as simple as possible (under 1000 lines of C code). We did not implement any sort of model adaptation before decoding, nor any word graph rescoring. Our decoder relies on KenLM (Heafield et al., 2013) for the language modeling part. It also accepts un-normalized acoustic scores (transitions and emissions from the acoustic model) as input. The decoder attempts to maximize the following:

$$\mathcal{L}(\theta) = \mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \big(f_{\pi_t}(x) + g_{\pi_{t-1},\pi_t}(x)\big) + \alpha \log P_{lm}(\theta) + \beta |\theta|, \qquad (4)$$

where P_lm(θ) is the probability of the language model given a transcription θ, and α and β are two hyper-parameters which control the weight of the language model and the word insertion penalty, respectively.

3 EXPERIMENTS

3.1 SETUP

We implemented everything using Torch7 (http://www.torch.ch). The ASG criterion as well as the decoder were implemented in C (and then interfaced into Torch).

We consider as benchmark LibriSpeech, a large speech database freely available for download (Panayotov et al., 2015). LibriSpeech comes with its own train, validation and test sets. Except when specified, we used all the available data (about 1000 h of audio files) for training and validating our models. We use the original 16 kHz sampling rate. The vocabulary L contains 30 graphemes: the standard English alphabet plus the apostrophe, silence, and two special "repetition" graphemes which encode the duplication (once or twice) of the previous letter (see Section 2.3).

The architecture hyper-parameters, as well as the decoder ones, were tuned using the validation set. In the following, we either report letter error rates (LERs) or word error rates (WERs). WERs have been obtained by using our own decoder (see Section 2.4), with the standard 4-gram language model provided with LibriSpeech (http://www.openslr.org/11).

MFCC features are computed with 13 coefficients, a 25 ms sliding window and 10 ms stride. We included first and second order derivatives. Power spectrum features are computed with a 25 ms window, 10 ms stride, and have 257 components. All features are normalized (mean 0, std 1) per input sequence.

Table 1: CTC vs ASG. CTC is Baidu's implementation. ASG is implemented on CPU (C with OpenMP). Timings (in ms) for small sequences (input frames: 150, letter vocabulary size: 28, transcription size: 40) and long sequences (input frames: 700, letter vocabulary size: 28, transcription size: 200) are reported in (a) and (b) respectively. (c) reports performance in LER. Timings include both forward and backward passes. CPU implementations use 8 threads.

(a) small sequences, timings in ms
batch size | CTC CPU | CTC GPU | ASG CPU
1          | 1.9     | 5.9     | 2.5
4          | 2.0     | 6.0     | 2.8
8          | 2.0     | 6.1     | 2.8

(b) long sequences, timings in ms
batch size | CTC CPU | CTC GPU | ASG CPU
1          | 40.9    | 97.9    | 16.0
4          | 41.6    | 99.6    | 17.7
8          | 41.7    | 100.3   | 19.2

(c) LER
           | ASG  | CTC
dev-clean  | 10.4 | 10.7
test-clean | 10.1 | 10.5

3.2 RESULTS

Table 1 reports a comparison between CTC and ASG, in terms of LER and speed. Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. We picked the CTC criterion implementation provided by Baidu (https://github.com/baidu-research/warp-ctc). Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which correspond to our average use case. ASG appears faster on long sequences, even though it is running on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters).

We also investigated the impact of the training set size, as well as the effect of a simple data augmentation procedure, where shifts were introduced in the input frames, as well as stretching. For that purpose, we tuned the size of our architectures (given a particular size of the dataset), to avoid over-fitting. Figure 4a shows the augmentation helps for small training set sizes. However, with enough training data, the effect of data augmentation vanishes, and both types of features appear to perform similarly. Figure 4b reports the WER with respect to the available training data size. We observe that we compare very well against Deep Speech 1 & 2, which were trained with much more data (Hannun et al., 2014; Amodei et al., 2015).

Finally, we report in Table 2 the best results of our system so far, trained on 1000 h of speech, for each type of features. The overall stride of the architectures is 320 (see Figure 1), which produces a label every 20 ms. We found that one could squeeze out about 1% in performance by refining the precision of the output. This is efficiently achieved by shifting the input sequence, and feeding it to the network several times. Results in Table 2 were obtained by a single extra shift of 10 ms.
Both power spectrum and raw features perform slightly worse than MFCCs. One could expect, however, that with enough data (see Figure 4) the gap would vanish.

Figure 4: Valid LER (a) and WER (b) vs. training set size (10 h, 100 h, 200 h, 1000 h). This compares MFCC-based and power spectrum-based (POW) architectures. AUG experiments include data augmentation. In (b) we provide Baidu Deep Speech 1 and 2 numbers on LibriSpeech, as a comparison (Hannun et al., 2014; Amodei et al., 2015).

Table 2: LER/WER of the best sets of hyper-parameters for each feature type.

           | MFCC       | Power spectrum | Raw
           | LER | WER  | LER | WER      | LER  | WER
dev-clean  | 6.9 | -    | 9.3 | -        | 10.3 | -
test-clean | 6.9 | 7.2  | 9.1 | 9.4      | 10.6 | 10.1

4 CONCLUSION

We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). We showed that our AutoSegCriterion can be faster than CTC (Graves et al., 2006), and as accurate (Table 1). Our approach breaks free from HMM/GMM pre-training and force alignment, as well as not being as computationally intensive as RNN-based approaches (Amodei et al., 2015) (on average, one LibriSpeech sentence is processed in less than 60 ms by our ConvNet, and the decoder runs at 8.6x on a single thread).
The approach is evaluated on the LibriSpeech task. The authors claim that their approach is competitive. They compare their modelling variant ASG to CTC, but a comparison of the letter-level approach to available word-level results is missing. Compared to the results obtained in Panayotov et al. 2015, the performance obtained here seems only comparable to word-level GMM/HMM models, but worse than word-level hybrid DNN/HMM models, though Panayotov et al. also applied speaker adaptation, which was not done, as far as I can see. I suggest to add a comparison to Panayotov's results (in addition to mentioning Baidu's results on Librispeech, which are not comparable due to much larger amounts of training data), to allow readers to get a quantitative idea. As pointed out by the authors in the text, Baidu's GPU implementation for CTC is more aimed at larger vocabularies, therefore the comparison to GPU in Tables 1a-c does not seem to be helpful for this work, without further discussing the implementations.
You are using quite a huge analysis window (nearly 2 s). Even though other authors also use windows of up to 0.5-1 s (e.g. MRASTA features), some comments on how you arrived at such a large window, and what advantages you observe for it, would be interesting.
The submission is well written, though more details on the experiences with using non-normalized (transition) scores and beam pruning would be desirable. Table 1 would be more readable if the units of the numbers shown in (a)/(b)/(c) were given within the tables, and not only in the caption.
Prior (partial) publications of this work (your NIPS end-to-end workshop paper) should clearly be mentioned/referenced.
What do you mean by transition "scalars"?
I do not repeat further comments here, which were already given in the pre-review period.
Minor comments:
- Sec. 2.3, end of 2nd sentence: train properly the model -> train the model properly
End of same paragraph: boostrap -> bootstrap (such errors should be avoided by performing an automatic spell check)
- Sec. 2.3: Bayse -> Bayes
- definition of logadd is wrong (see comment) - (applies also for your NIPS end-to-end workshop paper).
- line before Eq. (3): all possible sequence of letters -> all possible sequences of letters (plural)
- Sec. 2.4, first line: threholding -> thresholding (spell check..)
- Figure 4: mention the corpus used here - dev?
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
BkUDvt5gg | ICLR.cc/2017/conference | 2017 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | ["Ronan Collobert", "Christian Puhrsch", "Gabriel Synnaeve"]
| rJgU53uXl | Review | 6: Marginally above acceptance threshold | There have been numerous works on learning from raw waveforms and training letter-based CTC networks for speech recognition; however, there are very few works combining both of them with a purely ConvNet-based model, as is done in this paper. It is interesting to see results on a large-scale corpus such as Librispeech, as used in this paper, though some baseline results from hybrid NN/HMM systems should be provided. To readers, it is unclear from Table 2 alone how close this system is to the state of the art.
The key contribution of this paper may be the end-to-end sequence training criterion for their CTC variant (where the blank symbol is dropped), which may be viewed as sequence training of CTC as in H. Sak, et al., "Learning acoustic frame labeling for speech recognition with recurrent neural networks", 2015. However, instead of first generating the denominator lattices using a frame-level trained CTC model, this paper directly computes the sequence-level loss by considering all the competing hypotheses in the normalizer. Therefore, the model is trained end-to-end. From this perspective, it is closely related to D. Povey's LF-MMI for sequence training of HMMs. As another reviewer has pointed out, references and discussions on that should be provided.
This approach should be more expensive than frame-level training of CTCs; however, from Table 1, the authors' implementation is much faster. Did the systems there use the same sampling rate? You said at the end of 2.2 that the step size for your model is 20 ms. Is it also the same for Baidu's CTC system? Also, have you tried increasing the step size, e.g. to 30 ms or 40 ms, as people have found that it may work (equally) better, while significantly cutting down the computational cost. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyoST_9xl | ICLR.cc/2017/conference | 2017 | DSD: Dense-Sparse-Dense Training for Deep Neural Networks | ["Song Han", "Jeff Pool", "Sharan Narang", "Huizi Mao", "Enhao Gong", "Shijian Tang", "Erich Elsen", "Peter Vajda", "Manohar Paluri", "John Tran", "Bryan Catanzaro", "William J. Dally"] | Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD. | ["Deep learning"] | ABSTRACTModern deep neural networks have a large number of parameters, making themvery hard to train. We propose DSD, a dense-sparse-dense training flow, forregularizing deep neural networks and achieving better optimization performance.In the first D (Dense) step, we train a dense network to learn connection weightsand importance. In the S (Sparse) step, we regularize the network by pruning theunimportant connections with small weights and retraining the network given thesparsity constraint. In the final D (re-Dense) step, we increase the model capacityby removing the sparsity constraint, re-initialize the pruned parameters from zeroand retrain the whole dense network. Experiments show that DSD training canimprove the performance for a wide range of CNNs, RNNs and LSTMs on thetasks of image classification, caption generation and speech recognition. OnImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%.On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over1.7. DSD is easy to use in practice: at training time, DSD incurs only one extrahyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’tchange the network architecture or incur any inference overhead. 
The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.

1 INTRODUCTION

Deep neural networks (DNNs) have shown significant improvements in many application domains, ranging from computer vision (He et al. (2015)) to natural language processing (Luong et al. (2015)) and speech recognition (Amodei et al. (2015)). The abundance of powerful hardware makes it easier to train complicated DNN models with large capacities. The upside of complicated models is that they are very expressive and can capture the highly non-linear relationship between features and output. The downside of such large models is that they are prone to capturing the noise, rather than the intended pattern, in the training dataset. This noise does not generalize to new datasets, leading to over-fitting and a high variance.

Figure 1: Dense-Sparse-Dense Training Flow. The sparse training regularizes the model, and the final dense training restores the pruned weights (red), increasing the model capacity without overfitting.

Algorithm 1: Workflow of DSD training

Initialization: W^(0) with W^(0) ~ N(0, Σ)
Output: W^(t)
----------------------------- Initial Dense Phase -----------------------------
while not converged do
    W^(t) = W^(t-1) - η^(t) ∇f(W^(t-1); x^(t-1));
    t = t + 1;
end
--------------------------------- Sparse Phase --------------------------------
// initialize the mask by sorting and keeping the Top-k weights
S = sort(|W^(t-1)|); λ = S_k; Mask = 1(|W^(t-1)| > λ);
while not converged do
    W^(t) = W^(t-1) - η^(t) ∇f(W^(t-1); x^(t-1));
    W^(t) = W^(t) · Mask;
    t = t + 1;
end
------------------------------- Final Dense Phase ------------------------------
while not converged do
    W^(t) = W^(t-1) - η^(t) ∇f(W^(t-1); x^(t-1));
    t = t + 1;
end
goto Sparse Phase for iterative DSD;

In contrast, simply reducing the model capacity would lead to the other extreme, causing a machine learning system to miss the relevant relationships between features and target outputs, leading to under-fitting and a high bias. Bias and variance are hard to optimize at the same time.

To solve this problem, we propose a dense-sparse-dense training flow (DSD), a novel training strategy that starts from a dense model from conventional training, then regularizes the model with sparsity-constrained optimization, and finally increases the model capacity by restoring and retraining the pruned weights. At testing time, the final model produced by DSD still has the same architecture and dimension as the original dense model, and DSD training doesn't incur any inference overhead. We experimented with DSD training on 7 mainstream CNNs / RNNs / LSTMs and found consistent performance gains over the comparable counterparts for image classification, image captioning and speech recognition.

2 DSD TRAINING FLOW

Our DSD training employs a three-step process: dense, sparse, re-dense. Each step is illustrated in Figure 1 and Algorithm 1. The progression of the weight distribution is plotted in Figure 2.
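A minimal sketch of Algorithm 1 (ours, in NumPy with plain SGD; all names are hypothetical, and the phase lengths and learning rates below stand in for the paper's per-model settings):

```python
import numpy as np

def make_mask(W, sparsity):
    """Binary mask keeping the top (1 - sparsity) fraction of weights by |W|."""
    k = int(W.size * (1.0 - sparsity))            # k = N * (1 - sparsity)
    thresh = np.sort(np.abs(W).ravel())[-k]       # k-th largest magnitude
    return (np.abs(W) >= thresh).astype(W.dtype)

def sgd(W, grad_fn, lr, steps, mask=None):
    """Plain SGD; with a mask, pruned weights are clamped to zero each step."""
    for _ in range(steps):
        W = W - lr * grad_fn(W)
        if mask is not None:
            W = W * mask
    return W

def dsd_train(W0, grad_fn, lr=1e-2, sparsity=0.3, steps=1000):
    W = sgd(W0, grad_fn, lr, steps)                   # initial Dense phase
    mask = make_mask(W, sparsity)                     # prune small weights
    W = sgd(W * mask, grad_fn, lr / 10, steps, mask)  # Sparse phase under the mask
    W = sgd(W, grad_fn, lr / 10, steps)               # re-Dense phase: pruned
    return W                                          # weights restart from zero
```

In the paper, the mask is computed per layer, and the re-dense phase keeps the other hyper-parameters (dropout, weight decay) unchanged while lowering the learning rate to 1/10 of the original.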
Initial Dense Training: The first D step learns the connection weights and importance via normal network training on the dense network. Unlike conventional training, however, the goal of this D step is not only to learn the values of the weights; we are also learning which connections are important. We use a simple heuristic to quantify the importance of the weights using their absolute value.

Figure 2: Weight distribution of a layer of GoogLeNet at different points in DSD training: the original GoogLeNet (a), pruned (b), after retraining with the sparsity constraint (c), ignoring the sparsity constraint and recovering the zero weights (d), and after retraining the dense network (e).

Sparse Training: The S step prunes the low-weight connections and trains a sparse network. We applied the same sparsity to all the layers, thus there's a single hyper-parameter: the sparsity, the percentage of weights that are pruned to 0. For each layer W with N parameters, we sorted the parameters, picked the k-th largest one, λ = S_k, as the threshold, where k = N × (1 − sparsity), and generated a binary mask to remove all the weights smaller than λ. Details are shown in Algorithm 1.

We remove small weights because of the Taylor expansion. The loss function and its Taylor expansion are shown in Equations (1) and (2):

$$\mathrm{Loss} = f(x; W_1, W_2, W_3, \ldots) \qquad (1)$$

$$\Delta \mathrm{Loss} = \frac{\partial \mathrm{Loss}}{\partial W_i} \Delta W_i + \frac{1}{2} \frac{\partial^2 \mathrm{Loss}}{\partial W_i^2} \Delta W_i^2 + \ldots \qquad (2)$$

We want to minimize the increase in loss when conducting a hard thresholding on the weights, so we need to minimize the first and second order terms in Equation 2. Since we are zeroing out parameters, ΔW_i is actually W_i − 0 = W_i. At the local minimum, where ∂Loss/∂W_i ≈ 0 and ∂²Loss/∂W_i² > 0, only the second order term matters. Since the second order gradient ∂²Loss/∂W_i² is expensive to calculate and W_i has a power of 2, we use |W_i| as the metric of pruning. Smaller |W_i| means a smaller increase to the loss function.

Retraining while enforcing the binary mask in each iteration, we converted a dense network into a sparse network that has a known sparsity support and can fully recover or even increase the original accuracy of the initial dense model under the sparsity constraint. The sparsity is the same for all the layers and can be tuned using validation. We find a sparsity value between 25% and 50% generally works well in our experiments.
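A toy numeric check (ours, not from the paper) of the Taylor-expansion argument above: near a minimum of a separable quadratic loss, hard-thresholding a weight increases the loss by about 0.5 · (∂²Loss/∂W_i²) · W_i², so small-magnitude weights are the cheapest to prune:

```python
import numpy as np

h = np.array([1.0, 1.0, 1.0, 1.0])            # per-weight curvature (Hessian diag)
w_star = np.array([0.05, -0.02, 0.8, -0.6])   # hypothetical trained weights

def loss(w):
    # quadratic bowl with minimum at w_star: the first-order term vanishes there
    return 0.5 * np.sum(h * (w - w_star) ** 2)

base = loss(w_star)
for i in range(len(w_star)):
    w = w_star.copy()
    w[i] = 0.0                                # hard-threshold (prune) weight i
    print(i, w_star[i], loss(w) - base)       # increase = 0.5 * h[i] * w_star[i]**2
```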
Final Dense Training: The final D step recovers the pruned connections, making the network dense again. These previously-pruned connections are initialized to zero and the entire network is retrained with 1/10 of the original learning rate (since the sparse network is already at a good local minimum). Hyper-parameters like dropout ratios and weight decay remain unchanged. By restoring the pruned connections, the final D step increases the model capacity of the network and makes it possible to arrive at a better local minimum compared with the sparse model from the S step.

To visualize the DSD training flow, we plotted the progression of the weight distribution in Figure 2. The figure is plotted using GoogLeNet's inception_5b3x3 layer, and we found this progression of the weight distribution to be representative of VGGNet and ResNet as well. The original distribution of weights is centered on zero with tails dropping off quickly. Pruning is based on absolute value, so after pruning the large center region is truncated away. The un-pruned network parameters adjust themselves during the retraining phase, so in (c) the boundary becomes soft and forms a bimodal distribution. In (d), at the beginning of the re-dense training step, all the pruned weights come back and are reinitialized to zero. Finally, in (e), the pruned weights are retrained together with the un-pruned weights, keeping the same learning hyper-parameters (weight decay, learning rate, etc.) for both. Comparing (d) and (e), the distribution of the un-pruned weights remains almost the same, while the pruned weights become distributed further around zero. The overall mean absolute value of the weight distribution is much smaller. This is a good phenomenon: choosing the smallest vector that solves the learning problem suppresses irrelevant components of the weight vector (Moody et al. (1995)).

Table 1: Overview of the neural networks, data sets and performance improvements from DSD.

| Neural Network | Domain  | Dataset   | Type | Baseline | DSD   | Abs. Imp. | Rel. Imp. |
| GoogLeNet      | Vision  | ImageNet  | CNN  | 31.1%¹   | 30.0% | 1.1%      | 3.6%      |
| VGG-16         | Vision  | ImageNet  | CNN  | 31.5%¹   | 27.2% | 4.3%      | 13.7%     |
| ResNet-18      | Vision  | ImageNet  | CNN  | 30.4%¹   | 29.2% | 1.2%      | 4.1%      |
| ResNet-50      | Vision  | ImageNet  | CNN  | 24.0%¹   | 22.9% | 1.1%      | 4.6%      |
| NeuralTalk     | Caption | Flickr-8K | LSTM | 16.8²    | 18.5  | 1.7       | 10.1%     |
| DeepSpeech     | Speech  | WSJ'93    | RNN  | 33.6%³   | 31.6% | 2.0%      | 5.8%      |
| DeepSpeech-2   | Speech  | WSJ'93    | RNN  | 14.5%³   | 13.4% | 1.1%      | 7.4%      |

¹ Top-1 error. VGG/GoogLeNet baselines from the Caffe Model Zoo, ResNet from Facebook.
² BLEU score baseline from the NeuralTalk model zoo; the higher the better.
³ Word error rate: DeepSpeech-2 is trained with a portion of a Baidu internal dataset, with only max decoding, to show the effect of the DNN improvement.

3 RELATED WORK

Dropout and DropConnect: DSD, Dropout (Srivastava et al. (2014)) and DropConnect (Wan et al. (2013)) can all regularize neural networks and prevent over-fitting. The difference is that Dropout and DropConnect use a random sparsity pattern at each SGD iteration, while DSD training learns with a deterministic, data-driven sparsity pattern throughout sparse training; the snippet below makes the contrast concrete. Our experiments on VGG-16, GoogLeNet and NeuralTalk show that DSD training can work together with Dropout.

Distillation: Model distillation (Hinton et al. (2015)) is a method that can transfer the learned knowledge from a large model to a small model, which is more efficient for deployment. It is another method that allows for performance improvements in neural networks without architectural changes.

Model Compression: Both model compression (Han et al. (2016; 2015)) and DSD training use network pruning (LeCun et al. (1990); Hassibi et al. (1993)). The difference is that the focus of DSD training goes beyond maintaining accuracy: DSD is able to further improve the accuracy by considerable margins. Another difference is that DSD training doesn't require aggressive pruning; a modestly pruned network (50%-60% sparse) works well. Model compression, in contrast, requires aggressively pruning the network to achieve high compression rates.

Sparsity Regularization and Hard Thresholding: Truncation-based sparse networks have been theoretically analyzed for learning a broad range of statistical models in high dimensions (Langford et al. (2009); Yuan & Zhang (2013); Wang et al. (2014)). A similar training strategy with iterative hard thresholding and connection restoration was proposed by Jin et al. (2016) during the same time period as, but independently from, DSD. Sparsity-regularized optimization is also heavily applied in compressed sensing (Candes & Romberg (2007)) to find optimal solutions to inverse problems in highly under-determined systems, based on the sparsity assumption.
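As referenced in the Dropout paragraph above, the toy snippet below makes the contrast between the two kinds of sparsity patterns concrete. The 4×4 weight matrix is a made-up stand-in, not anything from the paper.

```python
import torch

torch.manual_seed(0)
W = torch.randn(4, 4)        # stand-in for a trained weight matrix

# Dropout / DropConnect: a fresh *random* mask would be drawn like this at
# every single SGD iteration.
p_drop = 0.5
random_mask = (torch.rand_like(W) > p_drop).float()

# DSD sparse phase: a single *deterministic, data-driven* mask, derived once
# from the trained weight magnitudes (50% sparsity here) and then held fixed
# for the entire sparse phase.
lam = W.abs().flatten().sort().values[W.numel() // 2 - 1]
dsd_mask = (W.abs() > lam).float()

print(random_mask)           # would differ at every training step
print(dsd_mask)              # fixed once the dense model is trained
```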
4 EXPERIMENTS

We applied DSD training to different kinds of neural networks in different domains. We found that DSD training improved the accuracy for all of these networks compared to baseline networks that were not trained with DSD. The neural networks were chosen from CNNs, RNNs and LSTMs; the datasets cover image classification, speech recognition, and caption generation. For the networks trained on ImageNet, we focus on GoogLeNet, VGG and ResNet, which are widely used in research and production. An overview of the networks, datasets and accuracy results is shown in Table 1. For the convolutional networks, we do not prune the first layer during the sparse phase, since it has only 3 channels and is very sensitive to pruning. The sparsity is the same for all the other layers, including convolutional and fully-connected layers. We do not change any other training hyper-parameters, and the initial learning rate at each stage is decayed the same way as in conventional training. The number of epochs is decided by when the loss converges: when the loss no longer decreases, we stop the training.

4.1 GOOGLENET

We experimented with the BVLC GoogLeNet (Szegedy et al. (2015)) model obtained from the Caffe Model Zoo (Jia (2013)). It has 13 million parameters and 57 convolutional layers. We pruned each layer (except the first) to 30% sparsity. Retraining the sparse network gave some improvement in accuracy due to regularization, as shown in Table 2. After the final dense training step, GoogLeNet's error rates were reduced by 1.12% (Top-1) and 0.62% (Top-5) over the baseline.

We compared DSD vs. conventional training for the same number of epochs by dropping the learning rate upon "convergence" and continuing to learn. The result is shown as LLR (lower the learning rate). The number of training epochs for LLR equals that of Sparse + re-Dense for a fair comparison. LLR cannot achieve the same accuracy as DSD.

Table 2: DSD results on GoogLeNet

| GoogLeNet     | Top-1 Err | Top-5 Err | Sparsity | Epochs | LR   |
| Baseline      | 31.14%    | 10.96%    | 0%       | 250    | 1e-2 |
| Sparse        | 30.58%    | 10.58%    | 30%      | 11     | 1e-3 |
| DSD           | 30.02%    | 10.34%    | 0%       | 22     | 1e-4 |
| LLR           | 30.20%    | 10.41%    | 0%       | 33     | 1e-5 |
| Improve (abs) | 1.12%     | 0.62%     | -        | -      | -    |
| Improve (rel) | 3.6%      | 5.7%      | -        | -      | -    |

4.2 VGGNET

We explored DSD training on VGG-16 (Simonyan & Zisserman (2014)), which is widely used in detection, segmentation and transfer learning. The baseline model is obtained from the Caffe Model Zoo (Jia (2013)). Similar to GoogLeNet, each layer is pruned to 30% sparsity. DSD training greatly reduced the error, by 4.31% (Top-1) and 2.65% (Top-5), as detailed in Table 3. DSD also wins over the LLR result by a large margin.

Table 3: DSD results on VGG-16

| VGG-16        | Top-1 Err | Top-5 Err | Sparsity | Epochs | LR   |
| Baseline      | 31.50%    | 11.32%    | 0%       | 74     | 1e-2 |
| Sparse        | 28.19%    | 9.23%     | 30%      | 1.25   | 1e-4 |
| DSD           | 27.19%    | 8.67%     | 0%       | 18     | 1e-5 |
| LLR           | 29.33%    | 10.00%    | 0%       | 20     | 1e-7 |
| Improve (abs) | 4.31%     | 2.65%     | -        | -      | -    |
| Improve (rel) | 13.7%     | 23.4%     | -        | -      | -    |

4.3 RESNET

Deep Residual Networks (ResNets, He et al. (2015)) were the top performer in the 2015 ImageNet challenge. The baseline ResNet-18 and ResNet-50 models are provided by Facebook (2016). We prune to 30% sparsity uniformly, and a single DSD pass for these networks reduced the top-1 error by 1.26% (ResNet-18) and 1.12% (ResNet-50), as shown in Table 4. A second DSD iteration can further improve the accuracy.
As a fair comparison, we continued training the original model, lowering the learning rate by another order of magnitude, but it could not reach the same accuracy as DSD, as shown in the LLR row.

Table 4: DSD results on ResNet-18 and ResNet-50

|               | ResNet-18 Top-1 | ResNet-18 Top-5 | ResNet-50 Top-1 | ResNet-50 Top-5 | Sparsity | Epochs | LR   |
| Baseline      | 30.43%          | 10.76%          | 24.01%          | 7.02%           | 0%       | 90     | 1e-1 |
| Sparse        | 30.15%          | 10.56%          | 23.55%          | 6.88%           | 30%      | 45     | 1e-2 |
| DSD           | 29.17%          | 10.13%          | 22.89%          | 6.47%           | 0%       | 45     | 1e-3 |
| LLR           | 30.04%          | 10.49%          | 23.58%          | 6.84%           | 0%       | 90     | 1e-5 |
| Improve (abs) | 1.26%           | 0.63%           | 1.12%           | 0.55%           | -        | -      | -    |
| Improve (rel) | 4.14%           | 5.86%           | 4.66%           | 7.83%           | -        | -      | -    |

Figure 3: Visualization of DSD training improving the performance of image captioning. The baseline / sparse / DSD captions for the five example images are:
- Baseline: a man and a woman are sitting on a bench. Sparse: a man is sitting on a bench with his hands in the air. DSD: a man is sitting on a bench with his arms folded.
- Baseline: two dogs are playing together in a field. Sparse: two dogs are playing in a field. DSD: two dogs are playing in the grass.
- Baseline: a boy in a red shirt is climbing a rock wall. Sparse: a young girl is jumping off a tree. DSD: a young girl in a pink shirt is swinging on a swing.
- Baseline: a basketball player in a red uniform is playing with a ball. Sparse: a basketball player in a blue uniform is jumping over the goal. DSD: a basketball player in a white uniform is trying to make a shot.
- Baseline: a person in a red jacket is riding a bike through the woods. Sparse: a car drives through a mud puddle. DSD: a car drives through a forest.

Table 5: DSD results on NeuralTalk

| NeuralTalk    | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | Sparsity | Epochs | LR   |
| Baseline      | 57.2   | 38.6   | 25.4   | 16.8   | 0        | 19     | 1e-2 |
| Sparse        | 58.4   | 39.7   | 26.3   | 17.5   | 80%      | 10     | 1e-3 |
| DSD           | 59.2   | 40.7   | 27.4   | 18.5   | 0        | 6      | 1e-4 |
| Improve (abs) | 2.0    | 2.1    | 2.0    | 1.7    | -        | -      | -    |
| Improve (rel) | 3.5%   | 5.4%   | 7.9%   | 10.1%  | -        | -      | -    |

4.4 NEURALTALK

We evaluated DSD training on RNNs and LSTMs beyond CNNs. We applied DSD to NeuralTalk (Karpathy & Fei-Fei (2015)), an LSTM for generating image descriptions. It uses a CNN as an image feature extractor and an LSTM to generate captions. To verify DSD training on LSTMs, we fixed the CNN weights and only trained the LSTM weights. The baseline NeuralTalk model we used is the flickr8k_cnn_lstm_v1.p model downloaded from the NeuralTalk Model Zoo.

In the pruning step, we pruned all layers except Ws, the word embedding lookup table, to 80% sparsity. We used a higher sparsity than in the CNN experiments, based on the validation set of flickr8k. We retrained the remaining sparse network using the same weight decay and batch size as the original paper. The learning rate was tuned on the validation set, as shown in Table 5. Retraining the sparse network improved the BLEU score by [1.2, 1.1, 0.9, 0.7]. After removing the sparsity constraint and retraining the dense network, the final DSD results further improved the BLEU score by [2.0, 2.1, 2.0, 1.7] over the baseline.

The BLEU score is not the sole criterion for measuring an auto-captioning system. We visualized the captions generated by DSD training in Figure 3. In the first image, the baseline model mistakes the girl for a boy and the girl's hair for a rock wall; the sparse model can tell that it's a girl; and the DSD model can further identify the swing. In the second image, DSD training can more accurately tell that the player is in a white uniform and trying to make a shot, rather than the baseline just saying he's in a red uniform and playing with a ball.
The performance of DSD training generalizes beyond these examples; more image caption results generated by DSD training are provided in the Appendix.

4.5 DEEPSPEECH

We explore DSD training on speech recognition tasks using both the Deep Speech 1 (DS1) and Deep Speech 2 (DS2) networks (Hannun et al. (2014); Amodei et al. (2015)).

The DS1 model is a 5-layer network with 1 bidirectional recurrent layer, as described in Table 6. The training dataset used for this model is the Wall Street Journal (WSJ) corpus, which contains 81 hours of speech. The validation set consists of 1 hour of speech. The test sets are from WSJ'92 and WSJ'93 and contain 1 hour of speech combined.

Table 6: Deep Speech 1 Architecture

| Layer ID | 0       | 1       | 2       | 3                       | 4       | 5       |
| Type     | Conv    | FC      | FC      | Bidirectional Recurrent | FC      | CTCCost |
| #Params  | 1814528 | 1049600 | 1049600 | 3146752                 | 1049600 | 29725   |

Table 7: DSD results on Deep Speech 1: Word Error Rate (WER)

| DeepSpeech 1  | WSJ '92 | WSJ '93 | Sparsity | Epochs | LR   |
| Dense Iter 0  | 29.82   | 34.57   | 0%       | 50     | 8e-4 |
| Sparse Iter 1 | 27.90   | 32.99   | 50%      | 50     | 5e-4 |
| Dense Iter 1  | 27.90   | 32.20   | 0%       | 50     | 3e-4 |
| Sparse Iter 2 | 27.45   | 32.99   | 25%      | 50     | 1e-4 |
| Dense Iter 2  | 27.45   | 31.59   | 0%       | 50     | 3e-5 |
| Baseline      | 28.03   | 33.55   | 0%       | 150    | 8e-4 |
| Improve (abs) | 0.58    | 1.96    | -        | -      | -    |
| Improve (rel) | 2.07%   | 5.84%   | -        | -      | -    |

The Word Error Rate (WER) reported on the test sets for the baseline models differs from Amodei et al. (2015) due to two factors. First, in Deep Speech 2 the models were trained using much larger datasets, containing approximately 12,000 hours of multi-speaker speech data. Second, in Deep Speech 2 the WER was evaluated with beam search and a language model; here the network output is obtained using only max decoding, to show the improvement in the neural network's accuracy while factoring out the other components.

The first dense phase was trained for 50 epochs. In the sparse phase, weights are pruned in the fully-connected layers and the bidirectional recurrent layer only (they contain the majority of the weights). Each layer is pruned to the same 50% sparsity and trained for 50 epochs. In the final dense phase, the pruned weights are initialized to zero and trained for another 50 epochs. For a fair comparison with the baseline, we used Nesterov SGD to train, reduced the learning rate with each re-training, and kept all other hyper-parameters unchanged. The learning rate is picked using our validation set.

We first compare the DSD results with a baseline model trained for the same number of epochs. The first 3 rows of Table 7 show the WER when the DSD model is trained for 50+50+50 = 150 epochs, and the 6th row shows the baseline model trained for 150 epochs (the same number of epochs as DSD). DSD training improves the WER by 0.13 (WSJ '92) and 1.35 (WSJ '93) given the same number of epochs as conventional training.

With a second DSD iteration, accuracy can be further improved. In the second DSD iteration, 25% of the weights in each layer are pruned. Similar to the first iteration, the sparse model and the subsequent dense model are each retrained for 50 epochs, and the learning rate is scaled down for each re-training step. The results are shown in Table 7. Compared with the fully trained and converged baseline, the second DSD iteration improves the WER by 0.58 (WSJ '92) and 1.96 (WSJ '93), a relative improvement of 2.07% (WSJ '92) and 5.84% (WSJ '93). So, we can do more DSD iterations (DSDSD...) to further improve the performance, although adding more DSD iterations has diminishing returns. A sketch of this iterative schedule follows.
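As mentioned above, here is a sketch of the iterative schedule, reusing the hypothetical `train_until_converged` and `magnitude_masks` helpers from the Section 2 sketches. The halving of the learning rate between stages is illustrative only; the paper picks each stage's rate on the validation set (see the LR column of Table 7).

```python
import torch

def iterative_dsd(model, train_until_converged, sparsities=(0.5, 0.25), lr=8e-4):
    # Iterative DSD (D-S-D-S-D): later iterations prune less aggressively
    # (50% then 25% here, mirroring the Deep Speech 1 schedule in Table 7)
    # and lower the learning rate at every re-training step.
    train_until_converged(model, lr=lr)              # initial dense phase
    for sparsity in sparsities:
        masks = magnitude_masks(model, sparsity)
        def apply_masks(masks=masks):                # bind this iteration's masks
            with torch.no_grad():
                for name, p in model.named_parameters():
                    p.mul_(masks[name])
        apply_masks()
        lr *= 0.5                                    # illustrative decay only
        train_until_converged(model, lr=lr, post_step_hook=apply_masks)  # sparse
        lr *= 0.5
        train_until_converged(model, lr=lr)          # re-dense
    return model
```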
4.6 DEEPSPEECH 2

To show how DSD works on deeper networks, we evaluated DSD on the Deep Speech 2 (DS2) network, described in Table 8. This network has 7 bidirectional recurrent layers and approximately 67 million parameters, around 8 times larger than the DS1 model. A subset of the internal English training set is used: the training set comprises 2,100 hours of speech and the validation set comprises 3.46 hours of speech. The test sets are from WSJ'92 and WSJ'93, which contain 1 hour of speech combined.

Table 8: Deep Speech 2 Architecture

| Layer ID | 0      | 1      | 2       | 3 - 8   | 9       | 10      |
| Type     | 2DConv | 2DConv | BR      | BR      | FC      | CTCCost |
| #Params  | 19616  | 239168 | 8507840 | 9296320 | 3101120 | 95054   |

Table 9: DSD results on Deep Speech 2 (WER)

| DeepSpeech 2  | WSJ '92 | WSJ '93 | Sparsity | Epochs | LR   |
| Dense Iter 0  | 11.83   | 17.42   | 0%       | 20     | 3e-4 |
| Sparse Iter 1 | 10.65   | 14.84   | 50%      | 20     | 3e-4 |
| Dense Iter 1  | 9.11    | 13.96   | 0%       | 20     | 3e-5 |
| Sparse Iter 2 | 8.94    | 14.02   | 25%      | 20     | 3e-5 |
| Dense Iter 2  | 9.02    | 13.44   | 0%       | 20     | 6e-6 |
| Baseline      | 9.55    | 14.52   | 0%       | 60     | 3e-4 |
| Improve (abs) | 0.53    | 1.08    | -        | -      | -    |
| Improve (rel) | 5.55%   | 7.44%   | -        | -      | -    |

Table 9 shows the results of two iterations of DSD training. For the first sparse re-training, similar to DS1, 50% of the parameters from the bidirectional recurrent layers and fully-connected layers are pruned. The baseline model is trained for 60 epochs to provide a fair comparison with DSD training; it shows no improvement after 40 epochs. With one iteration of DSD training, the WER improves by 0.44 (WSJ '92) and 0.56 (WSJ '93) compared to the fully trained baseline.

Here we show again that DSD can be applied multiple times, or iteratively, for further performance gains. A second iteration of DSD training achieves better accuracy, as shown in Table 9. For the second sparse iteration, 25% of the parameters in the fully-connected and bidirectional recurrent layers are pruned. Overall, DSD training achieves a relative improvement of 5.55% (WSJ '92) and 7.44% (WSJ '93) on the DS2 architecture. These results are in line with the DSD experiments on the smaller DS1 network. We conclude that DSD re-training continues to show accuracy improvements with larger layers and deeper networks.

5 DISCUSSION

Dense-Sparse-Dense training changes the optimization process and improves the optimization performance by significant margins by nudging the network with pruning and re-densifying. We conjecture that the following aspects contribute to the efficacy of DSD training.

Escape Saddle Points: Based on previous studies, one of the most profound difficulties in optimizing deep networks is the proliferation of saddle points (Dauphin et al. (2014)). Advanced optimization methods have been proposed to overcome saddle points. For a similar purpose but with a different approach, the proposed DSD method overcomes saddle points through its pruning and re-densifying framework. Pruning the converged model perturbs the learning dynamics and allows the network to jump away from saddle points, which gives the network a chance to converge at a better local or global minimum. This idea is also similar to Simulated Annealing (Hwang (1988)). While Simulated Annealing randomly jumps with decreasing probability on the search graph, DSD deterministically deviates from the converged solution achieved in the first dense training phase by removing the small weights and enforcing a sparsity support. Similar to Simulated Annealing, which can escape sub-optimal solutions multiple times over the entire optimization process, DSD can also be applied iteratively to achieve further performance gains, as shown in the Deep Speech results.
Significantly Better Minima: After escaping saddle points, DSD achieves better minima. We measured both the training loss and the validation loss: DSD training decreased the loss and error on both the training and validation sets on ImageNet. We have also validated the significance of the improvements, compared with conventional fine-tuning, by t-test, as shown in the appendix.

Regularized and Sparse Training: The sparsity regularization in the sparse training step moves the optimization to a lower-dimensional space where the loss surface is smoother and tends to be more robust to noise. Further numerical experiments verified that both sparse training and the final DSD model reduce the variance and lead to lower error (shown in the appendix).

Robust Re-initialization: Weight initialization plays a big role in deep learning (Mishkin & Matas (2015)). Conventional training has only one chance at initialization. DSD gives the optimization a second (or more) chance during the training process to re-initialize from a more robust sparse training solution. We re-densify the network from the sparse solution, which can be seen as a zero initialization for the pruned weights. Other initialization methods are also worth trying.

Break Symmetry: The permutation symmetry of the hidden units makes the weights symmetrical and thus prone to co-adaptation during training. In DSD, pruning the weights breaks the symmetry of the hidden units associated with the weights, and the weights are asymmetrical in the final dense phase.

6 CONCLUSION

We introduce DSD, a dense-sparse-dense training framework that regularizes neural networks by pruning and then restoring connections. Our method learns which connections are important during the initial dense solution. It then regularizes the network by pruning the unimportant connections and retraining to a sparser and more robust solution with the same or better accuracy. Finally, the pruned connections are restored and the entire network is retrained again. This increases the dimensionality of the parameters, and thus the model capacity, relative to the sparse model.

DSD training achieves superior optimization performance. We highlight our experiments using GoogLeNet, VGGNet and ResNet on ImageNet; NeuralTalk on Flickr-8K; and DeepSpeech-1 & 2 on the WSJ dataset. These show that the accuracy of CNNs, RNNs and LSTMs can be significantly improved with DSD training. Our numerical results and empirical tests show the inadequacy of current training methods, for which we have provided an effective solution. | rJ4EPOk4l | models have the capacity to achieve higher accuracy with better training methods | 8: Top 50% of accepted papers, clear accept | Summary:
The paper proposes a model training strategy to achieve higher accuracy. The issue: train too large a model and you are going to over-fit and your model will capture noise; prune the model or make it too small and it will miss important connections and under-fit. Thus, the proposed method involves several training steps: first they train a dense network, then prune it to make it sparse, then train the sparse network, and finally they add the connections back and train the model as a dense network again (DSD). DSD is a generic method that can be used with CNNs/RNNs/LSTMs. The reasons why models have better accuracy after DSD are: escaping saddle points, sparsity making the model more robust to noise, and symmetry breaking allowing richer representations.
Pro:
The main point this paper wants to show is that a model has the capacity to achieve higher accuracy, because it was shown that it is possible to compress a model without losing accuracy, and lossless compression means that there is significant redundancy in models trained with current training methods. This is an important observation: large models can reach better accuracies as better training schemes are used.
Cons & Questions:
The issue is that the accuracy is only slightly increased (2 or 3%) for most models, and the question is what price is paid for this improvement. Resource and performance concerns arise because training a large model is computationally expensive (hours or even days on high-performance GPUs).
Second question: can I keep adding dense, sparse and dense training iterations to get higher and higher accuracy improvements? Are there limitations to this DSDSD… approach?
| 3: The reviewer is fairly confident that the evaluation is correct |
HyoST_9xl | ICLR.cc/2017/conference | 2017 | DSD: Dense-Sparse-Dense Training for Deep Neural Networks | ["Song Han", "Jeff Pool", "Sharan Narang", "Huizi Mao", "Enhao Gong", "Shijian Tang", "Erich Elsen", "Peter Vajda", "Manohar Paluri", "John Tran", "Bryan Catanzaro", "William J. Dally"] | ["Deep learning"] | S1DWSMU4l | nice new training method for deep networks | 8: Top 50% of accepted papers, clear accept | Training highly non-convex deep neural networks is a very important practical problem, and this paper provides a great exploration of an interesting new idea for more effective training. The empirical evaluation, both in the paper itself and in the authors' comments during the discussion, convincingly demonstrates that the method achieves consistent improvements in accuracy across multiple architectures, tasks and datasets. The algorithm is very simple (alternating between training the full dense network and a sparse version of it), which is actually a positive, since it means the method may get adopted in practice by the research community.
The paper should be revised to incorporate the additional experiments and comments from the discussion, particularly the accuracy comparisons with the same number of epochs. | 3: The reviewer is fairly confident that the evaluation is correct |
HyoST_9xl | ICLR.cc/2017/conference | 2017 | DSD: Dense-Sparse-Dense Training for Deep Neural Networks | ["Song Han", "Jeff Pool", "Sharan Narang", "Huizi Mao", "Enhao Gong", "Shijian Tang", "Erich Elsen", "Peter Vajda", "Manohar Paluri", "John Tran", "Bryan Catanzaro", "William J. Dally"] | Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD. | ["Deep learning"] | ABSTRACTModern deep neural networks have a large number of parameters, making themvery hard to train. We propose DSD, a dense-sparse-dense training flow, forregularizing deep neural networks and achieving better optimization performance.In the first D (Dense) step, we train a dense network to learn connection weightsand importance. In the S (Sparse) step, we regularize the network by pruning theunimportant connections with small weights and retraining the network given thesparsity constraint. In the final D (re-Dense) step, we increase the model capacityby removing the sparsity constraint, re-initialize the pruned parameters from zeroand retrain the whole dense network. Experiments show that DSD training canimprove the performance for a wide range of CNNs, RNNs and LSTMs on thetasks of image classification, caption generation and speech recognition. OnImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%.On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over1.7. DSD is easy to use in practice: at training time, DSD incurs only one extrahyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’tchange the network architecture or incur any inference overhead. 
The consistentand significant performance gain of DSD experiments shows the inadequacy of thecurrent training methods for finding the best local optimum, while DSD effectivelyachieves superior optimization performance for finding a better solution. DSDmodels are available to download at https://songhan.github.io/DSD.1 I NTRODUCTIONDeep neural networks (DNNs) have shown significant improvements in many application domains,ranging from computer vision (He et al. (2015)) to natural language processing (Luong et al. (2015))and speech recognition (Amodei et al. (2015)). The abundance of powerful hardware makes it easierto train complicated DNN models with large capacities. The upside of complicated models is thatthey are very expressive and can capture the highly non-linear relationship between features andoutput. The downside of such large models is that they are prone to capturing the noise, rather thanthe intended pattern, in the training dataset. This noise does not generalize to new datasets, leading toover-fitting and a high variance.Indicates equal contributionyAlso at NVIDIAzNow at Google Brain. eriche@google.com1Published as a conference paper at ICLR 2017Dense Pruning Sparsity Constraint Sparse Increase Model Capacity Re-Dense Dense Figure 1: Dense-Sparse-Dense Training Flow. The sparse training regularizes the model, and the finaldense training restores the pruned weights (red), increasing the model capacity without overfitting.Algorithm 1: Workflow of DSD trainingInitialization: W(0)withW(0)N(0;)Output :W(t).———————————————– Initial Dense Phase ———————————————–while not converged doW(t)=W(t1)(t)rf(W(t1);x(t1));t=t+ 1;end————————————————— Sparse Phase —————————————————-//initialize the mask by sorting and keeping the Top-k weights.S=sort(jW(t1)j);=Ski;Mask =1(jW(t1)j>);while not converged doW(t)=W(t1)(t)rf(W(t1);x(t1));W(t)=W(t)Mask ;t=t+ 1;end————————————————- Final Dense Phase ————————————————–while not converged doW(t)=W(t1)(t)rf(W(t1);x(t1));t=t+ 1;endgoto Sparse Phase for iterative DSD;In contrast, simply reducing the model capacity would lead to the other extreme, causing a machinelearning system to miss the relevant relationships between features and target outputs, leading tounder-fitting and a high bias. Bias and variance are hard to optimize at the same time.To solve this problem, we propose a dense-sparse-dense training flow (DSD), a novel training strategythat starts from a dense model from conventional training, then regularizes the model with sparsity-constrained optimization, and finally increases the model capacity by restoring and retraining thepruned weights. At testing time, the final model produced by DSD still has the same architectureand dimension as the original dense model, and DSD training doesn’t incur any inference overhead.We experimented DSD training on 7 mainstream CNN / RNN / LSTMs and found consistentperformance gains over its comparable counterpart for image classification, image captioning andspeech recognition.2 DSD T RAINING FLOWOur DSD training employs a three-step process: dense, sparse, re-dense. Each step is illustrated inFigure 1 and Algorithm 1. The progression of weight distribution is plotted in Figure 2.Initial Dense Training: The first D step learns the connection weights and importance via normalnetwork training on the dense network. 
Unlike conventional training, however, the goal of this D step is not only to learn the values of the weights; we are also learning which connections are important. We use a simple heuristic to quantify the importance of the weights: their absolute value.

Figure 2: Weight distribution of a layer of GoogLeNet at different points in DSD training: the original GoogLeNet (a), pruned (b), after retraining with the sparsity constraint (c), ignoring the sparsity constraint and recovering the zero weights (d), and after retraining the dense network (e).

Sparse Training: The S step prunes the low-weight connections and trains a sparse network. We applied the same sparsity to all the layers, so there is a single hyper-parameter: the sparsity, the percentage of weights that are pruned to 0. For each layer W with N parameters, we sorted the parameters, picked the k-th largest one, lambda = S_k, as the threshold, where k = N * (1 - sparsity), and generated a binary mask to remove all the weights smaller than lambda. Details are shown in Algorithm 1, and a minimal code sketch of this step is given below.
We remove small weights because of the Taylor expansion. The loss function and its Taylor expansion are shown in Equations (1) and (2). We want to minimize the increase in loss when conducting a hard thresholding on the weights, so we need to minimize the first and second terms in Equation (2). Since we are zeroing out parameters, Delta W_i is actually W_i - 0 = W_i. At the local minimum, where dLoss/dW_i is approximately 0 and d2Loss/dW_i2 > 0, only the second-order term matters. Since the second-order gradient d2Loss/dW_i2 is expensive to calculate and W_i appears squared, we use |W_i| as the pruning metric: smaller |W_i| means a smaller increase in the loss function.

Loss = f(x; W_1, W_2, W_3, ...)   (1)
Delta Loss = (dLoss/dW_i) * Delta W_i + (1/2) * (d2Loss/dW_i2) * Delta W_i^2 + ...   (2)

By retraining while enforcing the binary mask in each iteration, we converted a dense network into a sparse network that has a known sparsity support and can fully recover, or even increase, the original accuracy of the initial dense model under the sparsity constraint. The sparsity is the same for all the layers and can be tuned using validation. We find that a sparsity value between 25% and 50% generally works well in our experiments.
Final Dense Training: The final D step recovers the pruned connections, making the network dense again. These previously-pruned connections are initialized to zero and the entire network is retrained with 1/10 the original learning rate (since the sparse network is already at a good local minimum). Hyper-parameters like dropout ratios and weight decay remain unchanged. By restoring the pruned connections, the final D step increases the model capacity of the network and makes it possible to arrive at a better local minimum compared with the sparse model from the S step.
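As a concrete illustration of the S step described above, the following minimal NumPy sketch shows magnitude-based mask creation and mask-enforced retraining. The helper names (make_prune_mask, sparse_sgd_step) and the toy quadratic loss are our own illustrative stand-ins, not code from the paper, whose experiments use full CNN/RNN training pipelines.

import numpy as np

def make_prune_mask(W, sparsity):
    """Binary mask that zeroes out the smallest-|W| weights.
    S-step initialization of Algorithm 1: sort |W|, take the threshold
    lambda as the k-th largest value with k = N * (1 - sparsity)."""
    k = int(W.size * (1.0 - sparsity))          # number of weights to keep
    lam = np.sort(np.abs(W), axis=None)[-k]     # pruning threshold
    return (np.abs(W) >= lam).astype(W.dtype)

def sparse_sgd_step(W, grad_fn, mask, lr):
    """One masked SGD update: take a gradient step, then re-apply the mask."""
    return (W - lr * grad_fn(W)) * mask

# Toy usage on a quadratic loss standing in for a real network loss.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(64, 64))
grad_fn = lambda W: 2.0 * W                     # gradient of ||W||^2

mask = make_prune_mask(W, sparsity=0.30)        # 30% sparsity, as in Section 4
for _ in range(100):                            # S step: sparse retraining
    W = sparse_sgd_step(W, grad_fn, mask, lr=1e-2)
for _ in range(100):                            # re-D step: mask dropped,
    W = W - 1e-3 * grad_fn(W)                   # 1/10 the learning rate

Note that the pruned weights enter the re-dense phase at exactly zero, matching the re-initialization described above.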
To visualize the DSD training flow, we plotted the progression of the weight distribution in Figure 2. The figure is plotted using GoogLeNet's inception_5b 3x3 layer, and we found this progression of the weight distribution to be very representative of VGGNet and ResNet as well. The original distribution of weights is centered on zero with tails dropping off quickly. Pruning is based on absolute value, so after pruning the large center region is truncated away. The un-pruned network parameters adjust themselves during the retraining phase, so in (c) the boundary becomes soft and forms a bimodal distribution. In (d), at the beginning of the re-dense training step, all the pruned weights come back and are reinitialized to zero. Finally, in (e), the pruned weights are retrained together with the un-pruned weights. In this step, we kept the same learning hyper-parameters (weight decay, learning rate, etc.) for pruned and un-pruned weights. Comparing panels (d) and (e), the un-pruned weights' distribution almost remained the same, while the pruned weights became distributed further around zero. The overall mean absolute value of the weight distribution is much smaller. This is a good phenomenon: choosing the smallest vector that solves the learning problem suppresses irrelevant components of the weight vector (Moody et al. (1995)).

Table 1: Overview of the neural networks, data sets and performance improvements from DSD.
Neural Network / Domain / Dataset / Type / Baseline / DSD / Abs. Imp. / Rel. Imp.
GoogLeNet Vision ImageNet CNN 31.1%(1) 30.0% 1.1% 3.6%
VGG-16 Vision ImageNet CNN 31.5%(1) 27.2% 4.3% 13.7%
ResNet-18 Vision ImageNet CNN 30.4%(1) 29.2% 1.2% 4.1%
ResNet-50 Vision ImageNet CNN 24.0%(1) 22.9% 1.1% 4.6%
NeuralTalk Caption Flickr-8K LSTM 16.8(2) 18.5 1.7 10.1%
DeepSpeech Speech WSJ'93 RNN 33.6%(3) 31.6% 2.0% 5.8%
DeepSpeech-2 Speech WSJ'93 RNN 14.5%(3) 13.4% 1.1% 7.4%
(1) Top-1 error. VGG/GoogLeNet baselines from the Caffe Model Zoo, ResNet from Facebook.
(2) BLEU score baseline from the NeuralTalk model zoo; the higher the better.
(3) Word error rate. DeepSpeech2 is trained with a portion of the Baidu internal dataset with only max decoding, to show the effect of the DNN improvement.

3 RELATED WORK
Dropout and DropConnect: DSD, Dropout (Srivastava et al. (2014)) and DropConnect (Wan et al. (2013)) can all regularize neural networks and prevent over-fitting. The difference is that Dropout and DropConnect use a random sparsity pattern at each SGD iteration, while DSD training learns with a deterministic, data-driven sparsity pattern throughout sparse training. Our experiments on VGG16, GoogLeNet and NeuralTalk show that DSD training can work together with Dropout.
Distillation: Model distillation (Hinton et al. (2015)) is a method that can transfer the learned knowledge from a large model to a small model, which is more efficient for deployment. This is another method that allows for performance improvements in neural networks without architectural changes.
Model Compression: Both model compression (Han et al. (2016; 2015)) and DSD training use network pruning (LeCun et al. (1990); Hassibi et al. (1993)). The difference is that the focus of DSD training goes beyond maintaining the accuracy: DSD is able to further improve the accuracy by considerable margins. Another difference is that DSD training doesn't require aggressive pruning; a modestly pruned network (50%-60% sparse) can work well, whereas model compression requires aggressively pruning the network to achieve high compression rates.
Sparsity Regularization and Hard Thresholding: The truncation-based sparse network has been theoretically analyzed for learning a broad range of statistical models in high dimensions (Langford et al. (2009); Yuan & Zhang (2013); Wang et al. (2014)). A similar training strategy with iterative hard thresholding and connection restoration is proposed by Jin et al. (2016) during the same time period as, but independently from, DSD. Sparsity-regularized optimization is heavily applied in compressed sensing (Candes & Romberg (2007)) to find optimal solutions to inverse problems in highly under-determined systems based on the sparsity assumption.
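Before turning to the experiments, the sketch below strings the three phases of Algorithm 1, plus its iterative extension, into one driver loop, scaling the learning rate down by 10x at each phase change in the spirit of the schedules reported in Section 4. This is an illustrative skeleton under those assumptions; train_phase and prune_mask are hypothetical stand-ins for a real training pipeline.

import numpy as np

def prune_mask(W, sparsity):
    """Keep the largest-|W| fraction (1 - sparsity) of the weights."""
    k = int(W.size * (1.0 - sparsity))
    lam = np.sort(np.abs(W), axis=None)[-k]
    return (np.abs(W) >= lam).astype(W.dtype)

def train_phase(W, grad_fn, lr, steps, mask=None):
    """SGD for one DSD phase; if a mask is given, enforce it every step."""
    for _ in range(steps):
        W = W - lr * grad_fn(W)
        if mask is not None:
            W = W * mask
    return W

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(64, 64))
grad_fn = lambda W: 2.0 * W                    # toy gradient in place of backprop

lr = 1e-2
W = train_phase(W, grad_fn, lr, steps=100)     # initial Dense phase
for sparsity in (0.50, 0.25):                  # two DSD iterations (cf. Sec. 4.5)
    lr *= 0.1
    W = train_phase(W, grad_fn, lr, steps=100, mask=prune_mask(W, sparsity))
    lr *= 0.1
    W = train_phase(W, grad_fn, lr, steps=100) # re-Dense phase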
4 EXPERIMENTS
We applied DSD training to different kinds of neural networks in different domains. We found that DSD training improved the accuracy for all of these networks compared to the baseline networks that were not trained with DSD. The neural networks are chosen from CNNs, RNNs and LSTMs; the datasets cover image classification, speech recognition, and caption generation. For networks trained on ImageNet, we focus on GoogLeNet, VGG and ResNet, which are widely used in research and production. An overview of the networks, datasets and accuracy results is shown in Table 1. For the convolutional networks, we do not prune the first layer during the sparse phase, since it has only 3 channels and is very sensitive to pruning. The sparsity is the same for all the other layers, including convolutional and fully-connected layers. We do not change any other training hyper-parameters, and the initial learning rate at each stage is decayed the same as in conventional training. The number of epochs is decided by when the loss converges; when the loss no longer decreases, we stop the training.

4.1 GOOGLENET
We experimented with the BVLC GoogLeNet (Szegedy et al. (2015)) model obtained from the Caffe Model Zoo (Jia (2013)). It has 13 million parameters and 57 convolutional layers. We pruned each layer (except the first) to 30% sparsity. Retraining the sparse network gave some improvement in accuracy due to regularization, as shown in Table 2. After the final dense training step, GoogLeNet's error rates were reduced by 1.12% (Top-1) and 0.62% (Top-5) over the baseline.
We compared DSD vs. conventional training for the same number of epochs by dropping the learning rate upon "convergence" and continuing to learn. The result is shown as LLR (lower the learning rate). The number of training epochs for LLR is equal to that of Sparse + re-Dense, as a fair comparison. LLR cannot achieve the same accuracy as DSD.

Table 2: DSD results on GoogLeNet
GoogLeNet Top-1 Err Top-5 Err Sparsity Epochs LR
Baseline 31.14% 10.96% 0% 250 1e-2
Sparse 30.58% 10.58% 30% 11 1e-3
DSD 30.02% 10.34% 0% 22 1e-4
LLR 30.20% 10.41% 0% 33 1e-5
Improve (abs) 1.12% 0.62% - - -
Improve (rel) 3.6% 5.7% - - -

4.2 VGGNET
We explored DSD training on VGG-16 (Simonyan & Zisserman (2014)), which is widely used in detection, segmentation and transfer learning. The baseline model is obtained from the Caffe Model Zoo (Jia (2013)). Similar to GoogLeNet, each layer is pruned to 30% sparsity. DSD training greatly reduced the error, by 4.31% (Top-1) and 2.65% (Top-5), as detailed in Table 3. DSD also wins over the LLR result by a large margin.

Table 3: DSD results on VGG-16
VGG-16 Top-1 Err Top-5 Err Sparsity Epochs LR
Baseline 31.50% 11.32% 0% 74 1e-2
Sparse 28.19% 9.23% 30% 1.25 1e-4
DSD 27.19% 8.67% 0% 18 1e-5
LLR 29.33% 10.00% 0% 20 1e-7
Improve (abs) 4.31% 2.65% - - -
Improve (rel) 13.7% 23.4% - - -

4.3 RESNET
Deep Residual Networks (ResNets, He et al. (2015)) were the top performer in the 2015 ImageNet challenge. The baseline ResNet-18 and ResNet-50 models are provided by Facebook (2016). We prune to 30% sparsity uniformly, and a single DSD pass for these networks reduced top-1 error by 1.26% (ResNet-18) and 1.12% (ResNet-50), as shown in Table 4. A second DSD iteration can further improve the accuracy.
As a fair comparison, we continued training the original model with the learning rate lowered by another order of magnitude, but it could not reach the same accuracy as DSD, as shown in the LLR row.

Table 4: DSD results on ResNet-18 and ResNet-50
(Columns: ResNet-18 Top-1 Err, ResNet-18 Top-5 Err, ResNet-50 Top-1 Err, ResNet-50 Top-5 Err, Sparsity, Epochs, LR)
Baseline 30.43% 10.76% 24.01% 7.02% 0% 90 1e-1
Sparse 30.15% 10.56% 23.55% 6.88% 30% 45 1e-2
DSD 29.17% 10.13% 22.89% 6.47% 0% 45 1e-3
LLR 30.04% 10.49% 23.58% 6.84% 0% 90 1e-5
Improve (abs) 1.26% 0.63% 1.12% 0.55% - - -
Improve (rel) 4.14% 5.86% 4.66% 7.83% - - -

Figure 3: Visualization of DSD training improving the performance of image captioning. Captions generated for five test images:
- Baseline: a man and a woman are sitting on a bench. Sparse: a man is sitting on a bench with his hands in the air. DSD: a man is sitting on a bench with his arms folded.
- Baseline: two dogs are playing together in a field. Sparse: two dogs are playing in a field. DSD: two dogs are playing in the grass.
- Baseline: a boy in a red shirt is climbing a rock wall. Sparse: a young girl is jumping off a tree. DSD: a young girl in a pink shirt is swinging on a swing.
- Baseline: a basketball player in a red uniform is playing with a ball. Sparse: a basketball player in a blue uniform is jumping over the goal. DSD: a basketball player in a white uniform is trying to make a shot.
- Baseline: a person in a red jacket is riding a bike through the woods. Sparse: a car drives through a mud puddle. DSD: a car drives through a forest.

Table 5: DSD results on NeuralTalk
NeuralTalk BLEU-1 BLEU-2 BLEU-3 BLEU-4 Sparsity Epochs LR
Baseline 57.2 38.6 25.4 16.8 0 19 1e-2
Sparse 58.4 39.7 26.3 17.5 80% 10 1e-3
DSD 59.2 40.7 27.4 18.5 0 6 1e-4
Improve (abs) 2.0 2.1 2.0 1.7 - - -
Improve (rel) 3.5% 5.4% 7.9% 10.1% - - -

4.4 NEURALTALK
We evaluated DSD training on RNNs and LSTMs beyond CNNs. We applied DSD to NeuralTalk (Karpathy & Fei-Fei (2015)), an LSTM for generating image descriptions. It uses a CNN as an image feature extractor and an LSTM to generate captions. To verify DSD training on LSTMs, we fixed the CNN weights and only trained the LSTM weights. The baseline NeuralTalk model we used is flickr8k_cnn_lstm_v1.p, downloaded from the NeuralTalk Model Zoo.
In the pruning step, we pruned all layers except Ws, the word embedding lookup table, to 80% sparsity. We used a higher sparsity than in the CNN experiments, based on the validation set of flickr8k. We retrained the remaining sparse network using the same weight decay and batch size as the original paper. The learning rate is tuned based on the validation set, as shown in Table 5. Retraining the sparse network improved the BLEU scores by [1.2, 1.1, 0.9, 0.7]. After getting rid of the sparsity constraint and retraining the dense network, the final DSD results further improved the BLEU scores by [2.0, 2.1, 2.0, 1.7] over the baseline.
The BLEU score is not the sole criterion for measuring an auto-captioning system. We visualize the captions generated by DSD training in Figure 3. In the first image, the baseline model mistakes the girl for a boy and the girl's hair for a rock wall; the sparse model can tell that it's a girl; and the DSD model can further identify the swing. In the second image, DSD training can more accurately tell that the player is in a white uniform and trying to make a shot, rather than the baseline just saying he's in a red uniform and playing with a ball.
The performance of DSD training generalizes beyond these examples; more image caption results generated by DSD training are provided in the Appendix.

4.5 DEEPSPEECH
We explore DSD training on speech recognition tasks using both the Deep Speech 1 (DS1) and Deep Speech 2 (DS2) networks (Hannun et al. (2014); Amodei et al. (2015)).
The DS1 model is a 5-layer network with 1 Bidirectional Recurrent layer, as described in Table 6. The training dataset used for this model is the Wall Street Journal (WSJ) corpus, which contains 81 hours of speech. The validation set consists of 1 hour of speech. The test sets are from WSJ'92 and WSJ'93 and contain 1 hour of speech combined.

Table 6: Deep Speech 1 Architecture
Layer ID: 0 1 2 3 4 5
Type: Conv FC FC Bidirectional Recurrent FC CTC Cost
#Params: 1814528 1049600 1049600 3146752 1049600 29725

Table 7: DSD results on Deep Speech 1: Word Error Rate (WER)
DeepSpeech 1 WSJ '92 WSJ '93 Sparsity Epochs LR
Dense Iter 0 29.82 34.57 0% 50 8e-4
Sparse Iter 1 27.90 32.99 50% 50 5e-4
Dense Iter 1 27.90 32.20 0% 50 3e-4
Sparse Iter 2 27.45 32.99 25% 50 1e-4
Dense Iter 2 27.45 31.59 0% 50 3e-5
Baseline 28.03 33.55 0% 150 8e-4
Improve (abs) 0.58 1.96 - - -
Improve (rel) 2.07% 5.84% - - -

The Word Error Rate (WER) reported on the test sets for the baseline models differs from Amodei et al. (2015) due to two factors. First, in DeepSpeech2 the models were trained using much larger datasets, containing approximately 12,000 hours of multi-speaker speech data. Second, in DeepSpeech2 the WER was evaluated with beam search and a language model; here the network output is obtained using only max decoding, in order to isolate the improvement in the neural network itself.
The first dense phase was trained for 50 epochs. In the sparse phase, weights are pruned in the Fully Connected layers and the Bidirectional Recurrent layer only (they contain the majority of the weights). Each layer is pruned to achieve the same 50% sparsity and trained for 50 epochs. In the final dense phase, the pruned weights are initialized to zero and trained for another 50 epochs. For a fair comparison with the baseline, we used Nesterov SGD to train, reduced the learning rate with each retraining, and kept all other hyper-parameters unchanged. The learning rate is picked using our validation set.
We first wanted to compare the DSD results with a baseline model trained for the same number of epochs. The first 3 rows of Table 7 show the WER when the DSD model is trained for 50+50+50 = 150 epochs, and the 6th row shows the baseline model trained for 150 epochs (the same number of epochs as DSD). DSD training improves WER by 0.13 (WSJ '92) and 1.35 (WSJ '93) given the same number of epochs as conventional training.
Given a second DSD iteration, accuracy can be further improved. In the second DSD iteration, each layer has 25% of its weights pruned away. Similar to the first iteration, the sparse model and the subsequent dense model are each retrained for a further 50 epochs. The learning rate is scaled down for each retraining step. The results are shown in Table 7. Compared with the fully trained and converged baseline, the second DSD iteration improves WER by 0.58 (WSJ '92) and 1.96 (WSJ '93), a relative improvement of 2.07% (WSJ '92) and 5.84% (WSJ '93). Thus, more DSD iterations (DSDSD...) can further improve performance, though with diminishing returns.

4.6 DEEPSPEECH 2
To show how DSD works on deeper networks, we evaluated DSD on the Deep Speech 2 (DS2) network, described in Table 8.
This network has 7 Bidirectional Recurrent layers with approximately 67 million parameters, around 8 times larger than the DS1 model. A subset of the internal English training set is used: the training set is comprised of 2,100 hours of speech, and the validation set is comprised of 3.46 hours of speech. The test sets are from WSJ'92 and WSJ'93, which contain 1 hour of speech combined.

Table 8: Deep Speech 2 Architecture
Layer ID: 0 1 2 3-8 9 10
Type: 2DConv 2DConv BR BR FC CTC Cost
#Params: 19616 239168 8507840 9296320 3101120 95054

Table 9: DSD results on Deep Speech 2 (WER)
DeepSpeech 2 WSJ '92 WSJ '93 Sparsity Epochs LR
Dense Iter 0 11.83 17.42 0% 20 3e-4
Sparse Iter 1 10.65 14.84 50% 20 3e-4
Dense Iter 1 9.11 13.96 0% 20 3e-5
Sparse Iter 2 8.94 14.02 25% 20 3e-5
Dense Iter 2 9.02 13.44 0% 20 6e-6
Baseline 9.55 14.52 0% 60 3e-4
Improve (abs) 0.53 1.08 - - -
Improve (rel) 5.55% 7.44% - - -

Table 9 shows the results of the two iterations of DSD training. For the first sparse retraining, similar to DS1, 50% of the parameters of the Bidirectional Recurrent layers and Fully Connected layers are pruned. The baseline model is trained for 60 epochs to provide a fair comparison with DSD training; it shows no improvement after 40 epochs. With one iteration of DSD training, WER improves by 0.44 (WSJ '92) and 0.56 (WSJ '93) compared to the fully trained baseline.
Here we show again that DSD can be applied multiple times, i.e., iteratively, for further performance gains. A second iteration of DSD training achieves better accuracy, as shown in Table 9. For the second sparse iteration, 25% of the parameters in the Fully Connected layers and Bidirectional Recurrent layers are pruned. Overall, DSD training achieves a relative improvement of 5.55% (WSJ '92) and 7.44% (WSJ '93) on the DS2 architecture. These results are in line with the DSD experiments on the smaller DS1 network. We conclude that DSD retraining continues to show accuracy improvements with larger layers and deeper networks.

5 DISCUSSION
Dense-Sparse-Dense training changes the optimization process and improves the optimization performance by significant margins by nudging the network with pruning and re-densifying. We conjecture that the following aspects contribute to the efficacy of DSD training.
Escape Saddle Points: Based on previous studies, one of the most profound difficulties in optimizing deep networks is the proliferation of saddle points (Dauphin et al. (2014)). Advanced optimization methods have been proposed to overcome saddle points. For a similar purpose but with a different approach, the proposed DSD method overcomes saddle points through its pruning and re-densifying framework. Pruning the converged model perturbs the learning dynamics and allows the network to jump away from saddle points, which gives it a chance to converge at a better local or global minimum. This idea is also similar to Simulated Annealing (Hwang (1988)). While Simulated Annealing randomly jumps with decreasing probability on the search graph, DSD deterministically deviates from the converged solution achieved in the first dense training phase by removing the small weights and enforcing a sparsity support. Like Simulated Annealing, which can escape sub-optimal solutions multiple times over the course of optimization, DSD can also be applied iteratively to achieve further performance gains, as shown in the Deep Speech results.
Significantly Better Minima: After escaping saddle points, DSD achieved better minima.
We measured both the training loss and the validation loss; DSD training decreased the loss and error on both the training and the validation sets on ImageNet. We have also validated the significance of the improvements over conventional fine-tuning with a t-test, shown in the appendix.
Regularized and Sparse Training: The sparsity regularization in the sparse training step moves the optimization to a lower-dimensional space, where the loss surface is smoother and tends to be more robust to noise. Further numerical experiments verified that both sparse training and the final DSD reduce the variance and lead to lower error (shown in the appendix).
Robust re-initialization: Weight initialization plays a big role in deep learning (Mishkin & Matas (2015)). Conventional training has only one chance of initialization. DSD gives the optimization a second (or more) chance during the training process to re-initialize from a more robust sparse training solution. We re-densify the network from the sparse solution, which can be seen as a zero initialization for the pruned weights. Other initialization methods are also worth trying.
Break Symmetry: The permutation symmetry of the hidden units makes the weights symmetrical, and thus prone to co-adaptation during training. In DSD, pruning the weights breaks the symmetry of the hidden units associated with the weights, and the weights are asymmetrical in the final dense phase.

6 CONCLUSION
We introduce DSD, a dense-sparse-dense training framework that regularizes neural networks by pruning and then restoring connections. Our method learns which connections are important during the initial dense training. Then it regularizes the network by pruning the unimportant connections and retraining to a sparser and more robust solution with the same or better accuracy. Finally, the pruned connections are restored and the entire network is retrained again. This increases the dimensionality of the parameters, and thus the model capacity, relative to the sparse model.
DSD training achieves superior optimization performance. We highlight our experiments using GoogLeNet, VGGNet, and ResNet on ImageNet; NeuralTalk on Flickr-8K; and DeepSpeech-1&2 on the WSJ dataset. These show that the accuracy of CNNs, RNNs, and LSTMs can be significantly improved with DSD training. Our numerical results and empirical tests show the inadequacy of current training methods, for which we have provided an effective solution. | HydNSx9Xg | Interesting training strategy for deep networks | 5: Marginally below acceptance threshold | This paper presents a training strategy for deep networks. First, the network is trained in a standard fashion. Second, small-magnitude weights are clamped to 0; the rest of the weights continue to be trained. Finally, all the weights are again jointly trained. Experiments on a variety of image, text, and speech datasets demonstrate that the approach can obtain high-quality results.
The proposed idea is novel and interesting. In a sense it is close to Dropout, though as noted in the paper the deterministic weight clamping method is different.
The main advantage of the proposed method is its simplicity. Three hyper-parameters are needed: the number of weights to clamp to 0, and the numbers of epochs of training used in the first dense phase and in the sparse phase. Given these, it can be plugged into training a range of networks, as shown in the experiments.
The concern I have is regarding the current empirical evaluation. As noted in the question phase, it seems the baseline methods are not trained for as many epochs as the proposed method. Standard tricks, such as dropping the learning rate upon "convergence" and continuing to learn, can be employed. The response seems to indicate that these approaches can be effective. I think a more thorough empirical analysis of performance over epochs, learning rates, etc. would strengthen the paper. An exploration regarding the sparsity hyper-parameter would also be interesting.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJJ3YU5ge | ICLR.cc/2017/conference | 2017 | Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce | ["Tom Zahavy", "Alessandro Magnani", "Abhinandan Krishnan", "Shie Mannor"] | Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification using text and image inputs. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy % over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc. | ["Multi-modal learning", "Deep learning"] | ABSTRACT
Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification using text and image inputs. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy % over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc.

1 INTRODUCTION
Product classification is a key issue in e-commerce domains. A product is typically represented by metadata such as its title, image, color, weight and so on, and most of these are assigned manually by the seller. Once a product is uploaded to an e-commerce website, it is typically placed in multiple categories. Categorizing products helps e-commerce websites to provide customers a better shopping experience, for example by efficiently searching the products catalog or by developing recommendation systems. A few examples of categories are internal taxonomies (for business needs), public taxonomies (such as groceries and office equipment) and the product's shelf (a group of products that are presented together on an e-commerce web page). These categories vary with time in order to optimize search efficiency and to account for special events such as holidays and sports events. In order to address these needs, e-commerce websites typically hire editors and use crowdsourcing platforms to classify products.
However, due to the high volume of new products uploaded daily and the dynamic nature of the categories, machine learning solutions for product classification are very appealing as a means to reduce time and economic costs. Thus, precisely categorizing items emerges as a significant issue in e-commerce domains.
A shelf is a group of products presented together on an e-commerce website page, and usually contains products with a given theme/category (e.g., women's boots, folding tables). Product-to-shelf classification is a challenging problem due to data size, category skewness, and noisy metadata and labels. In particular, it presents three fundamental challenges for machine learning algorithms. First, it is typically a multi-class problem with thousands of classes. Second, a product may belong to multiple shelves, making it a multi-label problem. And last, a product has both an image and a text input, making it a multi-modal problem.
Product classification is typically addressed as a text classification problem because most metadata of items are represented as textual features (Pyo et al., 2010). Text classification is a classic topic in natural language processing, in which one needs to assign predefined categories to text inputs.

Figure 1: Predicting shelves from product metadata obtained from Walmart.com. Left: products that have both an image and a title that contain useful information for predicting the product's shelf. Center, top: the boots title gives specific information about the boots but does not mention that the product is a boot, making it harder to predict the shelf. Center, bottom: the baby toddler shirt's title only refers to the text on the toddler shirt and does not mention that it is a product for babies. Right, top: the umbrella image contains information about its color but it is hard to understand that the image is referring to an umbrella. Right, bottom: the lips pencil image looks like a regular pencil, making it hard to predict that it belongs to the moisturizers shelf.

Standard methods follow a classical two-stage scheme of extraction of (handcrafted) features, followed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF. On the other hand, deep neural networks use generic priors instead of specific domain knowledge (Bengio et al., 2013) and have been shown to give competitive results on text classification tasks (Zhang et al., 2015). In particular, convolutional neural networks (CNNs) (Kim, 2014; Zhang et al., 2015; Conneau et al., 2016) and recurrent NNs (Lai et al., 2015; Pyo et al., 2010; Xiao & Cho, 2016) can efficiently capture the sequentiality of the text. These methods are typically applied directly to distributed embeddings of words (Kim, 2014; Lai et al., 2015; Pyo et al., 2010) or characters (Zhang et al., 2015; Conneau et al., 2016; Xiao & Cho, 2016), without any knowledge of the syntactic or semantic structures of the language. However, all of these architectures were only applied to problems with a small number of labels (around 20), while e-commerce shelf classification problems typically have thousands of labels, with multiple labels per product.
In image classification, CNNs are widely considered the best models, and achieve state-of-the-art results on the ImageNet Large-Scale Visual Recognition Challenge (Russakovsky et al., 2015; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015).
However, as good as they are, the classification accuracy of machine learning systems is often limited in problems with many classes of object categories. One remedy is to leverage data from other sources, such as text data. However, studies on multi-modal deep learning for large-scale item categorization are still rare, to the best of our knowledge, in particular in settings where there is a significant difference in discriminative power between the two types of signals.
In this work, we propose a multi-modal deep neural network model for product classification. Our design principle is to leverage the specific prior for each data type by using the current state-of-the-art classifiers from the image and text domains. The final architecture has 3 main components (Figure 2, right): a text CNN (Kim, 2014), an image CNN (Simonyan & Zisserman, 2014) and a policy network that learns to choose between them. We collected a large-scale dataset of 1.2 million products from the Walmart.com website. Each product has a title and an image and needs to be classified to a shelf (label), with 2890 possible shelves. Examples from this dataset can be seen in Figure 1 and are also available online at the Walmart.com website. For most of the products, both the image and the title of each product contain relevant information for customers. However, it is interesting to observe that for some of the products, both input types may not be informative for shelf prediction (Figure 1). This observation motivates our work and raises interesting questions: which input type is more useful for product classification? Is it possible to forge the inputs into a better architecture?
In our experiments, we show that the text CNN outperforms the image one. However, for a relatively large number of products (about 8%), the image CNN is correct while the text CNN is wrong, indicating a potential gain from using a multi-modal architecture. We also show that the policy is able to choose between the two models and give a performance improvement over both state-of-the-art networks. To the best of our knowledge, this is the first work that demonstrates a performance improvement on top-1 classification accuracy by using images and text on a large-scale classification problem. In particular, our main contributions are:
- We demonstrate that the text classification CNN (Kim, 2014) outperforms the VGG network (Simonyan & Zisserman, 2014) on a real-world large-scale product-to-shelf classification problem.
- We analyze the errors made by the different networks and show the potential gain of multi-modality.
- We propose a novel decision-level fusion policy that learns to choose between the text and image networks and improves over both.

2 MULTI-MODALITY
Over the years, a large body of research has been devoted to improving classification using ensembles of classifiers (Kittler et al., 1998; Hansen & Salamon, 1990). Inspired by their success, these methods have also been used in multi-modal settings (e.g., Guillaumin et al. (2010); Poria et al. (2016)), where the sources of the signals, or alternatively their modalities, are different. Some examples include audio-visual speech classification (Ngiam et al., 2011), image and text retrieval (Kiros et al.), sentiment analysis and semi-supervised learning (Guillaumin et al., 2010).
Combining classifiers from different input sources presents multiple challenges.
First, classifiers vary in their discriminative power; thus, an optimal unification method should be able to adapt itself to specific combinations of classifiers. Second, different data sources have different state-of-the-art architectures, typically deep neural networks, which vary in depth, width, and optimization algorithm, making it non-trivial to merge them. Moreover, a multi-modal architecture potentially has more local minima that may give unsatisfying results. Finally, most of the publicly available real-world big-data classification datasets, an essential building block of deep learning systems, typically contain only one data type.
Nevertheless, the potential performance boost of multi-modal architectures has motivated researchers over the years. Frome et al. (2013) combined an image network (Krizhevsky et al., 2012) with a skip-gram language model in order to improve classification results on ImageNet. However, they were not able to improve the top-1 accuracy prediction, possibly because the text input they used (image labels) didn't contain a lot of information. Other works used multi-modality to learn good embeddings but did not present results on classification benchmarks (Lynch et al., 2015; Kiros et al.; Gong et al., 2014). Kannan et al. (2011) suggested improving text-based product classification by adding an image signal, training an image classifier and learning a decision rule between the two. However, they only experimented with a small dataset and a low number of labels, and it is not clear how to scale their method to the extreme multi-class multi-label applications that characterize real-world problems in e-commerce.

Figure 2: Multi-modal fusion architectures. Left, top: feature-level fusion. Each modality is processed in a different pipe. After a certain depth, the pipes are concatenated, followed by multi-modal layers. Left, bottom: decision-level fusion. Each modality is processed in a different pipe and gives a prediction. A policy network learns to decide which classifier to use. Right: the proposed multi-modal architecture.

Adding modalities can improve the classification of products that have a non-informative input source (e.g., image or text). In e-commerce, for example, classifiers that rely exclusively on text suffer from short and non-informative titles, differences in style between vendors and overlapping text across categories (i.e., a word that helps to classify a certain class may appear in other classes). Figure 1 presents a few examples of products that have only one informative input type. These examples suggest that a multi-modal architecture can potentially outperform a classifier with a single input type.
Most unification techniques for multi-modal learning are partitioned between feature-level fusion techniques and decision-level fusion techniques (Figure 2, left).

2.1 FEATURE-LEVEL FUSION
Feature-level fusion is characterized by three phases: (a) learning a representation, (b) supervised training, and (c) testing. The different unification techniques are distinguished by the availability of the data in each phase (Guillaumin et al., 2010).
For example, in cross-modality training, the representation is learned from all the modalities, but only one modality is available for supervised training and testing. In other cases, all of the modalities are available at all stages, but we may want (or not) to limit their usage given a certain budget. Another source of distinction is the order in which phases (a) and (b) are carried out. For example, one may first learn the representation and then learn a classifier from it, or learn both the representation and the classifier in parallel. In the deep learning context, there are two common approaches. In the first approach, we learn an end-to-end deep NN; the NN has multiple input-specific pipes that include a data source followed by input-specific layers. After a certain depth, the pipes are concatenated, followed by additional layers, such that the NN is trained end-to-end. In the second approach, input-specific deep NNs are learned first, and a multi-modal representation vector is created by concatenating the input-specific feature vectors (e.g., the neural network's last hidden layer). Then, an additional classifier learns to classify from the multi-modal representation vector. While multi-modal methods have shown potential to boost performance on small datasets (Poria et al., 2016), or on top-k accuracy measures (Frome et al., 2013), we are not familiar with works that succeeded in applying them to a large-scale classification problem and obtained a performance improvement in top-1 accuracy.

2.2 DECISION-LEVEL FUSION
In this approach, an input-specific classifier is learned for each modality, and the goal is to find a decision rule between them. The decision rule is typically a pre-defined rule (Guillaumin et al., 2010) and is not learned from the data. For example, Poria et al. (2016) chose the classifier with the maximal confidence, while Krizhevsky et al. (2012) average classifier predictions. However, in this work we show that learning the decision rule yields significantly better results on our data.

3 METHODS AND ARCHITECTURES
In this section, we give the details of our multi-modal product classification architecture. The architecture is composed of a text CNN and an image CNN, which are forged together by a policy network, as can be seen in Figure 2, right.

3.1 MULTI-LABEL COST FUNCTION
Our cost function is the weighted sigmoid cross entropy with logits, a common cost function for multi-label problems. Let x be the logits, z be the targets, q be a positive weight coefficient, used as a multiplier for the positive targets, and sigma(x) = 1 / (1 + exp(-x)). The loss is given by:
Cost(x, z; q) = -q * z * log(sigma(x)) - (1 - z) * log(1 - sigma(x)) = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
The positive coefficient q allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error. We found it to have a significant effect in practice. A small code sketch of this loss follows below.
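The following NumPy sketch mirrors the formula above in its numerically stable form (the right-hand side of the equality). The function name and interface are our own; this is the quantity that TensorFlow-style libraries expose as a weighted sigmoid cross entropy computed from logits.

import numpy as np

def weighted_sigmoid_ce_with_logits(x, z, q):
    """Cost(x, z; q) = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)),
    the stable form of -q*z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x)).
    x: logits, z: binary targets (one column per label), q: positive weight."""
    softplus_neg = np.logaddexp(0.0, -x)   # log(1 + exp(-x)) without overflow
    return (1.0 - z) * x + (1.0 + (q - 1.0) * z) * softplus_neg

# Sanity check against the definition on a few logits.
x = np.array([-3.0, 0.5, 4.0])
z = np.array([0.0, 1.0, 1.0])
q = 30.0   # the value found to work best in Section 4.2
s = 1.0 / (1.0 + np.exp(-x))
direct = -q * z * np.log(s) - (1.0 - z) * np.log(1.0 - s)
assert np.allclose(weighted_sigmoid_ce_with_logits(x, z, q), direct)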
3.2 TEXT CLASSIFICATION
For the text signal, we use the text CNN architecture of Kim (2014). The first layer embeds words into low-dimensional vectors using a random embedding (different from the original paper). The next layer performs convolutions over time on the embedded word vectors using multiple filter sizes (3, 4 and 5), where we use 128 filters of each size. Next, we max-pool-over-time the result of each convolution filter and concatenate all the results together. We add a dropout regularization layer (0.5 dropping rate), followed by a fully connected layer, and classify the result using a softmax layer. An illustration of the text CNN can be seen in Figure 2.

3.3 IMAGE CLASSIFICATION
For the image signal, we use the VGG network (Simonyan & Zisserman, 2014). The input to the network is a fixed-size 224x224 RGB image. The image is passed through a stack of convolutional layers with a very small receptive field: 3x3. The convolution stride is fixed to 1 pixel; the spatial padding of the convolutional layers is 1 pixel. Spatial pooling is carried out by five max-pooling layers, which follow some of the convolutional layers. Max-pooling is performed over a 2x2 pixel window, with stride 2. The stack of convolutional layers is followed by three fully-connected (FC) layers: the first two have 4096 channels each, and the third performs 2890-way product classification and thus contains 2890 channels (one for each class). All hidden layers are followed by a ReLU non-linearity. The exact details can be seen in Figure 2.

3.4 MULTI-MODAL ARCHITECTURE
We experimented with four types of multi-modal architectures. (1) Learning decision-level fusion policies from different inputs. (1a) Policies that use the text and image CNNs' class probabilities as input (Figure 2). We experimented with architectures that have one or two fully connected layers (the two-layered policy uses 10 hidden units and a ReLU non-linearity between them). (1b) Policies that use the text and/or image as input. For these policies, the architecture of the policy network was either the text CNN or the VGG network. In order to train policies, labels are collected from the image and text networks' predictions, i.e., the label is 1 if the image network made a correct prediction while the text network made a mistake, and 0 otherwise. At evaluation time, we use the policy predictions to select between the models, i.e., if the policy prediction is 1 we use the image network, and we use the text network otherwise. (2) Pre-defined policies that average the predictions of the different CNNs or choose the CNN with the highest confidence. (3) End-to-end feature-level fusion: each input type is processed by its specific CNN. We concatenate the last hidden layers of the CNNs and add one or two fully connected layers. All the layers are trained together end-to-end (we also tried to initialize the input-specific weights from pre-trained single-modal networks). (4) Multi-step feature-level fusion. As in (3), we create a shared representation vector by concatenating the last hidden layers. However, we now keep the shared representation fixed and learn a new classifier from it.

4 EXPERIMENTS
4.1 SETUP
Our dataset contains 1.2 million products (title, image and shelf) that we collected from Walmart.com (the products are offered online and can be viewed at the website) and that were deemed the hardest to classify by the current production system. We divide the data into training (1.1 million), validation (50k) and test (50k) sets. We train both the image network and the text network on the training dataset and evaluate them on the test dataset. The policy is trained on the validation dataset and is also evaluated on the test dataset. The objective is to classify the product's shelf, from 2890 possible choices. Each product is typically assigned to more than one shelf (3 on average), and the network is considered accurate if its most probable shelf is one of them; we give a small sketch of this evaluation rule below.
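As a concrete reading of this evaluation rule, the sketch below scores a batch of predictions: a product counts as correct when the argmax over the shelf scores lands inside that product's label set. This is our own illustrative restatement, not code from the paper.

import numpy as np

def top1_multilabel_accuracy(scores, label_sets):
    """scores: (n_products, n_shelves) array of per-shelf model scores.
    label_sets: per-product sets of correct shelf indices (3 on average).
    A product is counted correct if its most probable shelf is in the set."""
    preds = scores.argmax(axis=1)
    hits = [p in labels for p, labels in zip(preds, label_sets)]
    return float(np.mean(hits))

# Toy example with 4 products and 5 shelves.
scores = np.array([[0.1, 0.7, 0.2, 0.0, 0.0],
                   [0.6, 0.1, 0.1, 0.1, 0.1],
                   [0.0, 0.2, 0.2, 0.5, 0.1],
                   [0.3, 0.3, 0.1, 0.1, 0.2]])
label_sets = [{1, 2}, {2}, {3, 4}, {0}]
print(top1_multilabel_accuracy(scores, label_sets))   # 0.75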
4.2 TRAINING THE TEXT ARCHITECTURE
Preprocessing: we build a dictionary of all the words in the training data and embed each word, using a random embedding, into a one-hundred-dimensional vector. We trim titles with more than 40 words and pad shorter titles with nulls.
We experimented with different batch sizes, dropout rates, and filter strides, but found that the vanilla architecture (Kim, 2014) works well on our data. This is consistent with Zhang & Wallace (2015), who showed that text CNNs are not very sensitive to hyper-parameters. We tuned the cost function's positive coefficient q and found that the value 30 performed best in practice (we will also use this value for the image network). The best CNN that we trained classified 70.1% of the products from the test set correctly (Table 1).

4.3 TRAINING THE IMAGE ARCHITECTURE
Preprocessing: we re-size all the images to 224 x 224 pixels and subtract the image mean.
The VGG network that we trained classified 57% of the products from the test set correctly. This is a bit disappointing if we compare it to the performance of the VGG network on ImageNet (about 75%). There are a few differences between these two datasets that may explain this gap. First, our data has 3 times more classes and contains multiple labels per image, making the classification harder; second, Figure 1 implies that some of our images are not informative for shelf classification. Some works claim that the features learned by VGG on ImageNet are global feature extractors (Lynch et al., 2015). We therefore decided to use the weights learned by VGG on ImageNet and learn only the last layer. This configuration yielded only 36.7% accuracy. We believe that the reason is that some of the ImageNet classes are irrelevant for e-commerce (e.g., vehicles and animals) while some relevant categories are misrepresented (e.g., electronics and office equipment). It could also be that our images follow some specific pattern of white background, well-lit studio, etc., that characterizes e-commerce.

4.4 ERROR ANALYSIS
Is a picture worth a thousand words? Inspecting Figure 3, we can see that the text network outperformed the image network on this dataset, classifying more products correctly. Similar results were reported before (Pyo et al., 2010; Kannan et al., 2011), but to the best of our knowledge, this is the first work that compares state-of-the-art text and image CNNs on a real-world large-scale e-commerce dataset.
What is the potential of multi-modality? We identified that for 7.8% of the products the image network made a correct prediction while the text network was wrong. This observation is encouraging, since it implies that there is a relatively big potential to harness via multi-modality. We find this large gap surprising, since different neural networks applied to the same problem tend to make the same mistakes (Szegedy et al., 2013).
Unification techniques for multi-modal problems typically use the last hidden layer of each network as features (Frome et al., 2013; Lynch et al., 2015; Pyo et al., 2010). We therefore decided to visualize the activations of this layer using a tSNE map (Maaten & Hinton, 2008). Figure 3 depicts such a map for the activations of the text model (the image model yielded similar results); a sketch of how such a map is built is given below.
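For readers who want to reproduce this kind of map, the sketch below projects last-hidden-layer activations to 2D with scikit-learn's t-SNE and colors each product by which of the two networks classified it correctly. The activations and correctness masks are random placeholders, and the paper does not specify its t-SNE settings, so the defaults here are assumptions.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
activations = rng.normal(size=(2000, 128))     # placeholder text-CNN features
text_correct = rng.random(2000) < 0.701        # ~70.1% text accuracy (Sec. 4.2)
image_correct = rng.random(2000) < 0.57        # ~57% image accuracy (Sec. 4.3)

xy = TSNE(n_components=2, random_state=0).fit_transform(activations)

groups = {
    "both correct": text_correct & image_correct,
    "title only": text_correct & ~image_correct,
    "image only": ~text_correct & image_correct,   # the interesting 7.8%
    "both wrong": ~text_correct & ~image_correct,
}
for name, m in groups.items():
    plt.scatter(xy[m, 0], xy[m, 1], s=4, label=name)
plt.legend()
plt.show()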
In particular,6Under review as a conference paper at ICLR 2017Title is correct, image is not: 21.9%Image is correct, title is not:7.8% Both models are wrong: 22.4%Both models are correct: 47.9%Figure 3: Error analysis using a tSNE map, created from the last hidden layer neural activations ofthe text model.we were looking for regions in the tSNE map where the image predictions are correct and the textis wrong (Figure 3, green). Finding such a region will imply that a policy network can learn gooddecision boundaries. However, we can see that there are no well-defined regions in the tSNE mapswhere the image network is correct and the title is wrong (green), thus implying that it might be hardto identify these products using the activations of the last layers.4.5 M ULTI -MODAL UNIFICATION TECHNIQUESOur error analysis experiment highlights the potential of merging image and text. Still, we foundit hard to achieve the upper bound provided by the error analysis in practice. We now describe thepolicies that managed to achieve performance boost in top-1 accuracy %over the text and imagenetworks, and then provide discussion on other approaches that we tried but didn’t work.Decision-level fusion: We trained policies from different data sources (e.g., title, image, and eachCNN class probabilities), using different architectures and different hyperparameters. Looking atTable 1, we can see that the best policies were trained using the class probabilities (the softmaxprobabilities) of the image and text CNNs as inputs. The amount of class probabilities that wereused (top-1, top-3 or all) did not have a significant effect on the results, indicating that the top-1probability contains enough information to learn good policies. This result makes sense since thetop-1 probability measures the confidence of the network in making a prediction. Still, the top-3probabilities performed slightly better, indicating that the difference between the top probabilitiesmay also matter. We can also see that the 2-layer architecture outperformed the 1-layer, indicatingthat a linear policy is too simple, and deeper models can yield better results. Last, the cost functionpositive coefficient q had a big impact on the results. We can see that for q= 1, the policy networkis more accurate in its prediction however it achieves worse results on shelf classification. For q= 5we get the best results, while higher values of q(e.g., 7or10) resulted in inaccurate policies that didnot perform well in practice.Policy input # layers q Text Image Policy Oracle Policy accuracyCP-1 1 5 70.1 56.7 71.4 (+1.3) 77.5 (+7.8) 86.4CP-1 2 5 70.1 56.6 71.5 (+1.4) 77.6 (+7.5) 84.2CP-all 2 5 70.1 56.6 71.4 (+1.3) 77.6 (+7.5) 84.6CP-3 2 5 70.2 56.7 71.8 (+1.6) 77.7 (+7.5) 84.2CP-3 2 1 70.2 56.7 70.2 (+0) 77.7 (+7.5) 92.5CP-3 2 7 70.0 56.6 71.0 (+1.0) 77.5 (+7.5) 79.1CP-3 2 10 70.1 56.6 70.7 (+0.6) 77.6 (+7.5) 75.0Image - 5 70.1 56.6 68.5(-1.6) 77.6 (+7.5) 80.3Text - 5 70.1 56.6 69.0 (-1.1) 77.6 (+7.5) 83.7Both - 5 70.1 56.6 66.1 (-4) 77.6 (+7.5) 73.7Fixed-Mean - - 70.1 56.7 65.4 (+0) 77.6 (+7.5) -Fixed-Max - - 70.1 56.7 60.1 (-10) 77.7 (+7.6) 38.2Table 1: Decision-level fusion results. Each row presents a different policy configuration (definedby the policy input, the number of layers and the value of q), followed by the accuracy %of theimage, text, policy and oracle (optimal policy) classifiers on the test dataset. 
While it may not seem surprising that combining text and image improves accuracy, in practice we found it extremely hard to leverage this potential. To the best of our knowledge, this is the first work that demonstrates a direct performance improvement on top-1 classification accuracy from using images and text on a large-scale classification problem.
We experimented with pre-defined policies that do not learn from the data. Specifically, we tried to average the logits, following Krizhevsky et al. (2012) and Simonyan & Zisserman (2014), and to choose the network with the maximal confidence, following Poria et al. (2016). Both of these experiments yielded significantly worse results, probably because the text network is much more accurate than the image one (Table 1). We also tried to learn policies from the text and/or the image input, using a policy network which is either a text CNN, a VGG network or a combination. However, all of these experiments resulted in policies that overfit the data and performed worse than the title model on the test data (Table 1). We also experimented with early stopping criteria, various regularization methods (dropout, l1, l2) and reduced model sizes, but none could make the policy network generalize.
Feature-level fusion: Training a CNN end-to-end can be tricky, as each input source has its own specific architecture, with a specific learning rate and optimization algorithm. We experimented with training the network end-to-end, but also with first training each part separately and then learning the concatenated parts. We tried different unification approaches, such as gating functions (Srivastava et al., 2015), cross products and different numbers of fully connected layers after the concatenation. These experiments resulted in models that were inferior to the text model. While this may seem surprising, the only successful feature-level fusion that we are aware of (Frome et al., 2013) was not able to gain a top-1 accuracy improvement either.

5 CONCLUSIONS
In this work, we investigated a multi-modal multi-class multi-label product classification problem and presented results on a challenging real-world dataset that we collected from Walmart.com. We discovered that the text network outperforms the image network on our dataset, and observed a big potential from fusing text and image inputs. Finally, we suggested a multi-modal decision-level fusion approach that leverages state-of-the-art results from image and text classification and forges them into a multi-modal architecture that outperforms both.
State-of-the-art image CNNs are much larger than text CNNs, and take more time to train and to run. Thus, extracting image features at run time, or getting the image network's predictions, may be prohibitively expensive. In this context, an interesting observation is that feature-level fusion methods require using the image signal for each product, while decision-level fusion methods require using the image network only selectively, making them more appealing.
Moreover, our experiments suggest that decision-level fusion performs better than feature-level fusion in practice.
Finally, we were only able to realize a fraction of the potential of multi-modality. In the future, we plan to investigate deeper policy networks and more sophisticated measures of confidence. We also plan to investigate ensembles of image networks (Krizhevsky et al., 2012) and text networks (Pyo et al., 2010). We believe that the insights from training policy networks will eventually lead us to train end-to-end differentiable multi-modal networks. | B1e7GJfVg | 4: Ok but not good enough - rejection | This paper presents a system approach to combine multiple modalities to perform classification in a practical scenario (e-commerce).
In general, I find the proposed approach in the paper sound and solid, but I do not see novelty in it: feature fusion and decision-time fusion are both standard practices in multi-modal analysis, and the rest of the paper offers no surprises in implementing such approaches. This seems to be a better fit for venues that focus more on production systems, and a bad fit for ICLR, where the focus is more on research into novel algorithms and theories. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
|
rJJ3YU5ge | ICLR.cc/2017/conference | 2017 | Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce | ["Tom Zahavy", "Alessandro Magnani", "Abhinandan Krishnan", "Shie Mannor"] | Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision-level fusion approach for multi-modal product classification using text and image inputs. We train input-specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy % over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc. | ["Multi-modal learning", "Deep learning"] | ABSTRACT

Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision-level fusion approach for multi-modal product classification using text and image inputs. We train input-specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy % over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc.

1 INTRODUCTION

Product classification is a key issue in e-commerce domains. A product is typically represented by metadata such as its title, image, color, weight and so on, and most of these are assigned manually by the seller. Once a product is uploaded to an e-commerce website, it is typically placed in multiple categories. Categorizing products helps e-commerce websites provide customers a better shopping experience, for example by efficiently searching the product catalog or by developing recommendation systems. A few examples of categories are internal taxonomies (for business needs), public taxonomies (such as groceries and office equipment) and the product's shelf (a group of products that are presented together on an e-commerce web page). These categories vary with time in order to optimize search efficiency and to account for special events such as holidays and sports events. In order to address these needs, e-commerce websites typically hire editors and use crowdsourcing platforms to classify products.
However, due to the high volume of new products uploaded daily and the dynamic nature of the categories, machine learning solutions for product classification are very appealing as a means to reduce time and economic costs. Thus, precisely categorizing items emerges as a significant issue in e-commerce domains.

A shelf is a group of products presented together on an e-commerce website page, and usually contains products with a given theme/category (e.g., women's boots, folding tables). Product-to-shelf classification is a challenging problem due to data size, category skewness, and noisy metadata and labels. In particular, it presents three fundamental challenges for machine learning algorithms. First, it is typically a multi-class problem with thousands of classes. Second, a product may belong to multiple shelves, making it a multi-label problem. And last, a product has both an image and a text input, making it a multi-modal problem.

Product classification is typically addressed as a text classification problem because most metadata of items are represented as textual features (Pyo et al., 2010). Text classification is a classic topic in natural language processing, in which one needs to assign predefined categories to text inputs.

Figure 1: Predicting shelves from product metadata obtained from Walmart.com. Left: products that have both an image and a title that contain useful information for predicting the product's shelf. Center, top: the boots title gives specific information about the boots but does not mention that the product is a boot, making it harder to predict the shelf. Center, bottom: the baby toddler shirt's title only refers to the text on the toddler shirt and does not mention that it is a product for babies. Right, top: the umbrella image contains information about its color but it is hard to understand that the image is referring to an umbrella. Right, bottom: the lips pencil image looks like a regular pencil, making it hard to predict that it belongs to the moisturizers shelf.

Standard methods follow a classical two-stage scheme of extraction of (handcrafted) features, followed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF. On the other hand, deep neural networks use generic priors instead of specific domain knowledge (Bengio et al., 2013) and have been shown to give competitive results on text classification tasks (Zhang et al., 2015). In particular, convolutional neural networks (CNNs) (Kim, 2014; Zhang et al., 2015; Conneau et al., 2016) and recurrent NNs (Lai et al., 2015; Pyo et al., 2010; Xiao & Cho, 2016) can efficiently capture the sequentiality of the text. These methods are typically applied directly to distributed embeddings of words (Kim, 2014; Lai et al., 2015; Pyo et al., 2010) or characters (Zhang et al., 2015; Conneau et al., 2016; Xiao & Cho, 2016), without any knowledge of the syntactic or semantic structure of the language. However, all of these architectures were only applied to problems with a small number of labels (~20), while e-commerce shelf classification problems typically have thousands of labels, with multiple labels per product.

In image classification, CNNs are widely considered the best models, and achieve state-of-the-art results on the ImageNet Large-Scale Visual Recognition Challenge (Russakovsky et al., 2015; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015).
However, as good as they are, the classification accuracy of machine learning systems is often limited in problems with many classes of object categories. One remedy is to leverage data from other sources, such as text data. However, studies on multi-modal deep learning for large-scale item categorization are still rare, to the best of our knowledge, particularly in settings where there is a significant difference in discriminative power between the two types of signals.

In this work, we propose a multi-modal deep neural network model for product classification. Our design principle is to leverage the specific prior for each data type by using the current state-of-the-art classifiers from the image and text domains. The final architecture has 3 main components (Figure 2, right): a text CNN (Kim, 2014), an image CNN (Simonyan & Zisserman, 2014) and a policy network that learns to choose between them. We collected a large-scale dataset of 1.2 million products from the Walmart.com website. Each product has a title and an image and needs to be classified to a shelf (label), with 2890 possible shelves. Examples from this dataset can be seen in Figure 1 and are also available online at the Walmart.com website. For most of the products, both the image and the title contain relevant information for customers. However, it is interesting to observe that for some of the products, both input types may not be informative for shelf prediction (Figure 1). This observation motivates our work and raises interesting questions: which input type is more useful for product classification? Is it possible to forge the inputs into a better architecture?

In our experiments, we show that the text CNN outperforms the image one. However, for a relatively large number of products (~8%), the image CNN is correct while the text CNN is wrong, indicating a potential gain from using a multi-modal architecture. We also show that the policy is able to choose between the two models and give a performance improvement over both state-of-the-art networks. To the best of our knowledge, this is the first work that demonstrates a performance improvement on top-1 classification accuracy by using images and text on a large-scale classification problem. In particular, our main contributions are:

- We demonstrate that the text classification CNN (Kim, 2014) outperforms the VGG network (Simonyan & Zisserman, 2014) on a real-world large-scale product-to-shelf classification problem.
- We analyze the errors made by the different networks and show the potential gain of multi-modality.
- We propose a novel decision-level fusion policy that learns to choose between the text and image networks and improves over both.

2 MULTI-MODALITY

Over the years, a large body of research has been devoted to improving classification using ensembles of classifiers (Kittler et al., 1998; Hansen & Salamon, 1990). Inspired by their success, these methods have also been used in multi-modal settings (e.g., Guillaumin et al. (2010); Poria et al. (2016)), where the sources of the signals, or alternatively their modalities, are different. Some examples include audio-visual speech classification (Ngiam et al., 2011), image and text retrieval (Kiros et al.), sentiment analysis and semi-supervised learning (Guillaumin et al., 2010).

Combining classifiers from different input sources presents multiple challenges.
First, classifiers vary in their discriminative power; thus, an optimal unification method should be able to adapt itself to specific combinations of classifiers. Second, different data sources have different state-of-the-art architectures, typically deep neural networks, which vary in depth, width, and optimization algorithm, making it non-trivial to merge them. Moreover, a multi-modal architecture potentially has more local minima that may give unsatisfying results. Finally, most of the publicly available real-world big data classification datasets, an essential building block of deep learning systems, typically contain only one data type.

Nevertheless, the potential performance boost of multi-modal architectures has motivated researchers over the years. Frome et al. (2013) combined an image network (Krizhevsky et al., 2012) with a skip-gram language model in order to improve classification results on ImageNet. However, they were not able to improve the top-1 accuracy, possibly because the text input they used (image labels) didn't contain much information. Other works used multi-modality to learn good embeddings but did not present results on classification benchmarks (Lynch et al., 2015; Kiros et al.; Gong et al., 2014). Kannan et al. (2011) suggested improving text-based product classification by adding an image signal, training an image classifier and learning a decision rule between the two. However, they only experimented with a small dataset and a low number of labels, and it is not clear how to scale their method to the extreme multi-class multi-label applications that characterize real-world problems in e-commerce.

Figure 2: Multi-modal fusion architectures. Left, top: feature-level fusion. Each modality is processed in a different pipe. After a certain depth, the pipes are concatenated, followed by multi-modal layers. Left, bottom: decision-level fusion. Each modality is processed in a different pipe and gives a prediction. A policy network learns to decide which classifier to use. Right: the proposed multi-modal architecture.

Adding modalities can improve the classification of products that have a non-informative input source (e.g., image or text). In e-commerce, for example, classifiers that rely exclusively on text suffer from short and non-informative titles, differences in style between vendors and overlapping text across categories (i.e., a word that helps to classify a certain class may appear in other classes). Figure 1 presents a few examples of products that have only one informative input type. These examples suggest that a multi-modal architecture can potentially outperform a classifier with a single input type.

Most unification techniques for multi-modal learning are partitioned between feature-level fusion techniques and decision-level fusion techniques (Figure 2, left).

2.1 FEATURE-LEVEL FUSION

Feature-level fusion is characterized by three phases: (a) learning a representation, (b) supervised training, and (c) testing. The different unification techniques are distinguished by the availability of the data in each phase (Guillaumin et al., 2010).
For example, in cross-modality training, the representation is learned from all the modalities, but only one modality is available for supervised training and testing. In other cases, all of the modalities are available at all stages, but we may want (or not) to limit their usage given a certain budget. Another source of distinction is the order in which phases (a) and (b) are performed. For example, one may first learn the representation and then learn a classifier from it, or learn both the representation and the classifier in parallel. In the deep learning context, there are two common approaches. In the first approach, we learn an end-to-end deep NN; the NN has multiple input-specific pipes that include a data source followed by input-specific layers. After a certain depth, the pipes are concatenated, followed by additional layers, such that the NN is trained end-to-end. In the second approach, input-specific deep NNs are learned first, and a multi-modal representation vector is created by concatenating the input-specific feature vectors (e.g., each neural network's last hidden layer). Then, an additional classifier learns to classify from the multi-modal representation vector. While multi-modal methods have shown potential to boost performance on small datasets (Poria et al., 2016), or on top-k accuracy measures (Frome et al., 2013), we are not familiar with works that succeeded in applying them to a large-scale classification problem and obtained a performance improvement in top-1 accuracy.

2.2 DECISION-LEVEL FUSION

In this approach, an input-specific classifier is learned for each modality, and the goal is to find a decision rule between them. The decision rule is typically a pre-defined rule (Guillaumin et al., 2010) and is not learned from the data. For example, Poria et al. (2016) chose the classifier with the maximal confidence, while Krizhevsky et al. (2012) averaged classifier predictions. However, in this work we show that learning the decision rule yields significantly better results on our data.

3 METHODS AND ARCHITECTURES

In this section, we give the details of our multi-modal product classification architecture. The architecture is composed of a text CNN and an image CNN which are forged together by a policy network, as can be seen in Figure 2, right.

3.1 MULTI-LABEL COST FUNCTION

Our cost function is the weighted sigmoid cross entropy with logits, a common cost function for multi-label problems. Let $x$ be the logits, $z$ be the targets, $q$ be a positive weight coefficient, used as a multiplier for the positive targets, and $\sigma(x) = \frac{1}{1 + \exp(-x)}$. The loss is given by:

$$\mathrm{Cost}(x, z; q) = -qz \log(\sigma(x)) - (1 - z) \log(1 - \sigma(x)) = (1 - z)x + (1 + (q - 1)z) \log(1 + \exp(-x)).$$

The positive coefficient $q$ allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error. We found it to have a significant effect in practice.
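To make the loss concrete, the following is a minimal sketch in PyTorch (the framework choice and the function name are our assumptions; the paper does not name an implementation). It follows the closed form above exactly, using softplus(-x) as a numerically stable way to compute log(1 + exp(-x)):

```python
import torch
import torch.nn.functional as F

def weighted_sigmoid_ce_with_logits(x: torch.Tensor, z: torch.Tensor, q: float) -> torch.Tensor:
    """Cost(x, z; q) = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)).

    x: logits (one per shelf), z: binary multi-label targets, q: positive weight.
    F.softplus(-x) computes log(1 + exp(-x)) without overflow.
    """
    return (1 - z) * x + (1 + (q - 1) * z) * F.softplus(-x)

# Toy usage with 4 shelves and a single positive label; q = 30 is the value
# the paper later reports tuning for the input-specific CNNs.
logits = torch.tensor([2.0, -1.0, 0.5, -3.0])
targets = torch.tensor([1.0, 0.0, 0.0, 0.0])
loss = weighted_sigmoid_ce_with_logits(logits, targets, q=30.0).mean()
```

This is the same formula that TensorFlow exposes as tf.nn.weighted_cross_entropy_with_logits: setting q > 1 up-weights false negatives on the positive labels, which is how it trades recall against precision.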
3.2 TEXT CLASSIFICATION

For the text signal, we use the text CNN architecture of Kim (2014). The first layer embeds words into low-dimensional vectors using a random embedding (different from the original paper). The next layer performs convolutions over time on the embedded word vectors using multiple filter sizes (3, 4 and 5), where we use 128 filters of each size. Next, we max-pool-over-time the result of each convolution filter and concatenate all the results together. We add a dropout regularization layer (0.5 dropout rate), followed by a fully connected layer, and classify the result using a softmax layer. An illustration of the text CNN can be seen in Figure 2.

3.3 IMAGE CLASSIFICATION

For the image signal, we use the VGG network (Simonyan & Zisserman, 2014). The input to the network is a fixed-size 224x224 RGB image. The image is passed through a stack of convolutional layers with a very small receptive field: 3x3. The convolution stride is fixed to 1 pixel; the spatial padding of the convolutional layers is 1 pixel. Spatial pooling is carried out by five max-pooling layers, which follow some of the convolutional layers. Max-pooling is performed over a 2x2 pixel window, with stride 2. The stack of convolutional layers is followed by three fully connected (FC) layers: the first two have 4096 channels each, and the third performs 2890-way product classification and thus contains 2890 channels (one for each class). All hidden layers are followed by a ReLU non-linearity. The exact details can be seen in Figure 2.

3.4 MULTI-MODAL ARCHITECTURE

We experimented with four types of multi-modal architectures. (1) Learning decision-level fusion policies from different inputs. (1a) Policies that use the text and image CNNs' class probabilities as input (Figure 2). We experimented with architectures that have one or two fully connected layers (the two-layered policy uses 10 hidden units and a ReLU non-linearity between them). (1b) Policies that use the text and/or the image as input. For these policies, the architecture of the policy network was either the text CNN or the VGG network. In order to train policies, labels are collected from the image and text networks' predictions, i.e., the label is 1 if the image network made a correct prediction while the text network made a mistake, and 0 otherwise. At evaluation, we use the policy predictions to select between the models, i.e., if the policy prediction is 1 we use the image network, and use the text network otherwise. (2) Pre-defined policies that average the predictions of the different CNNs or choose the CNN with the highest confidence. (3) End-to-end feature-level fusion, where each input type is processed by its specific CNN. We concatenate the last hidden layers of the CNNs and add one or two fully connected layers. All the layers are trained together end-to-end (we also tried to initialize the input-specific weights from pre-trained single-modal networks). (4) Multi-step feature-level fusion. As in (3), we create a shared representation vector by concatenating the last hidden layers. However, we now keep the shared representation fixed and learn a new classifier from it.
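A rough PyTorch sketch of variant (1a) and its training labels is given below, under the assumption that the policy consumes the top-k softmax probabilities of each CNN; the class and function names are illustrative, while the 10 hidden units, the ReLU, and the label rule come from the description above:

```python
import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    """Two-layer policy over the CNNs' class probabilities (variant 1a)."""

    def __init__(self, num_probs: int = 3, hidden: int = 10):
        super().__init__()
        # Input: top-k softmax probabilities from the text CNN and the image CNN.
        self.net = nn.Sequential(
            nn.Linear(2 * num_probs, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "use the image network"
        )

    def forward(self, text_probs: torch.Tensor, image_probs: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([text_probs, image_probs], dim=1)).squeeze(1)

def policy_labels(text_correct: torch.Tensor, image_correct: torch.Tensor) -> torch.Tensor:
    """Label is 1 only where the image network is right and the text network is wrong."""
    return (image_correct & ~text_correct).float()
```

The policy itself can be trained with the same weighted sigmoid loss as above; at evaluation, products with a positive policy logit take the image network's prediction, and the text network's otherwise.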
4 EXPERIMENTS

4.1 SETUP

Our dataset contains 1.2 million products (title, image and shelf) that we collected from Walmart.com (offered online and viewable at the website) and that were deemed the hardest to classify by the current production system. We divide the data into training (1.1 million), validation (50k) and test (50k). We train both the image network and the text network on the training dataset and evaluate them on the test dataset. The policy is trained on the validation dataset and is also evaluated on the test dataset. The objective is to classify the product's shelf, from 2890 possible choices. Each product is typically assigned to more than one shelf (3 on average), and the network is considered accurate if its most probable shelf is one of them.

4.2 TRAINING THE TEXT ARCHITECTURE

Preprocessing: we build a dictionary of all the words in the training data and embed each word, using a random embedding, into a one-hundred-dimensional vector. We trim titles with more than 40 words and pad shorter titles with nulls.

We experimented with different batch sizes, dropout rates, and filter strides, but found that the vanilla architecture (Kim, 2014) works well on our data. This is consistent with Zhang & Wallace (2015), who showed that text CNNs are not very sensitive to hyperparameters. We tuned the cost function's positive coefficient parameter q and found that the value 30 performed best in practice (we will also use this value for the image network). The best CNN that we trained classified 70.1% of the products from the test set correctly (Table 1).
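Putting Sections 3.2 and 4.2 together, a minimal PyTorch sketch of the text architecture might look as follows; the framework and names such as vocab_size are our assumptions, while the filter sizes, filter count, embedding size, title length and dropout rate are the ones stated above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Kim (2014)-style CNN: random embeddings, filters of sizes 3/4/5,
    max-over-time pooling, 0.5 dropout, and a linear layer over 2890 shelves."""

    def __init__(self, vocab_size: int, num_shelves: int = 2890,
                 embed_dim: int = 100, num_filters: int = 128,
                 filter_sizes: tuple = (3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in filter_sizes
        )
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(filter_sizes), num_shelves)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, 40) word ids; titles are trimmed/padded to 40 words.
        x = self.embedding(tokens).transpose(1, 2)            # (batch, 100, 40)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        h = self.dropout(torch.cat(pooled, dim=1))            # (batch, 384)
        return self.fc(h)                                     # shelf logits
```

During training the logits feed the weighted loss of Section 3.1; the softmax over them yields the class probabilities later consumed by the policy.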
4.3 TRAINING THE IMAGE ARCHITECTURE

Preprocessing: we re-size all the images to 224 x 224 pixels and subtract the image mean.

The VGG network that we trained classified 57% of the products from the test set correctly. This is a bit disappointing if we compare it to the performance of the VGG network on ImageNet (~75%). There are a few differences between these two datasets that may explain this gap. First, our data has 3 times more classes and contains multiple labels per image, making the classification harder; and second, Figure 1 implies that some of our images are not informative for shelf classification. Some works claim that the features learned by VGG on ImageNet are global feature extractors (Lynch et al., 2015). We therefore decided to use the weights learned by VGG on ImageNet and learn only the last layer. This configuration yielded only 36.7% accuracy. We believe that the reason is that some of the ImageNet classes are irrelevant for e-commerce (e.g., vehicles and animals) while some relevant categories are misrepresented (e.g., electronics and office equipment). It could also be that our images follow some specific pattern of white background, well-lit studio etc., that characterizes e-commerce.

4.4 ERROR ANALYSIS

Is a picture worth a thousand words? Inspecting Figure 3, we can see that the text network outperformed the image network on this dataset, classifying more products correctly. Similar results were reported before (Pyo et al., 2010; Kannan et al., 2011), but to the best of our knowledge, this is the first work that compares state-of-the-art text and image CNNs on a real-world large-scale e-commerce dataset.

What is the potential of multi-modality? We identified that for 7.8% of the products the image network made a correct prediction while the text network was wrong. This observation is encouraging since it implies that there is a relatively big potential to harness via multi-modality. We find this large gap surprising, since different neural networks applied to the same problem tend to make the same mistakes (Szegedy et al., 2013).

Unification techniques for multi-modal problems typically use the last hidden layer of each network as features (Frome et al., 2013; Lynch et al., 2015; Pyo et al., 2010). We therefore decided to visualize the activations of this layer using a tSNE map (Maaten & Hinton, 2008). Figure 3 depicts such a map for the activations of the text model (the image model yielded similar results).

Figure 3: Error analysis using a tSNE map, created from the last hidden layer neural activations of the text model. Both models are correct: 47.9%; title is correct, image is not: 21.9%; image is correct, title is not: 7.8%; both models are wrong: 22.4%.

In particular, we were looking for regions in the tSNE map where the image predictions are correct and the text is wrong (Figure 3, green). Finding such a region would imply that a policy network can learn good decision boundaries. However, we can see that there are no well-defined regions in the tSNE map where the image network is correct and the title is wrong (green), implying that it might be hard to identify these products using the activations of the last layers.

4.5 MULTI-MODAL UNIFICATION TECHNIQUES

Our error analysis experiment highlights the potential of merging image and text. Still, we found it hard to achieve the upper bound provided by the error analysis in practice. We now describe the policies that managed to achieve a performance boost in top-1 accuracy % over the text and image networks, and then discuss other approaches that we tried but that didn't work.

Decision-level fusion: We trained policies from different data sources (e.g., title, image, and each CNN's class probabilities), using different architectures and different hyperparameters. Looking at Table 1, we can see that the best policies were trained using the class probabilities (the softmax probabilities) of the image and text CNNs as inputs. The number of class probabilities that were used (top-1, top-3 or all) did not have a significant effect on the results, indicating that the top-1 probability contains enough information to learn good policies. This result makes sense, since the top-1 probability measures the confidence of the network in making a prediction. Still, the top-3 probabilities performed slightly better, indicating that the differences between the top probabilities may also matter. We can also see that the 2-layer architecture outperformed the 1-layer one, indicating that a linear policy is too simple and deeper models can yield better results. Last, the cost function's positive coefficient q had a big impact on the results. We can see that for q = 1, the policy network is more accurate in its predictions; however, it achieves worse results on shelf classification. For q = 5 we get the best results, while higher values of q (e.g., 7 or 10) resulted in inaccurate policies that did not perform well in practice.

Policy input | # layers | q  | Text | Image | Policy      | Oracle      | Policy accuracy
CP-1         | 1        | 5  | 70.1 | 56.7  | 71.4 (+1.3) | 77.5 (+7.8) | 86.4
CP-1         | 2        | 5  | 70.1 | 56.6  | 71.5 (+1.4) | 77.6 (+7.5) | 84.2
CP-all       | 2        | 5  | 70.1 | 56.6  | 71.4 (+1.3) | 77.6 (+7.5) | 84.6
CP-3         | 2        | 5  | 70.2 | 56.7  | 71.8 (+1.6) | 77.7 (+7.5) | 84.2
CP-3         | 2        | 1  | 70.2 | 56.7  | 70.2 (+0)   | 77.7 (+7.5) | 92.5
CP-3         | 2        | 7  | 70.0 | 56.6  | 71.0 (+1.0) | 77.5 (+7.5) | 79.1
CP-3         | 2        | 10 | 70.1 | 56.6  | 70.7 (+0.6) | 77.6 (+7.5) | 75.0
Image        | -        | 5  | 70.1 | 56.6  | 68.5 (-1.6) | 77.6 (+7.5) | 80.3
Text         | -        | 5  | 70.1 | 56.6  | 69.0 (-1.1) | 77.6 (+7.5) | 83.7
Both         | -        | 5  | 70.1 | 56.6  | 66.1 (-4)   | 77.6 (+7.5) | 73.7
Fixed-Mean   | -        | -  | 70.1 | 56.7  | 65.4 (+0)   | 77.6 (+7.5) | -
Fixed-Max    | -        | -  | 70.1 | 56.7  | 60.1 (-10)  | 77.7 (+7.6) | 38.2

Table 1: Decision-level fusion results. Each row presents a different policy configuration (defined by the policy input, the number of layers and the value of q), followed by the accuracy % of the image, text, policy and oracle (optimal policy) classifiers on the test dataset. The policy accuracy column presents the accuracy % of the policy in making correct predictions, i.e., choosing the image network when it made a correct prediction while the text network didn't. Numbers in (+) refer to the performance gain over the text CNN. Class Probabilities (CP) refer to the number of class probabilities used as input.
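The Policy and Oracle columns of Table 1 can be reproduced from per-product correctness indicators; the following is a small sketch (names are illustrative) assuming boolean tensors over the test set:

```python
import torch

def fusion_metrics(text_correct: torch.Tensor, image_correct: torch.Tensor,
                   use_image: torch.Tensor) -> dict:
    """Top-1 accuracy of decision-level fusion.

    text_correct / image_correct: per-product booleans, true when that network's
    most probable shelf is one of the product's true shelves; use_image: the
    policy's binary decision per product.
    """
    fused = torch.where(use_image, image_correct, text_correct)
    oracle = text_correct | image_correct  # optimal policy's upper bound
    return {
        "text": text_correct.float().mean().item(),
        "image": image_correct.float().mean().item(),
        "policy": fused.float().mean().item(),
        "oracle": oracle.float().mean().item(),
    }
```

The oracle row simply counts a product as correct whenever either network is correct, which is why it bounds every learned policy from above.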
While it may not seem surprising that combining text and image improves accuracy, in practice we found it extremely hard to leverage this potential. To the best of our knowledge, this is the first work that demonstrates a direct performance improvement on top-1 classification accuracy from using images and text on a large-scale classification problem.

We experimented with pre-defined policies that do not learn from the data. Specifically, we tried to average the logits, following (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), and to choose the network with the maximal confidence, following (Poria et al., 2016). Both of these experiments yielded significantly worse results, probably because the text network is much more accurate than the image one (Table 1). We also tried to learn policies from the text and/or the image input, using a policy network which is either a text CNN, a VGG network or a combination. However, all of these experiments resulted in policies that overfit the data and performed worse than the title model on the test data (Table 1). We also experimented with early stopping criteria, various regularization methods (dropout, l1, l2) and reduced model sizes, but none could make the policy network generalize.

Feature-level fusion: Training a CNN end-to-end can be tricky. For example, each input source has its own specific architecture, with a specific learning rate and optimization algorithm. We experimented with training the network end-to-end, but also with first training each part separately and then learning the concatenated parts. We tried different unification approaches such as gating functions (Srivastava et al., 2015), cross products and a different number of fully connected layers after the concatenation. These experiments resulted in models that were inferior to the text model. While this may seem surprising, the only successful feature-level fusion that we are aware of (Frome et al., 2013) was not able to gain an accuracy improvement on top-1 accuracy.
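For comparison, the multi-step feature-level fusion baseline of Section 3.4 (4) reduces to a classifier over the concatenated last hidden layers. A sketch follows, where the 384- and 4096-dimensional inputs match the text CNN and VGG sketches above, and the 512-unit hidden layer is our assumption (the paper does not give the width of its fully connected layers):

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Classifier over the fixed shared representation (Figure 2, left top)."""

    def __init__(self, text_dim: int = 384, image_dim: int = 4096,
                 hidden: int = 512, num_shelves: int = 2890):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_shelves),
        )

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # Features come from each pre-trained CNN's last hidden layer.
        return self.classifier(torch.cat([text_feat, image_feat], dim=1))
```

Note that, unlike the decision-level policy, this architecture needs both feature vectors, and hence both CNNs, for every product at inference time.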
5 CONCLUSIONS

In this work, we investigated a multi-modal multi-class multi-label product classification problem and presented results on a challenging real-world dataset that we collected from Walmart.com. We discovered that the text network outperforms the image network on our dataset, and observed a big potential in fusing text and image inputs. Finally, we suggested a multi-modal decision-level fusion approach that leverages state-of-the-art results from image and text classification and forges them into a multi-modal architecture that outperforms both.

State-of-the-art image CNNs are much larger than text CNNs, and take more time to train and to run. Thus, extracting image features at run time, or getting the image network's predictions, may be prohibitively expensive. In this context, an interesting observation is that feature-level fusion methods require using the image signal for every product, while decision-level fusion methods use the image network only selectively, making them more appealing. Moreover, our experiments suggest that decision-level fusion performs better than feature-level fusion in practice.

Finally, we were only able to realize a fraction of the potential of multi-modality. In the future, we plan to investigate deeper policy networks and more sophisticated measures of confidence. We also plan to investigate ensembles of image networks (Krizhevsky et al., 2012) and text networks (Pyo et al., 2010). We believe that the insights from training policy networks will eventually lead us to train end-to-end differentiable multi-modal networks. | r1slCpBVx | Practical large-scale multi-modal architecture but lacks technical novelty | 5: Marginally below acceptance threshold | This paper introduces a large-scale multi-modal product classification system. The model consists of three modules: an image CNN (VGG-16 architecture), a text CNN (Kim, 2014) and decision-level fusion policies. The authors have tried several fusion methods, including policies taking inputs from the text and image CNN probabilities, choosing either CNN, averaging the predictions, and end-to-end training. Experimental results show that the text CNN alone works better than the image CNN and that multi-modal fusion can improve the accuracy by a small margin. It is a little bit surprising that end-to-end feature-level fusion works worse than the text CNN alone. The writing is clear and there is a lot of useful practical experience with learning large-scale models. However, I lean toward rejecting the paper because of the following:
1) No other dataset is reported. The authors haven't mentioned releasing the Walmart dataset, and it is going to be really hard to reproduce the results without it.
2) Technical novelty is limited. All of the decision-level fusion policies have been investigated in previous work.
3) Performance gain is also limited. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJJ3YU5ge | ICLR.cc/2017/conference | 2017 | Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce | ["Tom Zahavy", "Alessandro Magnani", "Abhinandan Krishnan", "Shie Mannor"] | Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision-level fusion approach for multi-modal product classification using text and image inputs. We train input-specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy % over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc. | ["Multi-modal learning", "Deep learning"]
However, due to the high amount of new productsuploaded daily and the dynamic nature of the categories, machine learning solutions for productclassification are very appealing as means to reduce the time and economic costs. Thus, preciselycategorizing items emerges as a significant issue in e-commerce domains.A shelf is a group of products presented together on an e-commerce website page, and usuallycontain products with a given theme/category (e.g., Women boots, folding tables). Product to shelfclassification is a challenging problem due to data size, category skewness, and noisy metadataand labels. In particular, it presents three fundamental challenges for machine learning algorithms.First, it is typically a multi-class problem with thousands of classes. Second, a product may belongto multiple shelves making it a multi-label problem. And last, a product has both an image and atext input making it a multi-modal problem.Products classification is typically addressed as a text classification problem because most metadataof items are represented as textual features (Pyo et al., 2010). Text classification is a classic topicfor natural language processing, in which one needs to assign predefined categories to text inputs.1Under review as a conference paper at ICLR 2017Figure 1: Predicting shelves from product metadata obtained from Walmart.com. Left: productsthat have both an image and a title that contain useful information for predicting the product’s shelf.Center, top: the boots title gives specific information about the boots but does not mention that theproduct is a boot, making it harder to predict the shelf. Center, bottom: the baby toddler shirt’stitle is only refers to the text on the toddler shirt and does not mention that it is a product for babies.Right, top: the umbrella image contains information about its color but it is hard to understand thatthe image is referring to an umbrella. Right, bottom: the lips pencil image looks like a regularpencil, making it hard to predict that it belongs to the moisturizers shelf.Standard methods follow a classical two-stage scheme of extraction of (handcrafted) features, fol-lowed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF.On the other hand, Deep Neural Networks use generic priors instead of specific domain knowledge(Bengio et al., 2013) and have been shown to give competitive results on text classification tasks(Zhang et al., 2015). In particular, Convolutional neural networks (CNNs) (Kim, 2014; Zhang et al.,2015; Conneau et al., 2016) and Recurrent NNs (Lai et al., 2015; Pyo et al., 2010; Xiao & Cho,2016) can efficiently capture the sequentiality of the text. These methods are typically applied di-rectly to distributed embedding of words (Kim, 2014; Lai et al., 2015; Pyo et al., 2010) or characters(Zhang et al., 2015; Conneau et al., 2016; Xiao & Cho, 2016), without any knowledge on the syn-tactic or semantic structures of a language. However, all of these architectures were only appliedon problems with a small amount of labels ( 20) while e-commerce shelf classification problemstypically have thousands of labels with multiple labels per product.In Image classification, CNNs are widely considered the best models, and achieve state-of-the-art results on the ImageNet Large-Scale Visual Recognition Challenge (Russakovsky et al., 2015;Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015). 
However, as good as theyare, the classification accuracy of machine learning systems is often limited in problems with manyclasses of object categories. One remedy is to leverage data from other sources, such as text data.However, the studies on multi-modal deep learning for large-scale item categorization are still rare tothe best of our belief. In particular in a setting where there is a significant difference in discriminativepower between the two types of signals.In this work, we propose a multi-modal deep neural network model for product classification. Ourdesign principle is to leverage the specific prior for each data type by using the current state-of-2Under review as a conference paper at ICLR 2017the-art classifiers from the image and text domains. The final architecture has 3 main components(Figure 2, Right): a text CNN (Kim, 2014), an image CNN (Simonyan & Zisserman, 2014) anda policy network that learns to choose between them. We collected a large-scale data set of 1:2million products from the Walmart.com website. Each product has a title and an image and needs tobe classified to a shelf (label) with 2890 possible shelves. Examples from this dataset can be seenin Figure 1 and are also available on-line at the Walmart.com website. For most of the products,both the image and the title of each product contain relevant information for customers. However, itis interesting to observe that for some of the products, both input types may not be informative forshelf prediction (Figure 1). This observation motivates our work and raises interesting questions:which input type is more useful for product classification? is it possible to forge the inputs into abetter architecture?In our experiments, we show that the text CNN outperforms the image one. However, for a relativelylarge number of products ( 8%), the image CNN is correct while the text CNN is wrong, indicatinga potential gain from using a multi-modal architecture. We also show that the policy is able to choosebetween the two models and give a performance improvement over both state-of-the-art networks.To the best of our knowledge, this is the first work that demonstrates a performance improvementon top-1 classification accuracy by using images and text on a large-scale classification problem. Inparticular, our main contributions are:We demonstrate that the text classification CNN (Kim, 2014) outperforms the VGG net-work (Simonyan & Zisserman, 2014) on a real-world large-scale product to shelf classifi-cation problem.We analyze the errors made by the different networks and show the potential gain of multi-modality.We propose a novel decision-level fusion policy that learns to choose between the text andimage networks and improve over both.2 M ULTI -MODALITYOver the years, a large body of research has been devoted to improving classification using en-sembles of classifiers (Kittler et al., 1998; Hansen & Salamon, 1990). Inspired by their success,these methods have also been used in multi-modal settings (e.g.,Guillaumin et al. (2010); Poria et al.(2016)), where the source of the signals, or alternatively their modalities, are different. Some exam-ples include audio-visual speech classification (Ngiam et al., 2011), image and text retrieval (Kiroset al.), sentiment analysis and semi-supervised learning (Guillaumin et al., 2010).Combining classifiers from different input sources presents multiple challenges. 
First, classifiersvary in their discriminative power, thus, an optimal unification method should be able to adaptitself for specific combinations of classifiers. Second, different data sources have different state-of-the-art architectures, typically deep neural networks, which vary in depth, width, and optimizationalgorithm; making it non-trivial to merge them. Moreover, a multi-modal architecture potentiallyhas more local minima that may give unsatisfying results. Finally, most of the publicly availablereal-world big data classification datasets, an essential building block of deep learning systems,typically contain only one data type.Nevertheless, the potential performance boost of multi-modal architectures has motivated re-searchers over the years. Frome et al. (2013) combined an image network (Krizhevsky et al., 2012)with a Skip-gram Language Model in order to improve classification results on ImageNet. However,they were not able to improve the top-1 accuracy prediction, possibly because the text input theyused (image labels) didn’t contain a lot of information. Other works, used multi-modality to learngood embedding but did not present results on classification benchmarks (Lynch et al., 2015; Kiroset al.; Gong et al., 2014). Kannan et al. (2011) suggested to improve text-based product classifica-tion by adding an image signal, training an image classifier and learning a decision rule between thetwo. However, they only experimented with a small dataset and a low number of labels, and it isnot clear how to scale their method for extreme multi-class multi-label applications that characterizereal-world problems in e-commerce.3Under review as a conference paper at ICLR 2017PolicyT ext CNNVGG16T ext ImageClass probabilities Prediction Class probabilities PredictionFinal predictionImage Input T ext InputShared representationMulti-modallayersNetwork predictionImage Input T ext InputNetwork prediction Network predictionNetwork predictionPolicyInputFeature-level fusionDecision-level fusionFigure 2: Multi-modal fusion architectures.Left, top: Feature-level fusion. Each modality is processed in a different pipe. After a certaindepth, the pipes are concatenated followed by multi-modal layers. Left, bottom: Decision-levelfusion. Each modality is processed in a different pipe and gives a prediction. A policy network islearning to decide which classifier to use. Right: The proposed multi-modal architecture.Adding modalities can improve the classification of products that have a non-informative inputsource (e.g., image or text). In e-commerce, for example, classifiers that rely exclusively on textsuffer from short and non-informative titles, differences in style between vendors and overlappingtext across categories (i.e., a word that helps to classify a certain class may appear in other classes).Figure 1 presents a few examples of products that have only one informative input type. These ex-amples suggest that a multi-modal architecture can potentially outperform a classifier with a singleinput type.Most unification techniques for multi-modal learning are partitioned between feature-level fusiontechniques and decision-level fusion techniques (Figure 2, Left).2.1 F EATURE LEVEL FUSIONFeature-level fusion is characterized by three phases: (a) learning a representation, (b) supervisedtraining, and (c) testing. The different unification techniques are distinguished by the availabilityof the data in each phase (Guillaumin et al., 2010). 
For example, in cross-modality training, therepresentation is learned from all the modalities, but only one modality is available for supervisedtraining and testing. In other cases, all of the modalities are available at all stages but we may want(or not) to limit their usage given a certain budget. Another source for the distinction is the orderin which phases (a) and (b) are made. For example, one may first learn the representation and thenlearn a classifier from it, or learn both the representation and the classifier in parallel. In the deeplearning context, there are two common approaches. In the first approach, we learn an end-to-enddeep NN; the NN has multiple input-specific pipes that include a data source followed by inputspecific layers. After a certain depth, the pipes are concatenated followed by additional layers suchthat the NN is trained end-to-end. In the second approach, input specific deep NNs are learned first,and a multi-modal representation vector is created by concatenating the input specific feature vectors(e.g., the neural network’s last hidden layer). Then, an additional classifier learns to classify fromthe multi-modal representation vector. While multi-modal methods have shown potential to boostperformance on small datasets (Poria et al., 2016), or on top-k accuracy measures (Frome et al.,2013), we are not familiar with works that succeeded with applying it on a large-scale classificationproblem and received performance improvement in top-1 accuracy.2.2 D ECISION -LEVEL FUSIONIn this approach, an input specific classifier is learned for each modality, and the goal is to find adecision rule between them. The decision rule is typically a pre-defined rule (Guillaumin et al.,2010) and is not learned from the data. For example, Poria et al. (2016) chose the classifier with themaximal confidence, while Krizhevsky et al. (2012) average classifier predictions. However, in thiswork we show that learning the decision rule yields significantly better results on our data.4Under review as a conference paper at ICLR 20173 M ETHODS AND ARCHITECTURESIn this section, we give the details of our multi-modal product classification architecture. The ar-chitecture is composed of a text CNN and an image CNN which are forged together by a policynetwork, as can be seen in Figure 2, Right.3.1 M ULTI LABEL COST FUNCTIONOur cost function is the weighted sigmoid cross entropy with logits, a common cost function formulti-label problems. Let xbe the logits, zbe the targets, qbe a positive weight coefficient, used asa multiplier for the positive targets, and (x) =11+exp(x):The loss is given by:Cost(x,z;q) =qzlog((x))(1z)log(1(x)) =(1z)x+ (1 + (q1)z)log(1 +exp(x)):The positive coefficient q;allows one to trade off recall and precision by up- or down-weighting thecost of a positive error relative to a negative error. We found it to have a significant effect in practice.3.2 T EXT CLASSIFICATIONFor the text signal, we use the text CNN architecture of Kim (2014). The first layer embeds wordsinto low-dimensional vectors using random embedding (different than the original paper). The nextlayer performs convolutions over time on the embedded word vectors using multiple filter sizes (3,4 and 5), where we use 128filters from each size. Next, we max-pool-over-time the result of eachconvolution filter and concatenated all the results together. 
We add a dropout regularization layer(0.5 dropping rate), followed by a fully connected layer, and classify the result using a softmax layer.An illustration of the Text CNN can be seen in Figure 2.3.3 I MAGE CLASSIFICATIONFor the image signal, we use the VGG Network (Simonyan & Zisserman, 2014). The input to thenetwork is a fixed-size 224x224RGB image. The image is passed through a stack of convolutionallayers with a very small receptive field: 3x3. The convolution stride is fixed to 1pixel; the spatialpadding of the convolutional layer is 1pixel. Spatial pooling is carried out by five max-poolinglayers, which follow some of the convolutional layers. Max-pooling is performed over a 2x2pixelwindow, with stride 2. A stack of convolutional layers is followed by three Fully-Connected (FC)layers: the first two have 4096 channels each, the third performs 2890-way product classificationand thus contains 2890 channels (one for each class). All hidden layers are followed by a ReLunon-linearity. The exact details can be seen in Figure 2.3.4 M ULTI -MODAL ARCHITECTUREWe experimented with four types of multi-modal architectures. (1)Learning decision-level fusionpolicies from different inputs. (1a) Policies that use the text and image CNNs class probabilitiesas input (Figure 2). We experimented with architectures that have one or two fully connected layers(the two-layered policy is using 10hidden units and a ReLu non-linearity between them). (1b)Policies that use the text and/or image as input. For these policies, the architecture of policynetwork was either the text CNN or the VGG network. In order to train policies, labels are collectedfrom the image and text networks predictions, i.e., the label is 1if the image network made a correctprediction while the text network made a mistake, and 0otherwise. On evaluation, we use thepolicy predictions to select between the models, i.e., if the policy prediction is 1we use the imagenetwork, and use the text network otherwise. (2)Pre-defined policies that average the predictionsof the different CNNs or choose the CNN with the highest confidence. (3)End-to-end feature-levelfusion, each input type is processed by its specific CNN. We concatenate the last hidden layers of theCNNs and add one or two fully connected layers. All the layers are trained together end-to-end (wealso tried to initialize the input specific weights from pre-trained single-modal networks). (4)Multi-step feature-level fusion. As in (3), we create shared representation vector by concatenating the lasthidden layers. However, we now keep the shared representation fixed and learn a new classifier fromit.5Under review as a conference paper at ICLR 20174 E XPERIMENTS4.1 S ETUPOur dataset contains 1.2 million products (title image and shelf) that we collected from Walmart.com(offered online and can be viewed at the website) and were deemed the hardest to classify by thecurrent production system. We divide the data into training (1.1 million) validation (50k) and test(50k). We train both the image network and the text network on the training dataset and evaluatethem on the test dataset. The policy is trained on the validation dataset and is also evaluated onthe test dataset. The objective is to classify the product’s shelf, from 2890 possible choices. 
Eachproduct is typically assigned to more than one shelf (3 on average), and the network is consideredaccurate if its most probable shelf is one of them.4.2 T RAINING THE TEXT ARCHITECTUREPreprocess: we build a dictionary of all the words in the training data and embed each word using arandom embedding into a one hundred dimensional vector. We trim titles with more than 40 wordsand pad shorter titles with nulls.We experimented with different batch sizes, dropout rates, and filters stride, but found that the vanillaarchitecture (Kim, 2014) works well on our data. This is consistent with Zhang & Wallace (2015),who showed that text CNNs are not very sensitive to hyperparameters. We tuned the cost functionpositive coefficient parameter q;and found out that the value 30 performed best in practice (we willalso use this value for the image network). The best CNN that we trained classified 70:1%of theproducts from the test set correctly (Table 1).4.3 T RAINING THE IMAGE ARCHITECTUREPreprocess: we re-size all the images into 224 x 224 pixels and reduce the image mean.The VGG network that we trained classified 57% of the products from the test set correctly. This isa bit disappointing if we compare it to the performance of the VGG network on ImageNet ( 75%).There are a few differences between these two datasets that may explain this gap. First, our data has3 times more classes and contains multiple labels per image making the classification harder, andsecond, Figure 1 implies that some of our images are not informative for shelf classification. Someworks claim that the features learned by VGG on ImageNet are global feature extractors (Lynchet al., 2015). We therefore decided to use the weights learned by VGG on ImageNet and learn onlythe last layer. This configuration yielded only 36:7%accuracy. We believe that the reason is thatsome of the ImageNet classes are irrelevant for e-commerce (e.g., vehicles and animals) while somerelevant categories are misrepresented (e.g., electronics and office equipment). It could also be thatour images follow some specific pattern of white background, well-lit studio etc., that characterizese-commerce.4.4 E RROR ANALYSISIs a picture worth a thousand words? Inspecting Figure 3, we can see that the text network out-performed the image network on this dataset, classifying more products correctly. Similar resultswere reported before (Pyo et al., 2010; Kannan et al., 2011) but to the best of our knowledge, thisis the first work that compares state-of-the-art text and image CNNs on a real-world large-scale e-commerce dataset.What is the potential of multi-modality? We identified that for 7:8%of the products the image net-work made a correct prediction while the text network was wrong. This observation is encouragingsince it implies that there is a relative big potential to harness via multi-modality. We find this largegap surprising since different neural networks applied to the same problem tend to make the samemistakes (Szegedy et al., 2013).Unification techniques for multi-modal problems typically use the last hidden layer of each networkas features (Frome et al., 2013; Lynch et al., 2015; Pyo et al., 2010). We therefore decided to visual-ize the activations of this layer using a tSNE map (Maaten & Hinton, 2008). Figure 3, depicts sucha map for the activations of the text model (the image model yielded similar results). 
In particular,6Under review as a conference paper at ICLR 2017Title is correct, image is not: 21.9%Image is correct, title is not:7.8% Both models are wrong: 22.4%Both models are correct: 47.9%Figure 3: Error analysis using a tSNE map, created from the last hidden layer neural activations ofthe text model.we were looking for regions in the tSNE map where the image predictions are correct and the textis wrong (Figure 3, green). Finding such a region will imply that a policy network can learn gooddecision boundaries. However, we can see that there are no well-defined regions in the tSNE mapswhere the image network is correct and the title is wrong (green), thus implying that it might be hardto identify these products using the activations of the last layers.4.5 M ULTI -MODAL UNIFICATION TECHNIQUESOur error analysis experiment highlights the potential of merging image and text. Still, we foundit hard to achieve the upper bound provided by the error analysis in practice. We now describe thepolicies that managed to achieve performance boost in top-1 accuracy %over the text and imagenetworks, and then provide discussion on other approaches that we tried but didn’t work.Decision-level fusion: We trained policies from different data sources (e.g., title, image, and eachCNN class probabilities), using different architectures and different hyperparameters. Looking atTable 1, we can see that the best policies were trained using the class probabilities (the softmaxprobabilities) of the image and text CNNs as inputs. The amount of class probabilities that wereused (top-1, top-3 or all) did not have a significant effect on the results, indicating that the top-1probability contains enough information to learn good policies. This result makes sense since thetop-1 probability measures the confidence of the network in making a prediction. Still, the top-3probabilities performed slightly better, indicating that the difference between the top probabilitiesmay also matter. We can also see that the 2-layer architecture outperformed the 1-layer, indicatingthat a linear policy is too simple, and deeper models can yield better results. Last, the cost functionpositive coefficient q had a big impact on the results. We can see that for q= 1, the policy networkis more accurate in its prediction however it achieves worse results on shelf classification. For q= 5we get the best results, while higher values of q(e.g., 7or10) resulted in inaccurate policies that didnot perform well in practice.Policy input # layers q Text Image Policy Oracle Policy accuracyCP-1 1 5 70.1 56.7 71.4 (+1.3) 77.5 (+7.8) 86.4CP-1 2 5 70.1 56.6 71.5 (+1.4) 77.6 (+7.5) 84.2CP-all 2 5 70.1 56.6 71.4 (+1.3) 77.6 (+7.5) 84.6CP-3 2 5 70.2 56.7 71.8 (+1.6) 77.7 (+7.5) 84.2CP-3 2 1 70.2 56.7 70.2 (+0) 77.7 (+7.5) 92.5CP-3 2 7 70.0 56.6 71.0 (+1.0) 77.5 (+7.5) 79.1CP-3 2 10 70.1 56.6 70.7 (+0.6) 77.6 (+7.5) 75.0Image - 5 70.1 56.6 68.5(-1.6) 77.6 (+7.5) 80.3Text - 5 70.1 56.6 69.0 (-1.1) 77.6 (+7.5) 83.7Both - 5 70.1 56.6 66.1 (-4) 77.6 (+7.5) 73.7Fixed-Mean - - 70.1 56.7 65.4 (+0) 77.6 (+7.5) -Fixed-Max - - 70.1 56.7 60.1 (-10) 77.7 (+7.6) 38.2Table 1: Decision-level fusion results. Each row presents a different policy configuration (definedby the policy input, the number of layers and the value of q), followed by the accuracy %of theimage, text, policy and oracle (optimal policy) classifiers on the test dataset. 
While it may not seem surprising that combining text and image will improve accuracy, in practice we found it extremely hard to leverage this potential. To the best of our knowledge, this is the first work that demonstrates a direct performance improvement in top-1 classification accuracy from using images and text on a large-scale classification problem.

We experimented with pre-defined policies that do not learn from the data. Specifically, we tried to average the logits, following Krizhevsky et al. (2012) and Simonyan & Zisserman (2014), and to choose the network with the maximal confidence, following Poria et al. (2016). Both of these experiments yielded significantly worse results, probably since the text network is much more accurate than the image one (Table 1). We also tried to learn policies from the text and/or the image input, using a policy network which is either a text CNN, a VGG network, or a combination. However, all of these experiments resulted in policies that overfit the data and performed worse than the title model on the test data (Table 1). We also experimented with early stopping criteria, various regularization methods (dropout, l1, l2) and reduced model size, but none could make the policy network generalize.
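The Fixed-Mean and Fixed-Max rows of Table 1 correspond to these two non-learned policies. A minimal sketch, assuming `text_logits`/`image_logits` and `text_probs`/`image_probs` are (N, num_classes) score matrices from the two networks (names are ours):

```python
import numpy as np

def fixed_mean(text_logits, image_logits):
    # Averaging the two logit vectors; the argmax of the sum equals the
    # argmax of the mean, so the 1/2 factor is dropped.
    return np.argmax(text_logits + image_logits, axis=1)

def fixed_max(text_probs, image_probs):
    # Trust whichever network is more confident in its top-1 prediction.
    use_image = image_probs.max(axis=1) > text_probs.max(axis=1)
    return np.where(use_image,
                    image_probs.argmax(axis=1),
                    text_probs.argmax(axis=1))
```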
Feature-level fusion: Training a CNN end-to-end can be tricky. For example, each input source has its own specific architecture, with a specific learning rate and optimization algorithm. We experimented with training the network end-to-end, but also with first training each part separately and then learning the concatenated parts. We tried different unification approaches such as gating functions (Srivastava et al., 2015), cross products, and different numbers of fully connected layers after the concatenation. These experiments resulted in models that were inferior to the text model. While this may seem surprising, the only successful feature-level fusion that we are aware of (Frome et al., 2013) was not able to gain an improvement in top-1 accuracy.

5 CONCLUSIONS

In this work, we investigated a multi-modal multi-class multi-label product classification problem and presented results on a challenging real-world dataset that we collected from Walmart.com. We discovered that the text network outperforms the image network on our dataset, and observed a big potential in fusing text and image inputs. Finally, we suggested a multi-modal decision-level fusion approach that leverages state-of-the-art results from image and text classification and forges them into a multi-modal architecture that outperforms both.

State-of-the-art image CNNs are much larger than text CNNs, and take more time to train and to run. Thus, extracting image features at run time, or getting the image network's predictions, may be prohibitively expensive. In this context, an interesting observation is that feature-level fusion methods require using the image signal for each product, while decision-level fusion methods require using the image network only selectively, making them more appealing. Moreover, our experiments suggest that decision-level fusion performs better than feature-level fusion in practice.

Finally, we were only able to realize a fraction of the potential of multi-modality. In the future, we plan to investigate deeper policy networks and more sophisticated measures of confidence. We also plan to investigate ensembles of image networks (Krizhevsky et al., 2012) and text networks (Pyo et al., 2010). We believe that the insights from training policy networks will eventually lead us to train end-to-end differentiable multi-modal networks.

| S1--09r4l | Review | 5: Marginally below acceptance threshold | This paper tackles the problem of multi-modal classification of text and images.
Pros:
- Interesting dataset and application.
Cons:
- The results are rather lacklustre, showing a very mild improvement compared to the oracle improvement. But perhaps some insights as to whether the incorrect decisions are humanly possible would help with the significance of the results.
- Could have explored some intermediate architectures such as feature fusion + class probabilities with/without finetuning. There are no feature fusion results reported.
- No evaluation on standard datasets or comparison to previous works.
What is the policy learnt for CP-1? Given 2 input class probabilities, how does the network perform better than max or mean?
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJsiFTYex | ICLR.cc/2017/conference | 2017 | A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs | ["Shayne Longpre", "Sabeek Pradhan", "Caiming Xiong", "Richard Socher"] | LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of architectural modifications for LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, deep vector averaging (DVA), and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACT

LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model.

1 INTRODUCTION

When exploring a new problem, having a simple yet competitive off-the-shelf baseline is fundamental to new research. For instance, Caruana et al. (2008) showed random forests to be a strong baseline for many high-dimensional supervised learning tasks. For computer vision, off-the-shelf convolutional neural networks (CNNs) have earned their reputation as a strong baseline (Sharif Razavian et al., 2014) and basic building block for more complex models like visual question answering (Xiong et al., 2016). For natural language processing (NLP) and other sequential modeling tasks, recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, with a linear projection layer at the end have begun to attain a similar status. However, the standard LSTM is in many ways lacking as a baseline. Zaremba (2015), Gal (2015), and others show that large improvements are possible using a forget bias, inverted dropout regularization or bidirectionality. We add three major additions with similar improvements to off-the-shelf LSTMs: Monte Carlo model averaging, embed average pooling, and residual connections. We analyze these and other more common improvements.

2 LSTM NETWORK

LSTM networks are among the most commonly used models for tasks involving variable-length sequences of data, such as text classification.
The basic LSTM layer consists of six equations:

$$i_t = \tanh(W_i x_t + R_i h_{t-1} + b_i) \quad (1)$$
$$j_t = \sigma(W_j x_t + R_j h_{t-1} + b_j) \quad (2)$$
$$f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f) \quad (3)$$
$$o_t = \tanh(W_o x_t + R_o h_{t-1} + b_o) \quad (4)$$
$$c_t = i_t \odot j_t + f_t \odot c_{t-1} \quad (5)$$
$$h_t = o_t \odot \tanh(c_t) \quad (6)$$

where $\sigma$ is the sigmoid function, $\odot$ is element-wise multiplication, and $v_t$ is the value of variable $v$ at timestep $t$. Each layer receives $x_t$ from the layer that came before it and $h_{t-1}$ and $c_{t-1}$ from the previous timestep, and it outputs $h_t$ to the layer that comes after it and $h_t$ and $c_t$ to the next timestep. The $c$ and $h$ values jointly constitute the recurrent state of the LSTM that is passed from one timestep to the next. Since the $h$ value completely updates at each timestep while the $c$ value maintains part of its own value through multiplication by the forget gate $f$, $h$ and $c$ complement each other very well, with $h$ forming a "fast" state that can quickly adapt to new information and $c$ forming a "slow" state that allows information to be retained over longer periods of time (Zaremba, 2015). While various papers have tried to systematically experiment with the 6 core equations constituting an LSTM (Greff et al., 2015; Zaremba, 2015), in general the basic LSTM equations have proven extremely resilient and, if not optimal, at least a local maximum.
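For concreteness, a direct NumPy transcription of Equations 1-6 for a single timestep might look as follows. The parameter containers `W`, `R`, `b` are our own bookkeeping, not from the paper; note that the paper's Equation 4 applies tanh, not a sigmoid, to the output gate, and the sketch keeps that choice:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, R, b):
    """One timestep of one LSTM layer; W, R, b map gate names to matrices/vectors."""
    i = np.tanh(W["i"] @ x_t + R["i"] @ h_prev + b["i"])   # Eq. 1
    j = sigmoid(W["j"] @ x_t + R["j"] @ h_prev + b["j"])   # Eq. 2
    f = sigmoid(W["f"] @ x_t + R["f"] @ h_prev + b["f"])   # Eq. 3
    o = np.tanh(W["o"] @ x_t + R["o"] @ h_prev + b["o"])   # Eq. 4
    c = i * j + f * c_prev                                 # Eq. 5
    h = o * np.tanh(c)                                     # Eq. 6
    return h, c
```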
3 MONTE CARLO MODEL AVERAGING

It is common practice when applying dropout in neural networks to scale the weights up at train time (inverted dropout). This ensures that the expected magnitude of the inputs to any given layer are equivalent between train and test, allowing for an efficient computation of test-time predictions. However, for a model trained with dropout, test-time predictions generated without dropout merely approximate the ensemble of smaller models that dropout is meant to provide. A higher fidelity method requires that test-time dropout be conducted in a manner consistent with how the model was trained. To achieve this, we sample $k$ neural nets with dropout applied for each test example and average the predictions. With sufficiently large $k$ this Monte Carlo average should approach the true model average (Srivastava et al., 2014). We show in Figure 1 that this technique can yield more accurate predictions on test-time data than the standard practice. This is demonstrated over a number of datasets, suggesting its applicability to many types of sequential architectures. While running multiple Monte Carlo samples is more computationally expensive, the overall increase is minimal as the process is only run on test-time forward passes and is highly parallelizable. We show that higher performance can be achieved with relatively few Monte Carlo samples, and that this number of samples is similar across different NLP datasets and tasks.

[Figure 1: A comparison of the performance of Monte Carlo averaging, over sample size, to regular single-sample inverted dropout at test-time. Panel (a): Monte Carlo for SST fine-grained error; panel (b): Monte Carlo for IMDB binary error.]

We encountered one ambiguity of Monte Carlo model averaging that to our knowledge remains unaddressed in prior literature: there is relatively little exploration as to where and how the model averaging is most appropriately handled. We investigated averaging over the output of the final recurrent layer (just before the projection layer), over the output of the projection layer (the pre-softmax unnormalized logits), and over the post-softmax normalized probabilities, which is the approach taken by Gal (2015) for language modeling. We saw no discernible difference in performance between averaging the pre-projection and post-projection outputs. Averaging over the post-softmax probabilities showed marginal improvements over these two methods, but interestingly only for bidirectional models. We also explored using majority voting among the sampled models. This involves tallying the maximum post-softmax probabilities and selecting the class that received the most votes. This method differs from averaging the post-softmax probabilities in the same way max-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points well inside the decision boundary or the models that predicted a class with extremely high probability. With sufficiently large $k$, this voting method seemed to work best of the averaging methods we tried, and thus all of our displayed models use this technique. However, for classification problems with more classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality of class predictions. We conclude that the majority-vote Monte Carlo averaging method is preferable in the case where the ratio of Monte Carlo samples to the number of classification labels ($k$/output size) is large.

The Monte Carlo model averaging experiments, shown in Figure 1, were conducted as follows. We drew $k = 400$ separate test samples for each example, differentiated by their dropout masks. For each sample size $p$ (whose values, plotted on the x-axis, were in the range from 2 to 200 with step-size 2) we selected $p$ of our $k$ samples randomly without replacement and performed the relevant Monte Carlo averaging technique for that task, as discussed above. We do this $m = 20$ times for each point, to establish the mean and variance for that number of Monte Carlo iterations/samples $p$. The variance is used to visualize the 90% confidence interval in blue, while the red line denotes the test accuracy computed using the traditional approximation method (inverted dropout at train-time, and no dropout at test-time).
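A minimal sketch of the majority-vote variant described above, assuming a PyTorch-style classifier whose dropout layers are re-enabled at test time (`model` and `batch` are placeholders; if the model contained batch normalization, `model.train()` would also affect it, so a real implementation would enable only the dropout modules):

```python
import torch

@torch.no_grad()
def mc_majority_vote(model, batch, k=60, num_classes=5):
    """Tally the top-1 class over k stochastic forward passes, return the plurality class."""
    model.train()  # keeps nn.Dropout stochastic; no gradients are taken here
    votes = torch.zeros(batch.size(0), num_classes)
    for _ in range(k):
        pred = model(batch).argmax(dim=1)  # top-1 class per example for this dropout mask
        votes += torch.nn.functional.one_hot(pred, num_classes).float()
    return votes.argmax(dim=1)
```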
4 EMBED AVERAGE POOLING

Reliably retaining long-range information is a well-documented weakness of LSTM networks (Karpathy et al., 2015). This is especially the case for very long sequences like the IMDB sentiment dataset (Maas et al., 2011), where deep sequential models fail to capture uni- and bi-gram occurrences over long sequences. This is likely why n-gram based models, such as a bi-gram NBSVM (Wang and Manning, 2012), outperform RNN models on such datasets. It was shown by Iyyer et al. (2015) and others that for general NLP classification tasks, the use of a deep, unordered composition (or bag-of-words) of a sequence can yield strong results. Their solution, the deep averaging network (DAN), combines the observed effectiveness of depth with the unreasonable effectiveness of unordered representations of long sequences.

We suspect that the primary advantage of DANs is their ability to keep track of information that would have otherwise been forgotten by a sequential model, such as information early in the sequence for a unidirectional RNN or information in the middle of the sequence for a bidirectional RNN. Our embed average pooling supplements the bidirectional RNN with the information from a DAN at a relatively negligible computational cost.

[Figure 2: An illustration of the embed average pooling extension to a standard RNN model: the word vectors $w_1, \ldots, w_N$ are averaged as $\frac{1}{N}\sum_{i=1}^{N} w_i$ and passed through an MLP, whose output is concatenated to the final hidden state output by the RNN before the softmax.]

As shown in Figure 2, embed average pooling works by averaging the sequence of word vectors and passing this average through an MLP. The averaging is similar to an average pooling layer in a CNN (hence the name), but with the averaging being done temporally rather than spatially. The output of this MLP is concatenated to the final output of the RNN, and the combined vector is then passed into the projection and softmax layer. We apply the same dropout mask to the word vectors when passing them to the RNN as when averaging them, and we apply a different dropout mask on the output of the MLP. We experimented with applying the MLP before rather than after averaging the word vectors but found the latter to be most effective.
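A sketch of the mechanism, under the assumption that the shared input dropout mask has already been applied upstream to `word_vectors` before both the RNN and this module (class and argument names are ours; the 300/300 MLP dimensions follow Section 6.2):

```python
import torch
import torch.nn as nn

class EmbedAveragePooling(nn.Module):
    """Average word vectors over time, pass through an MLP, concat with RNN state."""

    def __init__(self, embed_dim=300, out_dim=300, p=0.5):
        super().__init__()
        # One hidden layer; hidden and output dimensions both 300.
        self.mlp = nn.Sequential(nn.Linear(embed_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))
        self.drop = nn.Dropout(p)  # the separate mask applied to the MLP output

    def forward(self, word_vectors, rnn_final_state):
        # word_vectors: (batch, seq_len, embed_dim), the same tensor fed to the RNN
        avg = word_vectors.mean(dim=1)        # temporal average pooling
        pooled = self.drop(self.mlp(avg))
        # Concatenated vector goes on to the projection/softmax layer.
        return torch.cat([rnn_final_state, pooled], dim=1)
```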
5 RESIDUAL CONNECTIONS

For feed-forward convolutional neural networks used in computer vision tasks, residual networks, or ResNets, have obtained state-of-the-art results (He et al., 2015). Rather than having each layer learn a wholly new representation of the data, as is customary for neural networks, ResNets have each layer (or group of layers) learn a residual which is added to the layer's input and then passed on to the next layer. More formally, if the input to a layer (or group of layers) is $x$ and the output of that layer (or group of layers) is $F(x)$, then the input to the next layer (or group of layers) is $x + F(x)$, whereas it would be $F(x)$ in a conventional neural network. This architecture allows the training of far deeper models. He et al. (2015) trained convolutional neural networks as deep as 151 layers, compared to 16 layers used in VGGNets (Simonyan and Zisserman, 2014) or 22 layers used in GoogLeNet (Szegedy et al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to build upon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried to create convincing theoretical reasons for ResNet's success (Liao and Poggio, 2016; Veit et al., 2016).

[Figure 3: An illustration of vertical (ResV) and lateral (ResL) residual connections added to a 3-layer RNN. A model with only vertical residuals is denoted "Res-V1", whereas a model with vertical and lateral residuals is denoted "Res-V2". Panel (a): Res-V1, vertical residual connections only; panel (b): Res-V2, vertical and lateral residual connections.]

We explored many different ways to incorporate residual connections in an RNN. The two most successful ones, which we call Res-V1 and Res-V2, are depicted in Figure 3. Res-V1 incorporates only vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With vertical residual connections, the input to a layer is added to its output and then passed to the next layer, as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the $h_t$ from the previous layer, with vertical residuals the input becomes the $h_t + x_t$ from the previous layer. This maintains many of the attractive properties of ResNets (e.g. unimpeded gradient flow across layers, adding/averaging the contributions of each layer) and thus lends itself naturally to deeper networks. However, it can interact unpredictably with the LSTM architecture, as the "fast" state of the LSTM no longer reflects the network's full representation of the data at that point. To mitigate this unpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections, the input to a layer is added to its output and then passed to the next timestep as the fast state of the LSTM. It is equivalent to replacing equation 6 with $h_t = o_t \odot \tanh(c_t) + x_t$. Thus, applying both vertical and lateral residuals ensures that the same value is passed both to the next layer as input and to the next timestep as the "fast" state.

In addition to these two, we explored various other, ultimately less successful, ways of adding residual connections to an LSTM, the primary one being horizontal residual connections. In this architecture, rather than adding the input from the previous layer to a layer's output, we added the fast state from the previous timestep. The hope was that adding residual connections across timesteps would allow information to flow more effectively across timesteps and thus improve the performance of RNNs that are deep across timesteps, much as ResNets do for networks that are deep across layers. Thus, we believed horizontal residual connections could solve the problem of LSTMs not learning long-term dependencies, the same problem we also hoped to mitigate with embed average pooling. Unfortunately, horizontal residuals failed, possibly because they blurred the distinction between the LSTM's "fast" state and "slow" state and thus prevented the LSTM from quickly adapting to new data. Alternate combinations of horizontal, vertical, and lateral residual connections were also experimented with but yielded poor results.
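Wrapped around a standard LSTM cell, the Res-V2 recurrence can be sketched as follows. This assumes the layer's input and hidden sizes match so the addition is well-defined; for Res-V1, one would instead pass the plain `h` to the next timestep and `h + x_t` only upward to the next layer:

```python
import torch

def res_v2_step(cell: torch.nn.LSTMCell, x_t, h_prev, c_prev):
    """One Res-V2 timestep: h_t = o_t * tanh(c_t) + x_t, used both up and across."""
    h, c = cell(x_t, (h_prev, c_prev))
    h = h + x_t  # vertical + lateral residual: the same value goes both ways
    return h, c
```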
6 EXPERIMENTAL RESULTS

6.1 DATASETS

We chose two commonly used benchmark datasets for our experiments: the Stanford Sentiment Treebank (SST) (Socher et al., 2013) and the IMDB sentiment dataset (Maas et al., 2011). This allowed us to compare the performance of our models to existing work and review the flexibility of our proposed model extensions across fairly disparate types of classification datasets. SST contains relatively well curated, short-sequence sentences, in contrast to IMDB's comparatively colloquial and lengthy sequences (some up to 2,000 tokens). To further differentiate the classification tasks we chose to experiment with fine-grained, five-class sentiment on SST, while IMDB only offered binary labels. For IMDB, we randomly split the training set of 25,000 examples into training and validation sets containing 22,500 and 2,500 examples respectively, as done in Maas et al. (2011).

6.2 METHODOLOGY

Our objective is to show a series of compounding extensions to the standard LSTM baseline that enhance accuracy. To ensure scientific reliability, the addition of each feature is the only change from the previous model (see Figures 4 and 5). The baseline model is a 2-layer stacked LSTM with hidden size 170 for SST and 120 for IMDB, as used in Tai et al. (2015). All models in this paper used publicly available 300-dimensional word vectors, pre-trained using GloVe on 840 million tokens of Common Crawl data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a learning rate of $10^{-4}$.

The first set of basic feature additions were adding a forget bias and using dropout. Adding a bias of 1.0 to the forget gate (i.e. adding 1.0 to the inside of the sigmoid function in equation 3) improves results across NLP tasks, especially for learning long-range dependencies (Zaremba, 2015). Dropout (Srivastava et al., 2014) is a highly effective regularizer for deep models. For SST and IMDB we used grid search to select dropout probabilities of 0.5 and 0.7 respectively, applied to the input of each layer, including the projection/softmax layer. While forget bias appears to hurt performance in Figure 5, the combination of dropout and forget bias yielded better results in all cases than dropout without forget bias. Our last two basic optimizations were increasing the hidden sizes and then adding shared-weight bidirectionality to the RNN. The hidden sizes for SST and IMDB were increased to 800 and 360 respectively; we found significantly diminishing returns to performance from increases beyond this. We chose shared-weight bidirectionality to ensure the model size did not increase any further. Specifically, the forward and backward weights are shared, and the input to the projection/softmax layer is a concatenation of the forward and backward passes' final hidden states.

All of our subsequent proposed model extensions are described at length in their own sections. For both datasets, we used 60 Monte Carlo samples, and the embed average pooling MLP had one hidden layer, with both its hidden dimension and its output dimension set to 300. Note that although the MLP weights increased the size of their respective models, this increase is negligible (equivalent to increasing the hidden size for SST from 800 to 804 or the hidden size of IMDB from 360 to 369), and we found that such a size increase had no discernible effect on accuracy when done without the embed average pooling.
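As a sketch of the forget-bias trick in PyTorch (an assumption about tooling, since the paper does not name its framework): `nn.LSTM` stores each bias vector in (input | forget | cell | output) gate order, so adding 1.0 to the second quarter of the input-to-hidden bias reproduces "adding 1.0 inside the sigmoid" of equation 3:

```python
import torch.nn as nn

def add_forget_bias(lstm: nn.LSTM, value: float = 1.0) -> None:
    """Add `value` to the forget-gate slice of every input-to-hidden bias."""
    for name, param in lstm.named_parameters():
        if name.startswith("bias_ih"):      # one such bias per layer/direction
            hidden = param.size(0) // 4     # gates are (input|forget|cell|output)
            param.data[hidden:2 * hidden] += value

# Baseline SST configuration from the paper: 2 stacked layers, hidden size 170.
# nn.LSTM's built-in dropout acts between layers; the paper's input/softmax
# dropout would be applied separately.
lstm = nn.LSTM(input_size=300, hidden_size=170, num_layers=2, dropout=0.5)
add_forget_bias(lstm)
```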
6.3 RESULTS

Since each of our proposed modifications operates independently, they are well suited to use in combination as well as in isolation. In Figures 4 and 5 we compound these features on top of the more traditional enhancements. Due to the expensiveness of bidirectional models, Figure 4 also shows these compounding features on SST with and without bidirectionality. The validation accuracy distributions show that each augmentation usually provides some small but noticeable improvement on the previous model, as measured by consistent improvements in mean and median accuracy.

[Figure 4: Box-plots showing the performance of compounding model features on fine-grained SST validation accuracy. Panel (a): compounding feature models on 5-class SST; panel (b): compounding feature models (minus bidirectional) for 5-class SST. Features compound in the order Baseline: 2-LSTM, + Forget Bias, + Dropout, + Hidden Size, (+ Bidirectional,) + Monte Carlo, + Embed Averaging, + Vertical Residual, + Lateral Residual. The red points, red lines, blue boxes, whiskers and plus-shaped points indicate the mean, median, quartiles, range, and outliers, respectively.]

We originally suspected that MC would provide marginal yet consistent improvements across datasets, while embed average pooling would especially excel for long sequences like in IMDB, where n-gram based models and deep unordered compositions have benefited from their ability to retain information from disparate parts of the text. The former hypothesis was largely confirmed. However, while embed average pooling was generally performance-enhancing, the performance boost it yielded for IMDB was not significantly larger than the one it yielded for SST, though that may have been because the other enhancements already encompassed most of the advantages provided by deep unordered compositions.

The only evident exceptions to the positive trend are the variations of residual connections. Which of Res-V1 (vertical only) and Res-V2 (vertical and lateral) outperformed the other depended on the dataset and whether the network was bidirectional. The Res-V2 architecture dominated in experiments 4b and 5, while the Res-V1 (only vertical residuals) architecture is most performant in Figure 4a. This suggests that for short sequences, bidirectionality and lateral residuals conflict.

[Figure 5: Box-plots showing the performance of compounding model features on binary IMDB validation accuracy, with the same feature ordering as Figure 4a.]

[Figure 6: Comparing the effects of layer depth between Vanilla RNNs, Res-V1 and Res-V2 models on fine-grained sentiment classification (SST). As we increase the layers, we decrease the hidden size to maintain equivalent model sizes. The points indicate average validation accuracy, while the shaded regions indicate 90% confidence intervals.]

Further analysis of the effect of residual connections and model depth can be found in Figure 6. In that figure, the number of parameters, and hence model size, is kept uniform by modifying the hidden size as the layer depth changes. The hidden sizes used for 1-, 2-, 4-, 6-, and 8-layer models were 250, 170, 120, 100, and 85 respectively, maintaining 550,000 total parameters for all models.
As the graph demonstrates, normal LSTMs ("Vanilla") perform drastically worse as they become deeper and narrower, while Res-V1 and Res-V2 both see their performance stay much steadier or even briefly rise. While depth wound up being far from a panacea for the datasets we experimented on, the ability of an LSTM with residual connections to maintain its performance as it gets deeper holds promise for other domains where the extra expressive power provided by depth might prove more crucial.

Model                                      # Params (M)   Train Time / Epoch (sec)   Test Acc (%)
RNTN (Socher et al., 2013)                 -              -                          45.7
CNN-MC (Kim, 2014)                         -              -                          47.4
DRNN (Irsoy and Cardie, 2014)              -              -                          49.8
CT-LSTM (Tai et al., 2015)                 0.317          -                          51.0
DMN (Kumar et al., 2016)                   -              -                          52.1
NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016)   -              -                          53.1
Baseline 2-LSTM                            0.553          2,100                      46.4
Large 2-LSTM                               8.650          3,150                      48.7
Bi-2-LSTM                                  8.650          6,100                      50.9
Bi-2-LSTM+MC+Pooling+ResV                  8.740          8,050                      52.2
2-LSTM+MC+Pooling+ResV+ResL                8.740          4,800                      51.6

Table 1: Test performance on the Stanford Sentiment Treebank (SST) sentiment classification task.

Model                                                         # Params (M)   Train Time / Epoch (sec)   Test Acc (%)
SVM-bi (Wang and Manning, 2012)                               -              -                          89.2
DAN-RAND (Iyyer et al., 2015)                                 -              -                          88.8
DAN (Iyyer et al., 2015)                                      -              -                          89.4
NBSVM-bi (Wang and Manning, 2012)                             -              -                          91.2
NBSVM-tri, RNN, Sentence-Vec Ensemble (Mesnil et al., 2014)   -              -                          92.6
Baseline 2-LSTM                                               0.318          1,800                      85.3
Large 2-LSTM                                                  2.00           2,500                      87.6
Bi-2-LSTM                                                     2.00           5,100                      88.9
Bi-2-LSTM+MC+Pooling+ResV+ResL                                2.08           5,500                      90.1

Table 2: Test performance on the IMDB sentiment classification task.

Selecting the best results for each model, we see results competitive with state-of-the-art performance for both IMDB and SST, even though many state-of-the-art models use either parse-tree information (Tai et al., 2015), multiple passes through the data (Kumar et al., 2016) or tremendous train and test-time computational and memory expenses (Le and Mikolov, 2014). (For IMDB, we benchmark only against results obtained from training exclusively on the labeled training set; thus, we omit results from unsupervised models that leveraged the additional 50,000 unlabeled examples, such as Miyato et al. (2016).) To our knowledge, our models constitute the best performance of purely sequential, single-pass, and computationally feasible models, precisely the desired features of a solid out-of-the-box baseline. Furthermore, for SST, the compounding enhancement model without bidirectionality, the final model shown in Figure 4b, greatly exceeded the performance of the large bidirectional model (51.6% vs 50.9%), with significantly less training time (Table 1). This suggests our enhancements could provide a similarly reasonable and efficient alternative to shared-weight bidirectionality for other such datasets.

7 CONCLUSION

We explore several easy-to-implement enhancements to the basic LSTM network that positively impact performance. These include both fairly well established extensions (biasing the forget gate, dropout, increasing the model size, bidirectionality) and several more novel ones (Monte Carlo model averaging, embed average pooling, residual connections). We find that these enhancements improve the performance of the LSTM in classification tasks, both in conjunction and in isolation, with an accuracy close to state of the art despite being more lightweight and using less information than the current state-of-the-art models. Our results suggest that these extensions should be incorporated into LSTM baselines.
| S1FjsGfNe | 5: Marginally below acceptance threshold | I agree with the other reviewer that the application areas are limited in the paper. I agree with the overall sentiment of the paper to evaluate the effectiveness of some of the more recent techniques in this area, in conjunction with the recurrent networks.
The paper advertises itself as a method (or a list of methods) of improving the recurrent baselines when performing experiments; however, it fails (or is not shown) to generalize to other tasks. The effectiveness of these methods needs to be shown across a wide variety of tasks if we intend to replace traditional baselines in general, rather than a specific subset of applications.
I like the desire to evaluate many of the recent techniques and the many replications of experiments towards this end (which is a strong point of the paper). However, we cannot see from these results whether some of the enhancements have synergies with sentiment analysis or not. It would be interesting to see whether some of these results generalize across a wide variety of tasks. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
rJsiFTYex | ICLR.cc/2017/conference | 2017 | A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs
| r1pM1KGVe | review | 5: Marginally below acceptance threshold | This paper presents three improvements to the standard LSTM architecture used in many neural NLP models: Monte Carlo averaging, embed average pooling, and residual connections. Each of the modifications is trivial to implement, so the paper is definitely of interest to any NLP researchers experimenting with deep learning.
With that said, I am concerned about the experiments and their results. The residual connections do not seem to consistently help performance; on SST the vertical residuals help but the lateral residuals hurt, and on IMDB it is the opposite. More fundamentally, there need to be more tasks than just sentiment analysis here. I'm not quite sure why the paper's focus is on text classification, as any NLP task using an LSTM encoder could conceivably benefit from these modifications. It would be great to see a huge variety of tasks like QA, MT, etc., which would really make the paper much stronger.
At this point, while the experiments that are included in the paper are very thorough and the analysis is interesting, there need to be more tasks to convince me that the modifications generalize, so I don't think the paper is ready for publication. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJsiFTYex | ICLR.cc/2017/conference | 2017 | A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs | ["Shayne Longpre", "Sabeek Pradhan", "Caiming Xiong", "Richard Socher"] | LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of architectural modifications for LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, deep vector averaging (DVA), and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACT

LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model.

1 INTRODUCTION

When exploring a new problem, having a simple yet competitive off-the-shelf baseline is fundamental to new research. For instance, Caruana et al. (2008) showed random forests to be a strong baseline for many high-dimensional supervised learning tasks. For computer vision, off-the-shelf convolutional neural networks (CNNs) have earned their reputation as a strong baseline (Sharif Razavian et al., 2014) and basic building block for more complex models like visual question answering (Xiong et al., 2016). For natural language processing (NLP) and other sequential modeling tasks, recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, with a linear projection layer at the end have begun to attain a similar status. However, the standard LSTM is in many ways lacking as a baseline. Zaremba (2015), Gal (2015), and others show that large improvements are possible using a forget bias, inverted dropout regularization, or bidirectionality. We add three major additions with similar improvements to off-the-shelf LSTMs: Monte Carlo model averaging, embed average pooling, and residual connections. We analyze these and other more common improvements.

2 LSTM NETWORK

LSTM networks are among the most commonly used models for tasks involving variable-length sequences of data, such as text classification.
The basic LSTM layer consists of six equations:

$i_t = \tanh(W_i x_t + R_i h_{t-1} + b_i)$   (1)
$j_t = \sigma(W_j x_t + R_j h_{t-1} + b_j)$   (2)
$f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f)$   (3)
$o_t = \tanh(W_o x_t + R_o h_{t-1} + b_o)$   (4)
$c_t = i_t \odot j_t + f_t \odot c_{t-1}$   (5)
$h_t = o_t \odot \tanh(c_t)$   (6)

where $\sigma$ is the sigmoid function, $\odot$ is element-wise multiplication, and $v_t$ is the value of variable $v$ at timestep $t$. Each layer receives $x_t$ from the layer that came before it and $h_{t-1}$ and $c_{t-1}$ from the previous timestep, and it outputs $h_t$ to the layer that comes after it and $h_t$ and $c_t$ to the next timestep. The $c$ and $h$ values jointly constitute the recurrent state of the LSTM that is passed from one timestep to the next. Since the $h$ value completely updates at each timestep while the $c$ value maintains part of its own value through multiplication by the forget gate $f$, $h$ and $c$ complement each other very well, with $h$ forming a "fast" state that can quickly adapt to new information and $c$ forming a "slow" state that allows information to be retained over longer periods of time (Zaremba, 2015). While various papers have tried to systematically experiment with the 6 core equations constituting an LSTM (Greff et al., 2015; Zaremba, 2015), in general the basic LSTM equations have proven extremely resilient and, if not optimal, at least a local maximum.

Figure 1: A comparison of the performance of Monte Carlo averaging, over sample size, to regular single-sample inverted dropout at test-time. (a) Monte Carlo for SST fine-grained error (5-class error rate vs. number of Monte Carlo samples). (b) Monte Carlo for IMDB binary error (binary error rate vs. number of Monte Carlo samples).

3 MONTE CARLO MODEL AVERAGING

It is common practice when applying dropout in neural networks to scale the weights up at train time (inverted dropout). This ensures that the expected magnitude of the inputs to any given layer are equivalent between train and test, allowing for an efficient computation of test-time predictions. However, for a model trained with dropout, test-time predictions generated without dropout merely approximate the ensemble of smaller models that dropout is meant to provide. A higher fidelity method requires that test-time dropout be conducted in a manner consistent with how the model was trained. To achieve this, we sample $k$ neural nets with dropout applied for each test example and average the predictions. With sufficiently large $k$ this Monte Carlo average should approach the true model average (Srivastava et al., 2014). We show in Figure 1 that this technique can yield more accurate predictions on test-time data than the standard practice. This is demonstrated over a number of datasets, suggesting its applicability to many types of sequential architectures. While running multiple Monte Carlo samples is more computationally expensive, the overall increase is minimal as the process is only run on test-time forward passes and is highly parallelizable. We show that higher performance can be achieved with relatively few Monte Carlo samples, and that this number of samples is similar across different NLP datasets and tasks.

We encountered one ambiguity of Monte Carlo model averaging that to our knowledge remains unaddressed in prior literature: there is relatively little exploration as to where and how the model averaging is most appropriately handled.
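Before turning to where the averaging is applied (discussed next), the basic procedure can be sketched as follows. This is a minimal NumPy illustration of our own, not the authors' code: the linear "network" is a stand-in for the full LSTM-plus-projection stack, and the majority-vote scheme anticipates the variant ultimately adopted below.

```python
import numpy as np

def mc_dropout_predict(W, b, x, keep_prob=0.5, k=60, rng=None):
    """Monte Carlo test-time averaging: run k forward passes, each with a
    fresh dropout mask applied exactly as during (inverted-dropout)
    training, then majority-vote over the predicted classes."""
    rng = rng or np.random.default_rng(0)
    votes = np.zeros(W.shape[0], dtype=int)
    for _ in range(k):
        # Same inverted-dropout scaling as train time keeps input
        # magnitudes consistent between training and these test samples.
        mask = (rng.random(x.shape) < keep_prob) / keep_prob
        logits = W @ (x * mask) + b
        votes[np.argmax(logits)] += 1  # tally the max-probability class
    return np.argmax(votes)

# Toy usage: a 5-class "model" on a 10-dimensional input. The paper's
# experiments below use k = 60 samples.
rng = np.random.default_rng(1)
W, b, x = rng.normal(size=(5, 10)), np.zeros(5), rng.normal(size=10)
print(mc_dropout_predict(W, b, x, keep_prob=0.5, k=60, rng=rng))
```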
We investigated averaging over the output of the final recurrent layer (just before the projection layer), over the output of the projection layer (the pre-softmax unnormalized logits), and over the post-softmax normalized probabilities, which is the approach taken by Gal (2015) for language modeling. We saw no discernible difference in performance between averaging the pre-projection and post-projection outputs. Averaging over the post-softmax probabilities showed marginal improvements over these two methods, but interestingly only for bidirectional models. We also explored using majority voting among the sampled models. This involves tallying the maximum post-softmax probabilities and selecting the class that received the most votes. This method differs from averaging the post-softmax probabilities in the same way max-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points well inside the decision boundary or the models that predicted a class with extremely high probability. With sufficiently large $k$, this voting method seemed to work best of the averaging methods we tried, and thus all of our displayed models use this technique. However, for classification problems with more classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality of class predictions. We conclude that the majority-vote Monte Carlo averaging method is preferable in the case where the ratio of Monte Carlo samples to the number of classification labels ($k$/output size) is large.

Figure 2: An illustration of the embed average pooling extension to a standard RNN model. The output of the multilayer perceptron is concatenated to the final hidden state output by the RNN.

The Monte Carlo model averaging experiments, shown in Figure 1, were conducted as follows. We drew $k = 400$ separate test samples for each example, differentiated by their dropout masks. For each sample size $p$ (whose values, plotted on the x-axis, were in the range from 2 to 200 with step-size 2) we selected $p$ of our $k$ samples randomly without replacement and performed the relevant Monte Carlo averaging technique for that task, as discussed above. We do this $m = 20$ times for each point, to establish the mean and variance for that number of Monte Carlo iterations/samples $p$. The variance is used to visualize the 90% confidence interval in blue, while the red line denotes the test accuracy computed using the traditional approximation method (inverted dropout at train-time, and no dropout at test-time).

4 EMBED AVERAGE POOLING

Reliably retaining long-range information is a well documented weakness of LSTM networks (Karpathy et al., 2015). This is especially the case for very long sequences like the IMDB sentiment dataset (Maas et al., 2011), where deep sequential models fail to capture uni- and bi-gram occurrences over long sequences. This is likely why n-gram based models, such as a bi-gram NBSVM (Wang and Manning, 2012), outperform RNN models on such datasets. It was shown by Iyyer et al. (2015) and others that for general NLP classification tasks, the use of a deep, unordered composition (or bag-of-words) of a sequence can yield strong results.
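The mechanism the authors build on this idea, embed average pooling, is described in the next paragraphs; as a concrete preview, here is a minimal NumPy sketch of it. This is our own illustration under stated assumptions: the tanh nonlinearity and the weight shapes are illustrative choices, not taken from the paper.

```python
import numpy as np

def embed_average_pool(word_vecs, W1, b1, W2, b2, rnn_final):
    """Average the word vectors over time, pass the average through a
    one-hidden-layer MLP, and concatenate the result with the RNN's
    final hidden state before the projection/softmax layer."""
    avg = word_vecs.mean(axis=0)          # temporal average of embeddings
    hidden = np.tanh(W1 @ avg + b1)       # MLP hidden layer (assumed tanh)
    pooled = np.tanh(W2 @ hidden + b2)    # MLP output, e.g. 300-dim
    return np.concatenate([rnn_final, pooled])

# Toy usage: 7 tokens of 300-dim embeddings and a 50-dim RNN state.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(7, 300))
W1, b1 = 0.01 * rng.normal(size=(300, 300)), np.zeros(300)
W2, b2 = 0.01 * rng.normal(size=(300, 300)), np.zeros(300)
print(embed_average_pool(vecs, W1, b1, W2, b2, rng.normal(size=50)).shape)
# -> (350,): the concatenated vector fed to the projection/softmax layer
```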
Their solution, the deep averaging network (DAN), combines the observed effectiveness of depth with the unreasonable effectiveness of unordered representations of long sequences.

We suspect that the primary advantage of DANs is their ability to keep track of information that would have otherwise been forgotten by a sequential model, such as information early in the sequence for a unidirectional RNN or information in the middle of the sequence for a bidirectional RNN. Our embed average pooling supplements the bidirectional RNN with the information from a DAN at a relatively negligible computational cost.

Figure 3: An illustration of vertical (ResV) and lateral residual (ResL) connections added to a 3-layer RNN. (a) Res-V1: vertical residual connections only. (b) Res-V2: vertical and lateral residual connections. A model with only vertical residuals is denoted Res-V1, whereas a model with vertical and lateral residuals is denoted "Res-V2".

As shown in Figure 2, embed average pooling works by averaging the sequence of word vectors and passing this average through an MLP. The averaging is similar to an average pooling layer in a CNN (hence the name), but with the averaging being done temporally rather than spatially. The output of this MLP is concatenated to the final output of the RNN, and the combined vector is then passed into the projection and softmax layer. We apply the same dropout mask to the word vectors when passing them to the RNN as when averaging them, and we apply a different dropout mask on the output of the MLP. We experimented with applying the MLP before rather than after averaging the word vectors but found the latter to be most effective.

5 RESIDUAL CONNECTIONS

For feed-forward convolutional neural networks used in computer vision tasks, residual networks, or ResNets, have obtained state of the art results (He et al., 2015). Rather than having each layer learn a wholly new representation of the data, as is customary for neural networks, ResNets have each layer (or group of layers) learn a residual which is added to the layer's input and then passed on to the next layer. More formally, if the input to a layer (or group of layers) is $x$ and the output of that layer (or group of layers) is $F(x)$, then the input to the next layer (or group of layers) is $x + F(x)$, whereas it would be $F(x)$ in a conventional neural network. This architecture allows the training of far deeper models. He et al. (2015) trained convolutional neural networks as deep as 151 layers, compared to 16 layers used in VGGNets (Simonyan and Zisserman, 2014) or 22 layers used in GoogLeNet (Szegedy et al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to build upon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried to create convincing theoretical reasons for ResNet's success (Liao and Poggio, 2016; Veit et al., 2016).

We explored many different ways to incorporate residual connections in an RNN. The two most successful ones, which we call Res-V1 and Res-V2, are depicted in Figure 3.
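Both variants are defined precisely in the next paragraph; as a preview, a minimal NumPy sketch of one step of each (our own illustration, not the authors' code) is given below. It assumes the input and hidden widths match, as they do between equal-sized stacked layers, and follows Equations 1-6.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_residual(x, h_prev, c_prev, p, lateral=False):
    """One LSTM step (Eqs. 1-6) with residual output.
    Res-V1 (lateral=False): the residual x is added only to the value
    passed up to the next layer; the recurrent fast state stays h.
    Res-V2 (lateral=True): Eq. 6 becomes h = o * tanh(c) + x, so the
    same residual value is both the next-layer input and the fast state.
    p is a dict of the weight matrices and biases from Eqs. 1-4."""
    i = np.tanh(p['Wi'] @ x + p['Ri'] @ h_prev + p['bi'])        # Eq. 1
    j = sigmoid(p['Wj'] @ x + p['Rj'] @ h_prev + p['bj'])        # Eq. 2
    f = sigmoid(p['Wf'] @ x + p['Rf'] @ h_prev + p['bf'] + 1.0)  # Eq. 3, +1.0 forget bias (Sec. 6.2)
    o = np.tanh(p['Wo'] @ x + p['Ro'] @ h_prev + p['bo'])        # Eq. 4
    c = i * j + f * c_prev                                       # Eq. 5
    h = o * np.tanh(c)                                           # Eq. 6
    if lateral:                 # Res-V2
        h = h + x
        return h, h, c          # (to next layer, fast state, slow state)
    return h + x, h, c          # Res-V1: vertical residual only
```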
Res-V1 incorporates only vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With vertical residual connections, the input to a layer is added to its output and then passed to the next layer, as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the $h_t$ from the previous layer, with vertical residuals the input becomes the $h_t + x_t$ from the previous layer. This maintains many of the attractive properties of ResNets (e.g. unimpeded gradient flow across layers, adding/averaging the contributions of each layer) and thus lends itself naturally to deeper networks. However, it can interact unpredictably with the LSTM architecture, as the "fast" state of the LSTM no longer reflects the network's full representation of the data at that point. To mitigate this unpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections, the input to a layer is added to its output and then passed to the next timestep as the fast state of the LSTM. It is equivalent to replacing equation 6 with $h_t = o_t \odot \tanh(c_t) + x_t$. Thus, applying both vertical and lateral residuals ensures that the same value is passed both to the next layer as input and to the next timestep as the "fast" state.

In addition to these two, we explored various other, ultimately less successful, ways of adding residual connections to an LSTM, the primary one being horizontal residual connections. In this architecture, rather than adding the input from the previous layer to a layer's output, we added the fast state from the previous timestep. The hope was that adding residual connections across timesteps would allow information to flow more effectively across timesteps and thus improve the performance of RNNs that are deep across timesteps, much as ResNets do for networks that are deep across layers. Thus, we believed horizontal residual connections could solve the problem of LSTMs not learning long-term dependencies, the same problem we also hoped to mitigate with embed average pooling. Unfortunately, horizontal residuals failed, possibly because they blurred the distinction between the LSTM's "fast" state and "slow" state and thus prevented the LSTM from quickly adapting to new data. Alternate combinations of horizontal, vertical, and lateral residual connections were also experimented with but yielded poor results.

6 EXPERIMENTAL RESULTS

6.1 DATASETS

We chose two commonly used benchmark datasets for our experiments: the Stanford Sentiment Treebank (SST) (Socher et al., 2013) and the IMDB sentiment dataset (Maas et al., 2011). This allowed us to compare the performance of our models to existing work and review the flexibility of our proposed model extensions across fairly disparate types of classification datasets. SST contains relatively well curated, short-sequence sentences, in contrast to IMDB's comparatively colloquial and lengthy sequences (some up to 2,000 tokens). To further differentiate the classification tasks we chose to experiment with fine-grained, five-class sentiment on SST, while IMDB only offered binary labels. For IMDB, we randomly split the training set of 25,000 examples into training and validation sets containing 22,500 and 2,500 examples respectively, as done in Maas et al. (2011).

6.2 METHODOLOGY

Our objective is to show a series of compounding extensions to the standard LSTM baseline that enhance accuracy. To ensure scientific reliability, the addition of each feature is the only change from the previous model (see Figures 4 and 5).
The baseline model is a 2-layer stacked LSTM with hidden size 170 for SST and 120 for IMDB, as used in Tai et al. (2015). All models in this paper used publicly available 300-dimensional word vectors, pre-trained using GloVe on 840 billion tokens of Common Crawl data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a learning rate of $10^{-4}$.

The first set of basic feature additions were adding a forget bias and using dropout. Adding a bias of 1.0 to the forget gate (i.e. adding 1.0 to the inside of the sigmoid function in equation 3) improves results across NLP tasks, especially for learning long-range dependencies (Zaremba, 2015). Dropout (Srivastava et al., 2014) is a highly effective regularizer for deep models. For SST and IMDB we used grid search to select dropout probabilities of 0.5 and 0.7 respectively, applied to the input of each layer, including the projection/softmax layer. While forget bias appears to hurt performance in Figure 5, the combination of dropout and forget bias yielded better results in all cases than dropout without forget bias. Our last two basic optimizations were increasing the hidden sizes and then adding shared-weight bidirectionality to the RNN. The hidden sizes for SST and IMDB were increased to 800 and 360 respectively; we found significantly diminishing returns to performance from increases beyond this. We chose shared-weight bidirectionality to ensure the model size did not increase any further. Specifically, the forward and backward weights are shared, and the input to the projection/softmax layer is a concatenation of the forward and backward passes' final hidden states.

All of our subsequent proposed model extensions are described at length in their own sections. For both datasets, we used 60 Monte Carlo samples, and the embed average pooling MLP had one hidden layer with both a hidden dimension and an output dimension of 300. Note that although the MLP weights increased the size of their respective models, this increase is negligible (equivalent to increasing the hidden size for SST from 800 to 804 or the hidden size of IMDB from 360 to 369), and we found that such a size increase had no discernible effect on accuracy when done without the embed average pooling.

6.3 RESULTS

Since each of our proposed modifications operates independently, they are well suited to use in combination as well as in isolation. In Figures 4 and 5 we compound these features on top of the more traditional enhancements. Due to the expensiveness of bidirectional models, Figure 4 also shows these compounding features on SST with and without bidirectionality.
The validation accuracy distributions show that each augmentation usually provides some small but noticeable improvement on the previous model, as measured by consistent improvements in mean and median accuracy.

Figure 4: These box-plots show the performance of compounding model features on fine-grained SST validation accuracy (y-axis: 5-class validation accuracy; x-axis features: Baseline 2-LSTM, +Forget Bias, +Dropout, +Hidden Size, +Bidirectional, +Monte Carlo, +Embed Averaging, +Vertical Residual, +Lateral Residual). (a) Compounding feature models on 5-class SST. (b) Compounding feature models (minus bidirectional) for 5-class SST. The red points, red lines, blue boxes, whiskers and plus-shaped points indicate the mean, median, quartiles, range, and outliers, respectively.

We originally suspected that MC would provide marginal yet consistent improvements across datasets, while embed average pooling would especially excel for long sequences like in IMDB, where n-gram based models and deep unordered compositions have benefited from their ability to retain information from disparate parts of the text. The former hypothesis was largely confirmed. However, while embed average pooling was generally performance-enhancing, the performance boost it yielded for IMDB was not significantly larger than the one it yielded for SST, though that may have been because the other enhancements already encompassed most of the advantages provided by deep unordered compositions.

The only evident exceptions to the positive trend are the variations of residual connections. Which of Res-V1 (vertical only) and Res-V2 (vertical and lateral) outperformed the other depended on the dataset and whether the network was bidirectional. The Res-V2 architecture dominated in Figures 4b and 5, while the Res-V1 (only vertical residuals) architecture is most performant in Figure 4a. This suggests that for short sequences, bidirectionality and lateral residuals conflict. Further analysis of the effect of residual connections and model depth can be found in Figure 6. In that figure, the number of parameters, and hence model size, are kept uniform by modifying the hidden size as the layer depth changed. The hidden sizes used for 1-, 2-, 4-, 6-, and 8-layer models were 250, 170, 120, 100, and 85 respectively, maintaining 550,000 total parameters for all models.

Figure 5: These box-plots show the performance of compounding model features on binary IMDB validation accuracy (same x-axis feature progression as Figure 4a).

Figure 6: Comparing the effects of layer depth between Vanilla RNNs, Res-V1 and Res-V2 models on fine-grained sentiment classification (SST). As we increase the layers, we decrease the hidden size to maintain equivalent model sizes. The points indicate average validation accuracy, while the shaded regions indicate 90% confidence intervals.
Table 1: Test performance on the Stanford Sentiment Treebank (SST) sentiment classification task.

Model                                    | # Params (M) | Train Time / Epoch (sec) | Test Acc (%)
RNTN (Socher et al., 2013)               | --    | --    | 45.7
CNN-MC (Kim, 2014)                       | --    | --    | 47.4
DRNN (Irsoy and Cardie, 2014)            | --    | --    | 49.8
CT-LSTM (Tai et al., 2015)               | 0.317 | --    | 51.0
DMN (Kumar et al., 2016)                 | --    | --    | 52.1
NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016) | --    | --    | 53.1
Baseline 2-LSTM                          | 0.553 | 2,100 | 46.4
Large 2-LSTM                             | 8.650 | 3,150 | 48.7
Bi-2-LSTM                                | 8.650 | 6,100 | 50.9
Bi-2-LSTM+MC+Pooling+ResV                | 8.740 | 8,050 | 52.2
2-LSTM+MC+Pooling+ResV+ResL              | 8.740 | 4,800 | 51.6

Table 2: Test performance on the IMDB sentiment classification task.

Model                                                       | # Params (M) | Train Time / Epoch (sec) | Test Acc (%)
SVM-bi (Wang and Manning, 2012)                             | --    | --    | 89.2
DAN-RAND (Iyyer et al., 2015)                               | --    | --    | 88.8
DAN (Iyyer et al., 2015)                                    | --    | --    | 89.4
NBSVM-bi (Wang and Manning, 2012)                           | --    | --    | 91.2
NBSVM-tri, RNN, Sentence-Vec Ensemble (Mesnil et al., 2014) | --    | --    | 92.6
Baseline 2-LSTM                                             | 0.318 | 1,800 | 85.3
Large 2-LSTM                                                | 2.00  | 2,500 | 87.6
Bi-2-LSTM                                                   | 2.00  | 5,100 | 88.9
Bi-2-LSTM+MC+Pooling+ResV+ResL                              | 2.08  | 5,500 | 90.1

As the graph demonstrates, normal LSTMs ("Vanilla") perform drastically worse as they become deeper and narrower, while Res-V1 and Res-V2 both see their performance stay much steadier or even briefly rise. While depth wound up being far from a panacea for the datasets we experimented on, the ability of an LSTM with residual connections to maintain its performance as it gets deeper holds promise for other domains where the extra expressive power provided by depth might prove more crucial.

Selecting the best results for each model, we see results competitive with state-of-the-art performance for both IMDB[1] and SST, even though many state-of-the-art models use either parse-tree information (Tai et al., 2015), multiple passes through the data (Kumar et al., 2016), or tremendous train- and test-time computational and memory expenses (Le and Mikolov, 2014). To our knowledge, our models constitute the best performance of purely sequential, single-pass, and computationally feasible models, precisely the desired features of a solid out-of-the-box baseline. Furthermore, for SST, the compounding enhancement model without bidirectionality, the final model shown in Figure 4b, greatly exceeded the performance of the large bidirectional model (51.6% vs. 50.9%), with significantly less training time (Table 1). This suggests our enhancements could provide a similarly reasonable and efficient alternative to shared-weight bidirectionality for other such datasets.

7 CONCLUSION

We explore several easy-to-implement enhancements to the basic LSTM network that positively impact performance. These include both fairly well established extensions (biasing the forget gate, dropout, increasing the model size, bidirectionality) and several more novel ones (Monte Carlo model averaging, embed average pooling, residual connections). We find that these enhancements improve the performance of the LSTM in classification tasks, both in conjunction and in isolation, with an accuracy close to the state of the art despite being more lightweight and using less information than the current state-of-the-art models. Our results suggest that these extensions should be incorporated into LSTM baselines.

[1] For IMDB, we benchmark only against results obtained from training exclusively on the labeled training set. Thus, we omit results from unsupervised models that leveraged the additional 50,000 unlabeled examples, such as Miyato et al. (2016).
| rJLoZtr4g | official review | 5: Marginally below acceptance threshold | The paper proposes and analyses three methods applied to traditional LSTMs: Monte Carlo test-time model averaging, average pooling, and residual connections. It shows that those methods help to enhance traditional LSTMs on sentiment analysis.
Although the paper is well written, the experiment section is definitely its weak point. Firstly, although it shows some improvements over traditional LSTMs, those results are not on par with the state of the art. Secondly, if the purpose is to take those extensions as strong baselines for further research, the experiments are not adequate: the two datasets used are quite similar (though they have different statistics). I thus suggest carrying out more experiments on more diverse tasks, like those in "LSTM: A Search Space Odyssey".
Besides, those extensions are not really novel. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Byiy-Pqlx | ICLR.cc/2017/conference | 2017 | Lie-Access Neural Turing Machines | ["Greg Yang", "Alexander Rush"] |
External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACT

External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks.

1 INTRODUCTION

Recent work on neural Turing machines (NTMs) (Graves et al., 2014; 2016) and memory networks (MemNNs) (Weston et al., 2014) has repopularized the use of explicit external memory in neural networks and demonstrated that these networks can be effectively trained in an end-to-end fashion. These methods have been successfully applied to question answering (Weston et al., 2014; Sukhbaatar et al., 2015; Kumar et al., 2015), algorithm learning (Graves et al., 2014; Kalchbrenner et al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Zaremba & Sutskever, 2015; Grefenstette et al., 2015; Joulin & Mikolov, 2015), machine translation (Kalchbrenner et al., 2015), and other tasks. This methodology has the potential to extend deep networks in a general-purpose way beyond the limitations of fixed-length encodings such as standard recurrent neural networks (RNNs).

A shared theme in many of these works (and earlier exploration of neural memory) is to re-frame traditional memory access paradigms to be continuous and possibly differentiable to allow for backpropagation. In MemNNs, traditional random-access memory is replaced with a ranking approach that finds the most likely memory. In the work of Grefenstette et al. (2015), classical stack-, queue-, and deque-based memories are replaced by soft-differentiable stack, queue, and deque data structures. In NTMs, sequential local-access memory is simulated by an explicit tape data structure.

This work questions the assumption that neural memory should mimic the structure of traditional discrete memory. We argue that a neural memory should provide the following: (A) differentiability for end-to-end training and (B) robust relative indexing (perhaps in addition to random access). Surprisingly, many neural memory systems fail one of these conditions, either lacking Criterion B, discussed below, or employing extensions like REINFORCE to work around lack of differentiability (Zaremba & Sutskever, 2015).

We propose instead a class of memory access techniques based around Lie groups, i.e. groups with differentiable operations, which provide a natural structure for neural memory access. By definition, their differentiability satisfies the concerns of Criterion A.
Additionally, the group axioms provide identity, invertibility, and associativity, all of which are desirable properties for a relative indexing scheme (Criterion B), and all of which are satisfied by standard Turing machines. Notably though, simple group properties like invertibility are not satisfied by neural Turing machines, differentiable neural computers, or even by simple soft-tape machines. In short, in our method, we construct memory systems with keys placed on a manifold, and where relative access operations are provided by Lie groups.

To experiment with this approach, we implement a neural Turing machine with an LSTM controller and several versions of Lie-access memory, which we call Lie-access neural Turing machines (LANTM). The details of these models are exhibited in Section 4.[1] Our main experimental results are presented in Section 5. The LANTM model is able to learn non-trivial algorithmic tasks such as copying and permuting sequences with higher accuracy than more traditional memory-based approaches, and significantly better than fixed-memory LSTM models. The memory structures and key transformations learned by the model resemble interesting continuous-space representations of traditional discrete memory data structures.

2 BACKGROUND: RECURRENT NEURAL NETWORKS WITH MEMORY

This work focuses particularly on recurrent neural network (RNN) controllers of abstract neural memories. Formally, an RNN is a differentiable function $\mathrm{RNN}: \mathcal{X} \times \mathcal{H} \to \mathcal{H}$, where $\mathcal{X}$ is an arbitrary input space and $\mathcal{H}$ is the hidden state space. On input $(x^{(1)}, \ldots, x^{(T)}) \in \mathcal{X}^T$ and with initial state $h^{(0)} \in \mathcal{H}$, the RNN produces states $h^{(1)}, \ldots, h^{(T)}$ based on the recurrence,

$h^{(t)} := \mathrm{RNN}(x^{(t)}, h^{(t-1)}).$

These states can be used for downstream tasks, for example sequence prediction which produces outputs $(y^{(1)}, \ldots, y^{(T)})$ based on an additional transformation and prediction layer $y^{(t)} = F(h^{(t)})$, such as a linear layer followed by a softmax. RNNs can be trained end-to-end by backpropagation-through-time (BPTT) (Werbos, 1990). In practice, we use long short-term memory (LSTM) RNNs (Hochreiter & Schmidhuber, 1997). LSTM's hidden state consists of two variables $(c^{(t)}, h^{(t)})$, where $h^{(t)}$ is also the output to the external world; we however use the above notation for simplicity.

An RNN can also serve as the controller for an external memory system (Graves et al., 2014; Grefenstette et al., 2015; Zaremba & Sutskever, 2015), which enables: (1) the entire system to carry state over time from both the RNN and the external memory, and (2) the RNN controller to collect readings from and compute additional instructions to the external memory. Formally, we extend the recurrence to,

$h^{(t)} := \mathrm{RNN}([x^{(t)}; \rho^{(t-1)}], h^{(t-1)}),$
$\Sigma^{(t)}, \rho^{(t)} := \mathrm{RW}(\Sigma^{(t-1)}, h^{(t)}),$

where Σ is the abstract memory state, ρ^(t) is the value read from memory, and h is used as an abstract controller command to a read/write function RW. Writing occurs in the mutation of Σ at each time step. Throughout this work, Σ will take the form of an ordered set {(k_i, v_i, s_i)}_i where k_i ∈ K is an arbitrary key, v_i ∈ R^m is a memory value, and s_i ∈ R^+ is a memory strength.

In order for the model to be trainable with backpropagation, the memory function RW must also be differentiable. Several forms of differentiable memory have been proposed in the literature. We begin by describing two simple forms: (neural) random-access memory and (neural) tape-based memory.
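Before the two forms are described, the shared controller-memory recurrence above can be sketched as a skeleton. This is a minimal NumPy illustration of our own; `rnn_step` and `read_write` are hypothetical placeholders for a concrete controller (e.g. an LSTM step) and a concrete differentiable memory module.

```python
import numpy as np

def run_memory_rnn(rnn_step, read_write, xs, h0, sigma0, rho0):
    """Skeleton of the recurrence
        h_t = RNN([x_t; rho_{t-1}], h_{t-1})
        Sigma_t, rho_t = RW(Sigma_{t-1}, h_t).
    `rnn_step(inp, h)` is the controller and `read_write(sigma, h)` is
    any differentiable memory module (tape, RAM, or Lie-access)."""
    h, sigma, rho, hs = h0, sigma0, rho0, []
    for x in xs:
        h = rnn_step(np.concatenate([x, rho]), h)  # controller sees last read
        sigma, rho = read_write(sigma, h)          # memory mutates; new read value
        hs.append(h)
    return hs, sigma, rho
```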
For this section, we focus on the read step and assume Σ is fixed.

Random-Access Memory. Random-access memory consists of using a now-standard attention mechanism or MemNN to read a memory (our description follows Miller et al. (2016)). The controller hidden state is used to output a random-access pointer, q'(h), that determines a weighting of memory vectors via dot products with the corresponding keys. This weighting in turn determines the read values via linear smoothing based on a function w,

$w_i(q, \Sigma) := \frac{s_i \exp\langle q, k_i \rangle}{\sum_j s_j \exp\langle q, k_j \rangle}, \qquad \rho := \sum_i w_i(q'(h), \Sigma)\, v_i.$

The final read memory is based on how "close" the read pointer was to each of the keys, where closeness in key space is determined by w.

[1] Our implementations are available at https://github.com/harvardnlp/lie-access-memory

Tape-Based Memory. Neural memories can also be extended to support relative access by maintaining read state. Following notation from Turing machines, we call this state the head, q. In the simplest case the recurrence now has the form,

$\Sigma', q', \rho = \mathrm{RW}(\Sigma, q, h),$

and this can be extended to support multiple heads.

In the simplest case of soft tape-based memory (a naive version of the much more complicated neural Turing machine), the keys k_i indicate one-hot positions along a tape, with $k_i = \delta_i$. The head q is a probability distribution over tape positions. It determines the read value by directly specifying the weights. The controller can only "shift" the head by outputting a kernel $K(h) = (K_{-1}, K_0, K_{+1})$ in the probability simplex $\Delta^2$ and applying convolution,

$q'(q, h) := q * K(h), \quad \text{i.e.} \quad q'_j = q_{j-1} K_{+1} + q_j K_0 + q_{j+1} K_{-1}.$

We can view this as the soft version of a single-step discrete Turing machine where the kernel can softly shift the "head" of the machine one to the left, one to the right, or remain in the same location. The value returned can then be computed with linear smoothing as above,

$w_i(q, \Sigma) := \frac{s_i \langle q, k_i \rangle}{\sum_j s_j \langle q, k_j \rangle}, \qquad \rho := \sum_i w_i(q'(q, h), \Sigma)\, v_i.$

3 LIE GROUPS FOR MEMORY

Let us now take a brief digression and consider the standard (non-neural) Turing machine (TM) and the movement of its head over a tape. A TM has a head $q \in \mathbb{Z}$ indicating the position on a tape. Between reads, the head can move any number of steps left or right. Moving $a + b$ steps and then $c$ steps eventually puts the head at the same location as moving $a$ steps and then $b + c$ steps — i.e. the head movement is associative. In addition, the machine should be able to reverse a head shift, for example, in a stack simulation algorithm, going from push to pop — i.e. each head movement should also have a corresponding inverse. Finally, the head should also be allowed to stay put, for example, to read a single data item and use it for multiple time points — an identity.

These movements correspond directly to group actions: the possible head movements should be associative, and contain inverse and identity elements. This group acts on the set of possible head locations. In a TM, the set of Z-valued head movements acts on the set of locations on the Z-indexed infinite tape. By our reasoning above, if a Turing machine is to store data contents at points in a general space K (instead of an infinite Z-indexed tape), then its head movements should form a group and act on K via group actions.

For a neural memory system, we desire the network to be (almost everywhere) differentiable. The notion of "differentiable" groups is well-studied in mathematics, where they are known as Lie groups, and "differentiable group actions" are correspondingly called Lie group actions.
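Before moving from tape heads to Lie-group heads, here is a minimal NumPy sketch of the soft tape-based scheme of Section 2 — the baseline that the Lie-access construction below generalizes. It is our own illustration; in particular, handling the tape edges by zero padding is an assumption the text leaves open.

```python
import numpy as np

def tape_shift(q, kernel):
    """Soft head shift q'_j = q_{j-1} K_{+1} + q_j K_0 + q_{j+1} K_{-1},
    with kernel = (K_{-1}, K_0, K_{+1}) in the probability simplex and q
    a distribution over tape positions (zero padding at the edges)."""
    k_m1, k_0, k_p1 = kernel
    q_prev = np.concatenate(([0.0], q[:-1]))   # q_{j-1}
    q_next = np.concatenate((q[1:], [0.0]))    # q_{j+1}
    return q_prev * k_p1 + q * k_0 + q_next * k_m1

def smoothed_read(q, keys, values, strengths):
    """Linear smoothing: w_i proportional to s_i <q, k_i>; rho = sum_i w_i v_i."""
    scores = strengths * (keys @ q)
    w = scores / scores.sum()
    return w @ values

# Toy usage: 5 one-hot tape cells (keys = identity), head at cell 2,
# one soft shift right, then a read.
q = tape_shift(np.eye(5)[2], (0.0, 0.0, 1.0))   # mass moves to cell 3
print(q, smoothed_read(q, np.eye(5), np.arange(25.0).reshape(5, 5), np.ones(5)))
```

Note that the convolutional shift has no exact inverse in general — precisely the deficiency, discussed in footnote 2 below, that motivates the group-theoretic view.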
In our case, using Lie group actions as generalized head movements on a general key space (more accurately, manifolds) would most importantly mean that we can take derivatives of these movements and perform the usual backpropagation algorithm.

4 LIE-ACCESS NEURAL TURING MACHINES

These properties motivate us to propose Lie access as an alternative formalism to popular neural memory systems, such as probabilistic tapes, which surprisingly do not satisfy invertibility and often do not provide an identity.[2] Our Lie-access memory will consist of a set of points in a manifold K. We replace the discrete head with a continuous head q ∈ K. The head moves based on a set of Lie group actions a ∈ A generated by the controller. To read memories, we will rely on a distance measure in this space, $d: \mathcal{K} \times \mathcal{K} \to \mathbb{R}_{\geq 0}$. Together these properties describe a general class of possible neural memory architectures.

Formally, a Lie-access neural Turing machine (LANTM) computes the following function,

$\Sigma', q', q'^{(w)}, \rho := \mathrm{RW}(\Sigma, q, q^{(w)}, h),$

where q, q^(w) ∈ K are resp. read and write heads, and Σ is the memory itself. We implement Σ, as above, as a weighted dictionary Σ = {(k_i, v_i, s_i)}_i.

4.1 ADDRESSING PROCEDURE

The LANTM maintains a read head q which at every step is first updated to q' and then used to read from the memory table. This update occurs by selecting a Lie group action from A which then acts smoothly on the key space K. We parametrize the action transformation, $a: \mathcal{H} \mapsto \mathcal{A}$, by the hidden state to produce the Lie action, a(h) ∈ A. In the simplest case, the head is then updated based on this action (here · denotes group action): $q' := a(h) \cdot q$.

For instance, consider two possible Lie groups:

(1) A shift group $\mathbb{R}^2$ acting additively on $\mathbb{R}^2$. This means that $\mathcal{A} = \mathbb{R}^2$, so that $a(h) = (\alpha, \beta)$ acts upon a head $q = (x, y)$ by

$a(h) \cdot q = (\alpha, \beta) + (x, y) = (x + \alpha, y + \beta).$

(2) A rotation group SO(3) acting on the sphere $S^2 = \{v \in \mathbb{R}^3 : \|v\| = 1\}$. Each rotation can be described by its axis θ (a unit vector) and angle φ. An action (θ, φ) · q is just the appropriate rotation of the point q, and is given by Rodrigues' rotation formula,

$a(h) \cdot q = (\theta, \phi) \cdot q = q \cos\phi + (\theta \times q) \sin\phi + \theta \langle \theta, q \rangle (1 - \cos\phi).$

Here × denotes the cross product.

4.2 READING AND WRITING MEMORIES

Recall that memories are stored in Σ, each with a key, k_i, memory vector, v_i, and strength, s_i, and that memories are read using linear smoothing over vectors based on a key weighting function w, $\rho := \sum_i w_i(q', \Sigma)\, v_i$. While there are many possible weighting schemes, we use one based on the distance of each memory address from the head in key space, assuming a metric d on K.[3] We consider two different weighting functions: (1) inverse-square and (2) softmax.

[2] The Markov kernel convolutional soft head shift mechanism proposed in Graves et al. (2014) and sketched in Section 2 does not in general have inverses. Indeed, the authors reported problems with the soft head losing "sharpness" over time, which they dealt with by sharpening coefficients. In the followup work, Graves et al. (2016) utilize a temporal memory link matrix for actions. They note, "the operation $L\mathbf{w}$ smoothly shifts the focus forwards to the locations written ... whereas $L^\top \mathbf{w}$ shifts the focus backwards," but do not enforce this as a true inverse. They also explicitly do not include an identity, noting "Self-links are excluded (the diagonal of the link matrix is always 0)"; however, they could ignore the link matrix with an interpolation gate, which in effect acts as the identity.
The first uses a polynomial law and the second an annealed softmax of the squared distances:

$w^{(1)}_i(q, \Sigma) := \frac{s_i\, d(q, k_i)^{-2}}{\sum_j s_j\, d(q, k_j)^{-2}}, \qquad w^{(2)}_i(q, \Sigma, T) := \frac{s_i \exp(-d(q, k_i)^2 / T)}{\sum_j s_j \exp(-d(q, k_j)^2 / T)},$

where we use the convention that it takes the limit value when q → k_i, and T is a temperature that represents the certainty of its reading, i.e. higher T creates more uniform w.

Figure 1: Retrieval of value from memory via a key. Weightings with unit sum are assigned to different memories depending on the distances from the addresses to the read key. Linear smoothing over values is used to emit the final read value. Both inverse-square and softmax schemes follow this method, but differ in their computations of the weightings.

The writing procedure is similar to reading. The LANTM maintains a separate write head q^(w) that moves analogously to the read head, i.e. with action function a^(w)(h) and updated value q'^(w). At each call to RW, a new memory is automatically appended to Σ with k = q'^(w). The corresponding memory v and strength s are created by MLPs $v(h) \in \mathbb{R}^m$ and $s(h) \in [0, 1]$ taking h as input. After writing, the new memory set is

$\Sigma' := \Sigma \cup \{(q'^{(w)}, v(h), s(h))\}.$

No explicit erase mechanism is provided, but to erase a memory (k, v, s), the controller may in theory write (k, −v, s).

4.3 COMBINING WITH RANDOM ACCESS

Finally, we combine this relative addressing procedure with direct random access to give the model the ability for absolute address access. We do this by outputting an absolute address each step and simply interpolating with our current head. Write $t(h) \in [0, 1]$ for the interpolation gate and $\tilde{q}(h) \in \mathcal{K}$ for our proposed random-access layer. For key space manifolds K like $\mathbb{R}^n$,[4] there is a well defined straight-line interpolation between two points, so we can set

$q' := a \cdot (t q + (1 - t)\tilde{q}),$

where we have omitted the implied dependence on h. For other manifolds like the spheres $S^n$ that have well-behaved projection functions $\pi: \mathbb{R}^n \to S^n$, we can just project the straight-line interpolation to the sphere:

$q' := a \cdot \pi(t q + (1 - t)\tilde{q}).$

In the case of a sphere $S^n$, π is just L2-normalization.[5]

[3] This metric should satisfy a compatibility relation with the Lie group action. When points x, y ∈ X are simultaneously moved by the same Lie group action v, their distance should stay the same (one possible mathematical formalization is that X should be a Riemannian manifold and the Lie group should be a subgroup of X's isometry group): $d(v \cdot x, v \cdot y) = d(x, y)$. This condition ensures that if the machine writes a sequence of data along a "straight line" at points $x, vx, v^2 x, \ldots, v^k x$, then it can read the same sequence by emitting a read location y close to x and then following the "v-trail" $y, vy, v^2 y, \ldots, v^k y$.

5 EXPERIMENTS

We experiment with Lie-access memory on a variety of algorithmic learning tasks. We are particularly interested in: (a) how Lie-access memory can be trained, (b) whether it can be effectively utilized for algorithmic learning, and (c) what internal structures the model learns compared to systems based directly on soft discrete memory. In particular, Lie access is not equipped with an explicit stack or tape, so it would need to learn continuous patterns that capture these properties.

Setup. Our experiments utilize an LSTM controller in a version of the encoder-decoder setup (Sutskever et al., 2014), i.e. an encoding input pass followed by a decoding output pass. The encoder reads and writes memories at each step; the decoder only reads memories.
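(The experimental setup continues below.) To ground the addressing and reading steps of Sections 4.1-4.2, here is a minimal NumPy sketch of the R^2 shift action with the inverse-square weighting. It is our own illustration, and the epsilon guarding the q → k_i limit is our implementation choice.

```python
import numpy as np

def shift_action(a, q):
    """R^2 shift group: action a = (alpha, beta) moves head q additively."""
    return q + a

def inverse_square_read(q, keys, values, strengths, eps=1e-12):
    """w_i proportional to s_i d(q, k_i)^(-2) with Euclidean d;
    rho = sum_i w_i v_i. eps approximates the limit value as q -> k_i."""
    d2 = np.sum((keys - q) ** 2, axis=1) + eps
    w = strengths / d2
    w = w / w.sum()
    return w @ values

# Toy usage: memories written along a straight line; reading walks the
# same shift "trail" and recovers them in order (cf. footnote 3).
keys = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
values, strengths = np.eye(3), np.ones(3)   # distinguishable payloads
q = np.array([0.0, 0.0])
for _ in range(3):
    print(inverse_square_read(q, keys, values, strengths).round(3))
    q = shift_action(np.array([1.0, 0.0]), q)  # repeat the same action v
```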
The encoder is given ⟨s⟩, followed by the input sequence, and then ⟨/s⟩ to terminate input. The decoder is not re-fed its output or the correct symbol, i.e. we do not use teacher forcing, so x^(t) is a fixed placeholder input symbol. The decoder must correctly emit an end-of-output symbol ⟨/e⟩ to terminate.

[4] Or in general, manifolds with convex embeddings in $\mathbb{R}^n$.
[5] Technically, in the sphere case, $\mathrm{dom}\, \pi = \mathbb{R}^d \setminus \{0\}$. But in practice one almost never gets 0 from a straight-line interpolation, so computationally this makes little difference.

Models and Baselines. We implement three main baseline models including: (a) a standard LSTM encoder-decoder, without explicit external memory, (b) a random-access memory network, RAM, using the key-value formulation as described in the background, roughly analogous to an attention-based encoder-decoder, and (c) an interpolation of a RAM/tape-based memory network as described in the background, i.e. a highly simplified version of a true NTM (Graves et al., 2014) with a sharpening parameter. Our models include four versions of Lie-access memory. The main model, LANTM, has an LSTM controller, with a shift group $\mathcal{A} = \mathbb{R}^2$ acting additively on key space $\mathcal{K} = \mathbb{R}^2$. We also consider a model SLANTM with spherical memory, utilizing a rotation group $\mathcal{A} = \mathrm{SO}(3)$ acting on keys in the sphere $\mathcal{K} = S^2$. For both of the models, the distance function d is the Euclidean (L2) distance, and we experiment with smoothing using inverse-square (default) and with an annealed softmax.[6]

Model Setup. For all tasks, the LSTM baseline has 1 to 4 layers, each with 256 cells. Each of the other models has a single-layer, 50-cell LSTM controller, with memory width (i.e. the size of each memory vector) 20. Other parameters such as learning rate, decay, and initialization are found through grid search. Further hyperparameter details are given in the appendix.

Tasks. Our experiments are on a series of algorithmic tasks shown in Table 1a. The COPY, REVERSE, and BIGRAM FLIP tasks are based on Grefenstette et al. (2015); the DOUBLE and INTERLEAVED ADD tasks are designed in a similar vein. Additionally we also include three harder tasks: ODD FIRST, REPEAT COPY, and PRIORITY SORT. In ODD FIRST, the model must output the odd-indexed elements first, followed by the even-indexed elements. In REPEAT COPY, each model must repeat a sequence of length 20, N times. In PRIORITY SORT, each item of the input sequence is given a priority, and the model must output them in priority order.

We train each model in two regimes, one with a small number of samples (16K) and one with a large number of samples (320K). In the former case, the samples are iterated through 20 times, while in the latter, the samples are iterated through only once. Thus in both regimes, the total training times are the same. Training is done by minimizing negative log likelihood with RMSProp.

Prediction is performed via argmax/greedy prediction at each step. To evaluate the performance of the models, we compute the fraction of tokens correctly predicted and the fraction of all answers completely correctly predicted, respectively called fine and coarse scores. We assess the models on 3.2K randomly generated out-of-sample 2x-length examples, i.e. with sequence lengths 2k (or repeat number 2N in the case of REPEAT COPY), to test the generalization of the system. More precisely, for all tasks other than repeat copy, during training, the length k is varied in the interval $[l_k, u_k]$ (as shown in Table 1a).
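(Test-time ranges follow.) The fine and coarse scores just defined are simple to compute; a minimal sketch of our own, which assumes predictions are padded or truncated to the gold length:

```python
import numpy as np

def fine_coarse_scores(predictions, targets):
    """Fine score: fraction of tokens correctly predicted.
    Coarse score: fraction of answers predicted entirely correctly.
    Both inputs are lists of equal-length integer token sequences."""
    token_hits = token_total = exact = 0
    for pred, gold in zip(predictions, targets):
        pred, gold = np.asarray(pred), np.asarray(gold)
        token_hits += int((pred == gold).sum())
        token_total += gold.size
        exact += int(np.array_equal(pred, gold))
    return token_hits / token_total, exact / len(targets)

# Toy usage: one fully correct answer, one with a single wrong token.
print(fine_coarse_scores([[1, 2, 3], [4, 5, 0]], [[1, 2, 3], [4, 5, 6]]))
# -> (0.833..., 0.5)
```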
During test time, the length k is varied in the range $[u_k + 1, 2u_k]$. For repeat copy, the repetition number N is varied similarly, instead of k.

Results. Main results comparing the different memory systems and read computations on a series of tasks are shown in Table 1b. Consistent with previous work, the fixed-memory LSTM system fails consistently when required to generalize to the 2x samples, unable to solve any 2x problem correctly, and only able to predict at most 50% of the symbols for all tasks except interleaved addition, regardless of training regime. The RAM (attention-based) and the RAM/tape hybrid are much stronger baselines, answering more than 50% of the characters correctly for all but the 6-ODD FIRST task. Perhaps surprisingly, RAM and RAM/tape learned the 7-REPEAT COPY task with almost perfect generalization scores when trained in the large sample regime. In general, it does not seem that the simple tape memory confers much advantage to the RAM model, as the generalization performances of both models are similar for the most part, which motivates more advanced NTM enhancements beyond sharpening.

The last four columns illustrate the performance of the LANTM models. We found the inverse-square LANTM and SLANTM models to be the most effective, achieving >90% generalization accuracy on most tasks, and together they solve all of the tasks here with >90% coarse score. In particular, LANTM is able to solve the 6-ODD FIRST problem when no other model can correctly solve 20% of the 2x instances; SLANTM, on the other hand, is the only Lie-access model able to solve the 7-REPEAT COPY problem.

[6] Note that the read weight calculation of a SLANTM with softmax is essentially the same as the RAM model: for head q, $\exp(-d(q, k_i)^2 / T) = \exp(-\|q - k_i\|^2 / T) = \exp(-(2 - 2\langle q, k_i \rangle)/T)$, where the last equality comes from $\|q\| = \|k_i\| = 1$ (key space is on the sphere). Therefore the weights $w_i = \frac{s_i \exp(-d(q, k_i)^2/T)}{\sum_j s_j \exp(-d(q, k_j)^2/T)} = \frac{s_i \exp(2\langle q, k_i \rangle / T)}{\sum_j s_j \exp(2\langle q, k_j \rangle / T)}$, which is the RAM weighting scheme.

Task                | Input                           | Output                               | Size k     | |V|
1 - COPY            | a1 a2 a3 ... ak                 | a1 a2 a3 ... ak                      | [2, 64]    | 128
2 - REVERSE         | a1 a2 a3 ... ak                 | ak ak-1 ak-2 ... a1                  | [2, 64]    | 128
3 - BIGRAM FLIP     | a1 a2 a3 a4 ... a2k-1 a2k       | a2 a1 a4 a3 ... a2k a2k-1            | [1, 16]    | 128
4 - DOUBLE          | a1 a2 ... ak                    | 2 * |ak ... a1|                      | [2, 40]    | 10
5 - INTERLEAVED ADD | a1 a2 a3 a4 ... a2k-1 a2k       | |a2k a2k-2 ... a2| + |a2k-1 ... a1|  | [2, 16]    | 10
6 - ODD FIRST       | a1 a2 a3 a4 ... a2k-1 a2k       | a1 a3 ... a2k-1 a2 a4 ... a2k        | [1, 16]    | 128
7 - REPEAT COPY     | N̄ a1 ... a20                    | a1 ... a20 a1 ... a20 (N times)      | N ∈ [1, 5] | 128
8 - PRIORITY SORT   | 5̄ a5 2̄ a2 9̄ a9 ...              | a1 a2 a3 ... ak                      | [2, 10]    | 128

(a) Task descriptions and parameters. |ak ... a1| means the decimal number represented by decimal digits ak ... a1. Arithmetic tasks have all numbers formatted with the least significant digits on the left and with zero padding. The DOUBLE task takes an integer x ∈ [0, 10^k) padded to k digits and outputs 2x, zero padded to k+1 digits. The INTERLEAVED ADD task takes two integers x, y ∈ [0, 10^k) padded to k digits and interleaved, forming a length-2k input sequence, and outputs x + y zero padded to k+1 digits. The last two tasks use numbers in unary format: N̄ is the shorthand for a length-N sequence of a special symbol @, encoding N in unary, e.g. 3̄ = @@@.

Task | LSTM (S, L)  | RAM (S, L)   | RAM/Tape (S, L) | LANTM (S, L) | LANTM-s (S, L) | SLANTM (S, L) | SLANTM-s (S, L)
1    | 16/0   21/0  | 61/0   61/1  | 70/2   70/1     | ⋆      ⋆     | ⋆      ⋆       | ⋆      ⋆      | ⋆      ⋆
2    | 26/0   32/0  | 58/2   54/2  | 24/1   43/2     | ⋆      ⋆     | 97/44  98/88   | 99/96  ⋆      | ⋆      ⋆
3    | 30/0   39/0  | 56/5   54/9  | 64/8   69/9     | ⋆      ⋆     | ⋆      99/94   | 99/99  97/67  | 93/60  90/43
4    | 44/0   47/0  | 72/8   74/15 | 70/12  71/6     | ⋆      ⋆     | ⋆      ⋆       | ⋆      ⋆      | ⋆      ⋆
5    | 60/0   61/0  | 74/13  76/17 | 77/23  67/19    | 99/93  99/93 | 90/38  94/57   | 99/91  99/97  | 98/78  ⋆
6    | 29/0   42/0  | 31/5   46/4  | 43/8   62/8     | 99/91  99/95 | 90/29  50/0    | 49/7   56/8   | 74/15  76/16
7    | 24/0   37/0  | 98/56  99/98 | 71/18  99/93    | 67/0   70/0  | 17/0   48/0    | 99/91  99/78  | 96/41  99/51
8    | 46/0   53/0  | 60/5   80/22 | 78/15  66/9     | 87/35  98/72 | 99/95  99/99   | ⋆      99/99  | 98/79  ⋆

(b) Main results. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2x tasks. The S and L columns resp. indicate small and large sample training regimes.
Symbol ⋆ indicates exact 100% accuracy (fine scores above 99.5 are not rounded up). Baselines are described in the body. LANTM and SLANTM use the inverse-square weighting scheme while LANTM-s and SLANTM-s use softmax. The best scores, if not 100% (denoted by stars), are bolded in the original for each of the small and large sample regimes.

The best Lie-access model trained with the small sample regime beats or is competitive with any of the baselines trained under the large sample regime. In all tasks other than 7-REPEAT COPY, the gap in the coarse score between the best Lie-access model in the small sample regime and the best baseline in any sample regime is at least 70%. However, in most cases, training under the large sample regime does not improve much. For a few tasks, the small sample regime actually produces a model with better generalization than the large sample regime. We observed in these instances that the generalization error curve under a large sample regime reaches an optimum at around 2/3 to 3/4 of training time, and then increases almost monotonically from there. Thus, the model likely has found an algorithm that works only for the training sizes; in particular, this phenomenon does not seem to be due to lack of training time.

6 DISCUSSION

Qualitative Analysis. We did further visual analysis of the different Lie-access techniques to see how the models were learning the underlying tasks, and to verify that they were using the relative addressing scheme. Figure 2 shows two diagrams of the LANTM model on the tasks of priority sort and repeat copy. Figure 3 shows two diagrams of the SLANTM model for the same two tasks.

Figure 2: Analysis of the LANTM model. (a) PCA projection from key space R² to 1D of the memories and read heads q of LANTM for the unary 8-PRIORITY SORT task. In this task, the encoder reads a priority, encoded in unary, and then a value; the decoder must output these values in priority order. In this example the sequence is [@, @, 79, @, @, @, @, 98, @, 5, @, @, @, 107, @, 119], where the special symbol @ is a unary encoding of the priority. From top to bottom, each row indicates the movement of the encoder write head q^(w) as it is fed each input character. Fill indicates the strength s_i of the memory write (black indicates high strength). Position of a dot within its row indicates the PCA projection of the key k_i. The last line indicates the movement of the decoder read head q. Interestingly, we note that, instead of writing to memory, the controller remembers the item 119 itself. (b) Raw coordinates in key space R² of writes (red) and reads (blue) from LANTM on 7-REPEAT COPY. Red line indicates the writes, which occur along a straight line during the encoding phase. Blue line indicates the reads, which zip back and forth in the process of copying the input sequence 6 times.

Figure 3: Analysis of the SLANTM model. (a) PCA projection from the spherical key space S² to 2D of the memories and read heads q of SLANTM for the task of 7-REPEAT COPY. Here the model is to repeatedly output the sequence 10 times.
Figure 4: Memory access pattern of LANTM on 6-ODD FIRST. Left: in the middle of training. LANTM learns to store data in a zigzag such that odd-indexed items fall on one side and even-indexed items fall on the other. However, reading is only half correct. Right: after training. During reading, the model simply reads the odd-indexed items in a straight line, followed by the even-indexed items in a parallel line.

Unbounded storage. One possible criticism of the LANTM framework could be that the amount of information stored increases linearly with time, which limits the usefulness of this framework for long-timescale tasks. This is indeed the case with our implementations, but it need not be the case in general. There are many ways of limiting physical memory usage. For example, a simple one is to discard the least recently used memory, as in the work of Graves et al. (2016) and Gulcehre et al. (2016). Another is to approximate, with a fixed number of bits, the read function that takes a head position and returns the read value. For example, noting that this function is a rational function of the head position, keys, and memory vectors, we can approximate the numerators and denominators with fixed-degree polynomials.

Content address. Our Lie-access framework is not mutually exclusive with content-addressing methods. For example, in each of our implementations, we could have the controllers output both a position in the key space and a content addresser of the same size as the memory vectors, and interpolate the read values from Lie access with the read values from content addressing.
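To make that interpolation concrete, here is a minimal sketch (NumPy; the gate g and the exponential content score are my own assumptions — the paragraph above only specifies that the two read values are interpolated):

```python
import numpy as np

def lie_read(q, keys, strengths, values, eps=1e-12):
    # inverse-square Lie-access read from Section 4.2
    d2 = np.sum((keys - q) ** 2, axis=1)
    w = strengths / np.maximum(d2, eps)
    return (w / w.sum()) @ values

def content_read(c, strengths, values):
    # the content addresser c has the same size as the memory vectors v_i;
    # here it is scored against them with a softmax over dot products
    u = strengths * np.exp(values @ c)
    return (u / u.sum()) @ values

def hybrid_read(q, c, g, keys, strengths, values):
    # g in [0, 1] would be emitted by the controller, like the gate t(h) in Section 4.3
    return (g * lie_read(q, keys, strengths, values)
            + (1.0 - g) * content_read(c, strengths, values))
```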
7 CONCLUSION

This paper introduces Lie-access memory as an alternative neural memory access paradigm, and explores several different implementations of this approach. LANTMs follow similar axioms as discrete Turing machines while providing differentiability. Experiments show that simple models can learn algorithmic tasks. Internally, these models naturally learn equivalents of standard data structures like stacks and cyclic lists. In future work we hope to experiment with more groups and to scale these methods to more difficult reasoning tasks. For instance, we hope to build a general-purpose encoder-decoder model for tasks like question answering and machine translation that makes use of differentiable relative-addressing schemes to replace RAM-style attention.
 | HywzhQGNg | mathematically elegant, limited impact | 7: Good paper, accept | The Neural Turing Machine and related “external memory models” have demonstrated an ability to learn algorithmic solutions by utilizing differentiable analogues of conventional memory structures. In particular, the NTM, DNC and other approaches provide mechanisms for shifting a memory access head to linked memories from the current read position.
The NTM, which is the most relevant to this work, uses a differentiable version of a Turing machine tape. The controller outputs a kernel which “softly” shifts the head, allowing the machine to read and write sequences. Since this soft shift typically “smears” the focus of the head, the controller also outputs a sharpening parameter which compensates by refocusing the distribution.
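For reference, a minimal sketch (NumPy; the names are mine, and a circular tape boundary is assumed for simplicity) of the soft shift and sharpening the reviewer describes, following the convolution form $q'_j = q_{j-1}K_{+1} + q_j K_0 + q_{j+1}K_{-1}$ from the paper's Section 2 and the NTM's sharpening $w_i^\gamma / \sum_j w_j^\gamma$:

```python
import numpy as np

def soft_shift(q, kernel):
    # kernel = (K_minus1, K_0, K_plus1), a distribution over {-1, 0, +1} shifts;
    # circular convolution of the head distribution q with the shift kernel
    k_m1, k_0, k_p1 = kernel
    return np.roll(q, 1) * k_p1 + q * k_0 + np.roll(q, -1) * k_m1

def sharpen(w, gamma):
    # NTM-style sharpening: re-concentrates a smeared head distribution
    w = w ** gamma
    return w / w.sum()

q = np.zeros(8); q[0] = 1.0
for _ in range(4):                        # four "mostly right" shifts smear the head
    q = soft_shift(q, (0.05, 0.15, 0.8))
print(np.round(q, 3))                     # mass spread around index ~3
print(np.round(sharpen(q, 10.0), 3))      # sharpening refocuses it
```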
The premise of this work is to notice that while the NTM emulates a differentiable version of a Turing tape, there is no particular reason that one is constrained to follow the topology of a Turing tape. Instead they propose memory stored at a set of points on a manifold and shift actions which form a Lie group. In this way, memory points can have different relationships to one another, rather than being constrained to Z.
This is mathematically elegant and here they empirically test models with the shift group R^2 acting on R^2 and the rotation group acting on a sphere.
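To make the first of those cases concrete, here is a tiny sketch (my own code, not the authors') checking the relative-indexing properties at stake — identity, inverse, and composition — for the additive shift action of R^2 on R^2:

```python
import numpy as np

def act(a, q):
    # the shift group R^2 acting additively on a head position q in R^2
    return a + q

q = np.array([0.3, -1.2])
a, b = np.array([1.0, 2.0]), np.array([-0.5, 0.25])
assert np.allclose(act(np.zeros(2), q), q)            # identity element
assert np.allclose(act(-a, act(a, q)), q)             # every shift has an inverse
assert np.allclose(act(a, act(b, q)), act(a + b, q))  # composition of shifts
```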
Overall, the paper is well communicated and presents a novel idea.
The primary limitation of this paper is its limited impact. While this approach is certainly mathematically elegant, and likely beneficial for specific problems where the problem structure matches the group structure, it is not clear that it significantly contributes to building models capable of more general program learning. Instead, it is likely to make an already complex and slow model such as the NTM even slower. In general, it would seem that memory topology is problem-specific and should therefore be learned rather than specified.
The baseline used for comparison is a very simple model, which does not even have sharpening (the NTM approach to solving the problem of head distributions becoming ‘smeared’). There is also no comparison with the successor to the NTM, the DNC, which provides a more general approach to linking memories based on prior memory accesses.
Minor issues:
The footnote on page 3 is misleading regarding the DNC: while the linkage matrix explicitly excludes the identity, the controller can keep the head in the same position by gating whether the link matrix is followed.
Figures on page 8 are difficult to follow.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Byiy-Pqlx | ICLR.cc/2017/conference | 2017 | Lie-Access Neural Turing Machines | ["Greg Yang", "Alexander Rush"] |
External neural memory structures have recently become a popular tool for
algorithmic deep learning
(Graves et al. 2014; Weston et al. 2014). These models
generally utilize differentiable versions of traditional discrete
memory-access structures (random access, stacks, tapes) to provide
the storage necessary for computational tasks. In
this work, we argue that these neural memory systems lack specific
structure important for relative indexing, and propose an
alternative model, Lie-access memory, that is explicitly designed
for the neural setting. In this paradigm, memory is accessed using
a continuous head in a key-space manifold. The head is moved via Lie
group actions, such as shifts or rotations, generated by a
controller, and memory access is performed by linear smoothing in
key space. We argue that Lie groups provide a natural generalization
of discrete memory structures, such as Turing machines, as they
provide inverse and identity operators while maintaining
differentiability. To experiment with this approach, we implement
a simplified Lie-access neural Turing machine (LANTM) with
different Lie groups. We find that this approach is able to perform
well on a range of algorithmic tasks. | ["Natural language processing", "Deep learning", "Supervised Learning"] |
 | rJsnsceVg | This paper brings unity and formalism to the requirements for memory addressing while maintaining differentiable memories. Its proposal provides a generic scheme for building addressing mechanisms. When comparing the proposed approach with key-value networks, the unbounded number of memory cells and the lack of incentive to reuse indexes might prove impractical. | 8: Top 50% of accepted papers, clear accept | *** Paper Summary ***
This paper formalizes the properties required for addressing (indexing) in memory-augmented neural networks, as well as how to pair the addressing with read/write operations. It then proposes a framework in which any Lie group can act on the addressing space. Experiments on algorithmic tasks are reported.
*** Review Summary ***
This paper brings unity and formalism to the requirements for memory addressing while maintaining differentiable memories. Its proposal provides a generic scheme for building addressing mechanisms. When comparing the proposed approach with key-value networks, the unbounded number of memory cells and the lack of incentive to reuse indexes might prove impractical.
*** Detailed Review ***
The paper reads well and engages appropriately with related work. The unified presentation of memory-augmented networks is clear and brings unity to the field. The proposed approach is introduced clearly, is powerful, and gives a tool that can be reused after reading the article. I do not appreciate that the growing memory is not mentioned as a drawback. It should be stressed, and a discussion of its impact on efficiency/scalability is needed. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
Byiy-Pqlx | ICLR.cc/2017/conference | 2017 | Lie-Access Neural Turing Machines | ["Greg Yang", "Alexander Rush"] |
External neural memory structures have recently become a popular tool for
algorithmic deep learning
(Graves et al. 2014; Weston et al. 2014). These models
generally utilize differentiable versions of traditional discrete
memory-access structures (random access, stacks, tapes) to provide
the storage necessary for computational tasks. In
this work, we argue that these neural memory systems lack specific
structure important for relative indexing, and propose an
alternative model, Lie-access memory, that is explicitly designed
for the neural setting. In this paradigm, memory is accessed using
a continuous head in a key-space manifold. The head is moved via Lie
group actions, such as shifts or rotations, generated by a
controller, and memory access is performed by linear smoothing in
key space. We argue that Lie groups provide a natural generalization
of discrete memory structures, such as Turing machines, as they
provide inverse and identity operators while maintaining
differentiability. To experiment with this approach, we implement
a simplified Lie-access neural Turing machine (LANTM) with
different Lie groups. We find that this approach is able to perform
well on a range of algorithmic tasks. | ["Natural language processing", "Deep learning", "Supervised Learning"]
Additionally the group axioms provideidentity, invertibility, and associativity, all of which are desirable properties for a relative indexingscheme (Criterion B), and all of which are satisfied by standard Turing machines. Notably though,1Published as a conference paper at ICLR 2017simple group properties like invertibility are not satisfied by neural Turing machines, differentiableneural computers, or even by simple soft-tape machines. In short, in our method, we constructmemory systems with keys placed on a manifold, and where relative access operations are providedby Lie groups.To experiment with this approach, we implement a neural Turing machine with an LSTM con-troller and several versions of Lie-access memory, which we call Lie-access neural Turing machines(LANTM). The details of these models are exhibited in Section 4.1Our main experimental resultsare presented in Section 5. The LANTM model is able to learn non-trivial algorithmic tasks suchas copying and permutating sequences with higher accuracy than more traditional memory-basedapproaches, and significantly better than fixed memory LSTM models. The memory structures andkey transformation learned by the model resemble interesting continuous space representations oftraditional discrete memory data structures.2 B ACKGROUND : RECURRENT NEURAL NETWORKS WITH MEMORYThis work focuses particularly on recurrent neural network (RNN) controllers of abstract neuralmemories. Formally, an RNN is a differentiable function RNN :XH!H , whereXis anarbitrary input space and His the hidden state space. On input (x(1);:::;x(T))2XTand withinitial stateh(0)2H, the RNN produces states h(1);:::;h(T)based on the recurrence,h(t):= RNN(x(t);h(t1)):These states can be used for downstream tasks, for example sequence prediction which producesoutputs (y(1);:::;y(T))based on an additional transformation and prediction layer y(t)=F(h(t))such as a linear-layer followed by a softmax. RNNs can be trained end-to-end by backpropagation-through-time (BPTT) (Werbos, 1990). In practice, we use long short-term memory (LSTM) RNNs(Hochreiter & Schmidhuber, 1997). LSTM’s hidden state consists of two variables (c(t);h(t)), whereh(t)is also the output to the external world; we however use the above notation for simplicity.An RNN can also serve as the controller for an external memory system (Graves et al., 2014; Grefen-stette et al., 2015; Zaremba & Sutskever, 2015), which enables: (1) the entire system to carry stateover time from both the RNN and the external memory, and (2) the RNN controller to collect read-ings from and compute additional instructions to the external memory. Formally, we extend therecurrence to,h(t):= RNN([x(t);(t1)];h(t1));(t);(t):= RW((t1);h(t));where is the abstract memory state, and (t)is the value read from memory, and his used as anabstract controller command to a read/write function RW. Writing occurs in the mutation of ateach time step. Throughout this work, will take the form of an ordered set f(ki;vi;si)giwhereki2K is an arbitrary key, vi2Rmis a memory value, and si2R+is a memory strength.In order for the model to be trainable with backpropagation, the memory function RW must alsobe differentiable. Several forms of differentiable memory have been proposed in the literature. Webegin by describing two simple forms: (neural) random-access memory and (neural) tape-basedmemory. 
For this section, we focus on the read step and assume is fixed.Random-Access Memory Random-access memory consists of using a now standard attention-mechanism or MemNN to read a memory (our description follows Miller et al. (2016)). The con-troller hidden state is used to output a random-access pointer, q0(h)that determines a weighting ofmemory vectors via dot products with the corresponding keys. This weighting in turn determinesthe read values via linear smoothing based on a function w,wi(q;) :=siexphq;kiiPjsjexphq;kji:=Xiwi(q0(h);)vi:The final read memory is based on how “close” the read pointer was to each of the keys, wherecloseness in key space is determined by w.1Our implementations are available at https://github.com/harvardnlp/lie-access-memory2Published as a conference paper at ICLR 2017Tape-Based Memory Neural memories can also be extended to support relative access by main-taining read state. Following notation from Turing machines, we call this state the head ,q. In thesimplest case the recurrence now has the form,0;q0;= RW(;q;h);and this can be extended to support multiple heads.In the simplest case of soft tape-based memory (a naive version of the much more complicated neuralTuring machine), the keys kiindicate one-hot positions along a tape with ki=i. The headqis aprobability distribution over tape positions. It determines the read value by directly specifying theweights. The controller can only “shift” the head by outputting a kernel K(h) = (K1;K0;K+1)in the probability simplex 2and applying convolution.q0(q;h) :=qK(h); i.e. q0j=qj1K+1+qjK0+qj+1K1We can view this as the soft version of a single-step discrete Turing machine where the kernel cansoftly shift the “head” of the machine one to the left, one to the right, or remain in the same location.The value returned can then be computed with linear smoothing as above,wi(q;) :=sihq;kiiPjsjhq;kji:=Xiwi(q0(q;h);)vi:3 L IEGROUPS FOR MEMORYLet us now take a brief digression and consider the standard (non-neural) Turing machine (TM) andthe movement of its head over a tape. A TM has a head q2Zindicating the position on a tape.Between reads, the head can move any number of steps left or right. Moving a+bsteps and thencsteps eventually puts the head at the same location as moving asteps and then b+csteps — i.e.the head movement is associative . In addition, the machine should be able to reverse a head shift,for example, in a stack simulation algorithm, going from push to pop — i.e. each head movementshould also have a corresponding inverse . Finally, the head should also be allowed to stay put, forexample, to read a single data item and use it for multiple time points, an identity .These movements correspond directly to group actions: the possible head movements should beassociative, and contain inverse and identity elements. This group acts on the set of possible headlocations. In a TM, the set of Z-valued head movement acts on the set of locations on the Z-indexedinfinite tape. By our reasoning above, if a Turing machine is to store data contents at points in ageneral spaceK(instead of an infinite Z-indexed tape), then its head movements should form agroup and act onKvia group actions.For a neural memory system, we desire the network to be (almost everywhere) differentiable. Thenotion of “differentiable” groups is well-studied in mathematics, where they are known as Liegroups , and “differentiable group actions” are correspondingly called Lie group actions . 
In ourcase, using Lie group actions as generalized head movements on a general key space (more accu-rately, manifolds) would most importantly mean that we can take derivatives of these movementsand perform the usual backpropagation algorithm.4 L IE-ACCESS NEURAL TURING MACHINESThese properties motivate us to propose Lie access as an alternative formalism to popular neuralmemory systems, such as probabilistic tapes, which surprisingly do not satisfy invertibility and oftendo not provide an identity.2Our Lie-access memory will consist of a set of points in a manifold K.2The Markov kernel convolutional soft head shift mechanism proposed in Graves et al. (2014) and sketchedin Section 2 does not in general have inverses. Indeed, the authors reported problems with the soft head losing“sharpness” over time, which they dealt with by sharpening coefficients. In the followup work, Graves et al.(2016) utilize a temporal memory link matrix for actions. They note, “the operation Lwsmoothly shifts thefocus forwards to the locations written ... whereas L>wshifts the focus backwards” but do not enforce this asa true inverse. They also explicitly do not include an identity, noting “Self-links are excluded (the diagonal ofthe link matrix is always 0)”; however, they could ignore the link matrix with an interpolation gate, which ineffect acts as the identity.3Published as a conference paper at ICLR 2017We replace the discrete head with a continuous head q2K . The head moves based on a set ofLie group actions a2A generated by the controller. To read memories, we will rely on a distancemeasure in this space, d:KK! R0.3Together these properties describe a general class ofpossible neural memory architectures.Formally a Lie-access neural Turing machine (LANTM) computes the following function,0;q0;q0(w);:= RW(;q;q (w);h)whereq;q (w)2K are resp. read and write heads, and is the memory itself. We implement , asabove, as a weighted dictionary =f(ki;vi;si)gi.4.1 A DDRESSING PROCEDUREThe LANTM maintains a read head qwhich at every step is first updated to q0and then used to readfrom the memory table. This update occurs by selecting a Lie group action from Awhich then actssmoothly on the key space K. We parametrize the action transformation, a:H7!A by the hiddenstate to produce the Lie action, a(h)2A. In the simplest case, the head is then updated based onthis action (heredenotes group action): q0:=a(h)q.For instance, consider two possible Lie groups:(1) A shift group R2acting additively on R2. This means thatA=R2so thata(h) = (;)actsupon a head q= (x;y)by,a(h)q= (;) + (x;y) = (x+;y+):(2) A rotation group SO(3)acting on the sphere S2=fv2R3:kvk= 1g. Each rotation can bedescribed by its axis (a unit vector) and angle . An action (;)qis just the appropriate rotationof the pointq, and is given by Rodrigues’ rotation formula,a(h)q= (;)q=qcos+ (q) sin+h;qi(1cos):Heredenotes cross product.4.2 R EADING AND WRITING MEMORIESRecall that memories are stored in , each with a key, ki, memory vector, vi, and strength, si, andthat memories are read using linear smoothing over vectors based on a key weighting function w,:=Piwi(q0;)vi. While there are many possible weighting schemes, we use one based onthe distance of each memory address from the head in key-space assuming a metric donK. Weconsider two different weighting functions (1) inverse-square and (2) softmax. 
There first uses thepolynomial law and the second an annealed softmax of the squared distances:w(1)i(q;) :=sid(q;ki)2Pjsjd(q;kj)2w(2)i(q;;T) :=siexp(d(q;ki)2=T)Pjsjexp(d(q;kj)2=T);where we use the convention that it takes the limit value when q!kiandTis atemperature thatrepresents the certainty of its reading, i.e. higher Tcreates more uniform w.The writing procedure is similar to reading. The LANTM maintains a separate write headq(w)thatmoves analogously to the read head, i.e. with action function a(w)(h)and updated value q0(w). Ateach call to RW, a new memory is automatically appended to withk=q0(w). The corresponding3This metric should satisfy a compatibility relation with the Lie group action. When points x;y2Xare simultaneously moved by the same Lie group action v, their distance should stay the same (One possiblemathematical formalization is that Xshould be a Riemannian manifold and the Lie group should be a subgroupofX’s isometry group.): d(vx;vy ) =d(x;y):This condition ensures that if the machine writes a sequence ofdata along a “straight line” at points x;vx;v2x;:::;vkx, then it can read the same sequence by emitting a readlocationyclose toxand then follow the “ v-trail”y;vy;v2y;:::;vky.4Published as a conference paper at ICLR 2017mem. vec.viread valueaddresskikey manifold Kread keyqweight schemeFigure 1: Retrieval of value from memory via a key. Weightings with unit sum are assigned to differentmemories depending on the distances from the addresses to the read key. Linear smoothing over values is usedto emit the final read value. Both inverse-square and softmax schemes follow this method, but differ in theircomputations of the weightings.memoryvand strength sare created by MLP’s v(h)2Rmands(h)2[0;1]takinghas input. Afterwriting, the new memory set is,0:= [f(q0(w);v(h);s(h))g:No explicit erase mechanism is provided, but to erase a memory (k;v;s ), the controller may intheory write (k;v;s).4.3 C OMBINING WITH RANDOM ACCESSFinally we combine this relative addressing procedure with direct random-access to give the modelthe ability for absolute address access. We do this by outputting an absolute address each stepand simply interpolating with our current head. Write t(h)2[0;1]for the interpolation gate and~q(h)2K for our proposed random-access layer. For key space manifolds KlikeRn,4there’s awell defined straight-line interpolation between two points, so we can setq0:=a(tq+ (1t)~q)where we have omitted the implied dependence on h. For other manifolds like the spheres Snthat have well-behaved projection functions :Rn!Sn, we can just project the straight-lineinterpolation to the sphere:q0:=a(tq+ (1t)~q):In the case of a sphere Sn,is justL2-normalization.55 E XPERIMENTSWe experiment with Lie-access memory on a variety of algorithmic learning tasks. We are partic-ularly interested in: (a) how Lie-access memory can be trained, (b) whether it can be effectivelyutilized for algorithmic learning, and (c) what internal structures the model learns compared to sys-tems based directly on soft discrete memory. In particular Lie access is not equipped with an explicitstack or tape, so it would need to learn continuous patterns that capture these properties.Setup. Our experiments utilize an LSTM controller in a version of the encoder-decoder setup(Sutskever et al., 2014), i.e. an encoding input pass followed by a decoding output pass. The encoderreads and writes memories at each step; the decoder only reads memories. 
4.3 COMBINING WITH RANDOM ACCESS

Finally, we combine this relative addressing procedure with direct random access to give the model the ability of absolute address access. We do this by outputting an absolute address each step and simply interpolating it with our current head. Write $t(h) \in [0, 1]$ for the interpolation gate and $\tilde{q}(h) \in \mathcal{K}$ for our proposed random-access layer. For key space manifolds $\mathcal{K}$ like $\mathbb{R}^n$,[4] there is a well-defined straight-line interpolation between two points, so we can set
$$q' := a \cdot \big(t\,q + (1 - t)\,\tilde{q}\big),$$
where we have omitted the implied dependence on $h$. For other manifolds, like the spheres $S^n$, that have well-behaved projection functions $\pi : \mathbb{R}^n \to S^n$, we can just project the straight-line interpolation to the sphere:
$$q' := a \cdot \pi\big(t\,q + (1 - t)\,\tilde{q}\big).$$
In the case of a sphere $S^n$, $\pi$ is just $L^2$-normalization.[5]

[4] Or, in general, manifolds with convex embeddings in $\mathbb{R}^n$.

[5] Technically, in the sphere case, $\mathrm{dom}\,\pi = \mathbb{R}^d \setminus \{0\}$. But in practice one almost never gets 0 from a straight-line interpolation, so computationally this makes little difference.
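A sketch of this gated head update for a spherical key space, where the projection is $L^2$-normalization (our illustration; `action` stands for any Lie group action, such as the Rodrigues rotation sketched earlier, with its parameters already bound):

```python
import numpy as np

def update_head(q, q_tilde, t, action):
    # q' := a . pi(t*q + (1-t)*q_tilde), with pi = L2 normalization onto S^n.
    mix = t * q + (1.0 - t) * q_tilde
    return action(mix / np.linalg.norm(mix))
```

For $\mathcal{K} = \mathbb{R}^n$ the normalization is simply dropped. The gate $t$, the proposal $\tilde{q}$, and the action parameters would all be produced by the controller.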
5 EXPERIMENTS

We experiment with Lie-access memory on a variety of algorithmic learning tasks. We are particularly interested in: (a) how Lie-access memory can be trained, (b) whether it can be effectively utilized for algorithmic learning, and (c) what internal structures the model learns compared to systems based directly on soft discrete memory. In particular, Lie access is not equipped with an explicit stack or tape, so it would need to learn continuous patterns that capture these properties.

Setup. Our experiments utilize an LSTM controller in a version of the encoder-decoder setup (Sutskever et al., 2014), i.e. an encoding input pass followed by a decoding output pass. The encoder reads and writes memories at each step; the decoder only reads memories. The encoder is given <s>, followed by the input sequence, and then </s> to terminate the input. The decoder is not re-fed its output or the correct symbol, i.e. we do not use teacher forcing, so $x^{(t)}$ is a fixed placeholder input symbol. The decoder must correctly emit an end-of-output symbol </e> to terminate.

Models and Baselines. We implement three main baseline models: (a) a standard LSTM encoder-decoder, without explicit external memory; (b) a random-access memory network, RAM, using the key-value formulation as described in the background, roughly analogous to an attention-based encoder-decoder; and (c) an interpolation of a RAM/Tape-based memory network as described in the background, i.e. a highly simplified version of a true NTM (Graves et al., 2014) with a sharpening parameter. Our models include four versions of Lie-access memory. The main model, LANTM, has an LSTM controller, with a shift group $A = \mathbb{R}^2$ acting additively on the key space $\mathcal{K} = \mathbb{R}^2$. We also consider a model SLANTM with spherical memory, utilizing a rotation group $A = SO(3)$ acting on keys in the sphere $\mathcal{K} = S^2$. For both of the models, the distance function $d$ is the Euclidean ($L^2$) distance, and we experiment with smoothing using the inverse-square scheme (default) and with an annealed softmax.

Model Setup. For all tasks, the LSTM baseline has 1 to 4 layers, each with 256 cells. Each of the other models has a single-layer, 50-cell LSTM controller, with memory width (i.e. the size of each memory vector) 20. Other parameters such as learning rate, decay, and initialization are found through grid search. Further hyperparameter details are given in the appendix.

Tasks. Our experiments are on a series of algorithmic tasks shown in Table 1a. The COPY, REVERSE, and BIGRAM FLIP tasks are based on Grefenstette et al. (2015); the DOUBLE and INTERLEAVED ADD tasks are designed in a similar vein. Additionally we also include three harder tasks: ODD FIRST, REPEAT COPY, and PRIORITY SORT. In ODD FIRST, the model must output the odd-indexed elements first, followed by the even-indexed elements. In REPEAT COPY, each model must repeat a sequence of length 20, N times. In PRIORITY SORT, each item of the input sequence is given a priority, and the model must output the items in priority order.

We train each model in two regimes, one with a small number of samples (16K) and one with a large number of samples (320K). In the former case, the samples are iterated through 20 times, while in the latter, the samples are iterated through only once. Thus in both regimes the total training times are the same. Training is done by minimizing negative log-likelihood with RMSProp.

Prediction is performed via argmax/greedy prediction at each step. To evaluate the performance of the models, we compute the fraction of tokens correctly predicted and the fraction of all answers completely correctly predicted, respectively called fine and coarse scores. We assess the models on 3.2K randomly generated out-of-sample 2x-length examples, i.e. with sequence lengths 2k (or repeat number 2N in the case of REPEAT COPY), to test the generalization of the system. More precisely, for all tasks other than REPEAT COPY, during training the length k is varied in the interval $[l_k, u_k]$ (as shown in Table 1a), and during test time the length k is varied in the range $[u_k + 1, 2u_k]$. For REPEAT COPY, the repetition number N is varied similarly, instead of k.
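The fine and coarse scores can be stated precisely; a small sketch of how they might be computed over a batch of greedy predictions (our reading of the description above; predictions are assumed truncated or padded to the target length):

```python
def fine_coarse(predictions, targets):
    # fine: fraction of tokens correct; coarse: fraction of answers exactly correct.
    token_hits = token_total = exact_hits = 0
    for pred, gold in zip(predictions, targets):
        token_hits += sum(p == g for p, g in zip(pred, gold))
        token_total += len(gold)
        exact_hits += int(pred == gold)
    return token_hits / token_total, exact_hits / len(targets)
```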
Results. Main results comparing the different memory systems and read computations on a series of tasks are shown in Table 1b. Consistent with previous work, the fixed-memory LSTM system fails consistently when required to generalize to the 2x samples: it is unable to solve any 2x problem correctly, and is only able to predict at most 50% of the symbols for all tasks except interleaved addition, regardless of training regime. The RAM (attention-based) and the RAM/Tape hybrid are much stronger baselines, answering more than 50% of the characters correctly for all but the 6-ODD FIRST task. Perhaps surprisingly, RAM and RAM/Tape learned the 7-REPEAT COPY task with almost perfect generalization scores when trained in the large-sample regime. In general, it does not seem that the simple tape memory confers much advantage to the RAM model, as the generalization performances of both models are similar for the most part, which motivates more advanced NTM enhancements beyond sharpening.

The last four columns illustrate the performance of the LANTM models. We found the inverse-square LANTM and SLANTM models to be the most effective, achieving >90% generalization accuracy[6] on most tasks, and together they solve all of the tasks here with >90% coarse score. In particular, LANTM is able to solve the 6-ODD FIRST problem when no other model can correctly solve even 20% of the 2x instances; SLANTM, on the other hand, is the only Lie-access model able to solve the 7-REPEAT COPY problem.

The best Lie-access model trained with the small-sample regime beats or is competitive with any of the baselines trained under the large-sample regime. In all tasks other than 7-REPEAT COPY, the gap in the coarse score between the best Lie-access model in the small-sample regime and the best baseline in any sample regime is at least 70%. However, in most cases, training under the large-sample regime does not improve much. For a few tasks, the small-sample regime actually produces a model with better generalization than the large-sample regime. We observed that in these instances, the generalization error curve under the large-sample regime reaches an optimum at around 2/3 to 3/4 of training time, and then increases almost monotonically from there. Thus, the model has likely found an algorithm that works only for the training sizes; in particular, this phenomenon does not seem to be due to lack of training time.

[6] Note that the read-weight calculation of a SLANTM with softmax is essentially the same as the RAM model: for head $q$, $\exp(-d(q, k_i)^2/T) = \exp(-\|q - k_i\|^2/T) = \exp(-(2 - 2\langle q, k_i\rangle)/T)$, where the last equality comes from $\|q\| = \|k_i\| = 1$ (the key space is on the sphere). Therefore the weights are $w_i = \frac{s_i \exp(-d(q, k_i)^2/T)}{\sum_j s_j \exp(-d(q, k_j)^2/T)} = \frac{s_i \exp(2\langle q, k_i\rangle/T)}{\sum_j s_j \exp(2\langle q, k_j\rangle/T)}$, which is the RAM weighting scheme.

Table 1a: Task descriptions and parameters. |ak ... a1| means the decimal number represented by the decimal digits ak ... a1. Arithmetic tasks have all numbers formatted with the least significant digits on the left and with zero padding. The DOUBLE task takes an integer x in [0, 10^k) padded to k digits and outputs 2x in k+1 digits, zero padded to k+1 digits. The INTERLEAVED ADD task takes two integers x, y in [0, 10^k) padded to k digits and interleaved, forming a length-2k input sequence, and outputs x+y zero padded to k+1 digits. The last two tasks use numbers in unary format: unary(N) is shorthand for a length-N sequence of a special symbol @, encoding N in unary, e.g. unary(3) = @@@.

Task                  Input                             Output                                      Size k       |V|
1 - COPY              a1 a2 a3 ... ak                   a1 a2 a3 ... ak                             [2, 64]      128
2 - REVERSE           a1 a2 a3 ... ak                   ak ak-1 ak-2 ... a1                         [2, 64]      128
3 - BIGRAM FLIP       a1 a2 a3 a4 ... a2k-1 a2k         a2 a1 a4 a3 ... a2k a2k-1                   [1, 16]      128
4 - DOUBLE            a1 a2 ... ak                      2 x |ak ... a1|                             [2, 40]      10
5 - INTERLEAVED ADD   a1 a2 a3 a4 ... a2k-1 a2k         |a2k a2k-2 ... a2| + |a2k-1 a2k-3 ... a1|   [2, 16]      10
6 - ODD FIRST         a1 a2 a3 a4 ... a2k-1 a2k         a1 a3 ... a2k-1 a2 a4 ... a2k               [1, 16]      128
7 - REPEAT COPY       unary(N) a1 ... a20               a1 ... a20 a1 ... a20 (N times)             N in [1, 5]  128
8 - PRIORITY SORT     unary(5) a5 unary(2) a2 ...       a1 a2 a3 ... ak                             [2, 10]      128

Table 1b: Main results. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2x tasks. The S and L columns respectively indicate the small- and large-sample training regimes. The symbol * indicates exact 100% accuracy (fine scores above 99.5 are not rounded up). Baselines are described in the body. LANTM and SLANTM use the inverse-square scheme, while LANTM-s and SLANTM-s use the softmax weighting scheme. The best scores, if not 100% (denoted by stars), are bolded in the original for each of the small- and large-sample regimes.

Task   LSTM         RAM           RAM/Tape      LANTM         LANTM-s       SLANTM        SLANTM-s
       S     L      S      L      S      L      S      L      S      L      S      L      S      L
1    16/0  21/0   61/0   61/1   70/2   70/1     *      *      *      *      *      *      *      *
2    26/0  32/0   58/2   54/2   24/1   43/2     *      *    97/44  98/88  99/96    *      *      *
3    30/0  39/0   56/5   54/9   64/8   69/9     *      *      *    99/94  99/99  97/67  93/60  90/43
4    44/0  47/0   72/8   74/15  70/12  71/6     *      *      *      *      *      *      *      *
5    60/0  61/0   74/13  76/17  77/23  67/19  99/93  99/93  90/38  94/57  99/91  99/97  98/78    *
6    29/0  42/0   31/5   46/4   43/8   62/8   99/91  99/95  90/29  50/0   49/7   56/8   74/15  76/16
7    24/0  37/0   98/56  99/98  71/18  99/93  67/0   70/0   17/0   48/0   99/91  99/78  96/41  99/51
8    46/0  53/0   60/5   80/22  78/15  66/9   87/35  98/72  99/95  99/99    *    99/99  98/79    *
6 DISCUSSION

Qualitative Analysis. We did further visual analysis of the different Lie-access techniques to see how the models were learning the underlying tasks, and to verify that they were using the relative addressing scheme. Figure 2 shows two diagrams of the LANTM model on the priority-sort and repeat-copy tasks. Figure 3 shows two diagrams of the SLANTM model for the same two tasks. Figure 4 shows the memory access pattern of LANTM on the 6-ODD FIRST task.

[Figure 2: Analysis of the LANTM model. (a) PCA projection from key space $\mathbb{R}^2$ to 1D of the memories and read heads $q$ of LANTM for the unary 8-PRIORITY SORT task. In this task, the encoder reads a priority, encoded in unary, and then a value; the decoder must output these values in priority order. In this example the sequence is [@, @, 79, @, @, @, @, 98, @, 5, @, @, @, 107, @, 119], where the special symbol @ is a unary encoding of the priority. From top to bottom, each row indicates the movement of the encoder write head $q^{(w)}$ as it is fed each input character. Fill indicates the strength $s_i$ of the memory write (black indicates high strength). The position of a dot within its row indicates the PCA projection of the key $k_i$. The last line indicates the movement of the decoder read head $q$. Interestingly, we note that, instead of writing to memory, the controller remembers the item 119 itself. (b) Raw coordinates in key space $\mathbb{R}^2$ of writes (red) and reads (blue) from LANTM on 7-REPEAT COPY. The red line indicates the writes, which occur along a straight line during the encoding phase. The blue line indicates the reads, which zip back and forth in the process of copying the input sequence 6 times.]

[Figure 3: Analysis of the SLANTM model; panels show encoder writes and decoder reads. (a) PCA projection from the spherical key space $S^2$ to 2D of the memories and read heads $q$ of SLANTM for the task of 7-REPEAT COPY. Here the model is to repeatedly output the sequence 10 times. The input is 10 repetitions of the special symbol @ followed by [28, 74, 43, 102, 88, 39, ...]. Left: the positions of the write head $q^{(w)}$ during the encoding phase. Fill indicates strength $s_i$ (black means high strength); the number indicates the character stored. SLANTM traverses a circle clockwise starting at point 28, and stores data at regular intervals. Right: the positions of the read head $q$ during the decoding phase. Starting from the blue dot, the reads move clockwise around the sphere, and end at the red dot. For the sake of clarity, read positions are indicated by bends in the blue line, instead of by dots. Intriguingly, the model implements a cyclic-list data structure, taking advantage of the spherical structure of the memory. (b) Raw coordinates in key space $S^2$ of writes (red) and reads (blue) from SLANTM on a non-unary-encoded variant of the priority-sort task. The red line indicates the movements of the write head $q^{(w)}$, which places points along a sub-manifold of $\mathcal{K}$ (an arc of $S^2$) during the encoding phase. Notably, this movement is not sequential, but random-access, so as to store elements in correct priority order. The blue line indicates the simple traversal of this arc during decoding.]

[Figure 4: Memory access pattern of LANTM on 6-ODD FIRST. Left: in the middle of training, LANTM learns to store data in a zigzag such that odd-indexed items fall on one side and even-indexed items fall on the other; however, reading is only half correct. Right: after training, during reading, the model simply reads the odd-indexed items in a straight line, followed by the even-indexed items in a parallel line.]

Additionally, animations tracing the evolution of the memory access patterns of the models over training time can be found at http://nlp.seas.harvard.edu/lantm. They demonstrate that the models indeed learn relative addressing and internally construct geometric data structures to solve these algorithmic tasks.

Unbounded storage. One possible criticism of the LANTM framework could be that the amount of information stored increases linearly with time, which limits the usefulness of this framework for long-timescale tasks. This is indeed the case with our implementations, but need not be the case in general. There can be many ways of limiting physical memory usage. For example, a simple way is to discard the least recently used memory, as in the work of Graves et al. (2016) and Gulcehre et al. (2016). Another way is to approximate, with a fixed number of bits, the read function that takes a head position and returns the read value. For example, noting that this function is a rational function of the head position, keys, and memory vectors, we can approximate the numerators and denominators with fixed-degree polynomials.

Content address. Our Lie-access framework is not mutually exclusive with content-addressing methods. For example, in each of our implementations, we could have the controller output both a position in the key space and a content addresser of the same size as the memory vectors, and interpolate the read values from Lie access and the read values from content addressing.
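That combination is straightforward to express; a hedged, self-contained sketch (ours; the gate $g$ and content query $c$ are assumed controller outputs, and the inverse-square Lie read is inlined):

```python
import numpy as np

def mixed_read(q, c, g, keys, values, strengths, T=1.0, eps=1e-12):
    # Lie-access read: inverse-square weights on key-space distances.
    w_lie = strengths / (np.sum((keys - q) ** 2, axis=1) + eps)
    rho_lie = (w_lie / w_lie.sum()) @ values
    # Content-based read: softmax over dot products with a content query c.
    w_con = strengths * np.exp(values @ c / T)
    rho_con = (w_con / w_con.sum()) @ values
    return g * rho_lie + (1.0 - g) * rho_con  # gate g in [0, 1]
```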
7 CONCLUSION

This paper introduces Lie-access memory as an alternative neural memory access paradigm, and explores several different implementations of this approach. LANTMs follow similar axioms as discrete Turing machines while providing differentiability. Experiments show that simple models can learn algorithmic tasks. Internally, these models naturally learn equivalents of standard data structures like stacks and cyclic lists. In future work we hope to experiment with more groups and to scale these methods to more difficult reasoning tasks. For instance, we hope to build a general-purpose encoder-decoder model for tasks like question answering and machine translation that makes use of differentiable relative-addressing schemes to replace RAM-style attention. | ryok7XQVx | review | 6: Marginally above acceptance threshold | The paper introduces a novel memory mechanism for NTMs based on differentiable Lie groups.
This allows memory elements to be placed as points on a manifold, while still allowing training with backpropagation.
It's a more general version of the NTM memory, and possibly allows for training more efficient addressing schemes.
Pros:
- novel and interesting idea for memory access
- nicely written
Cons:
- need to manually specify the Lie group to use (it would be better if the network could learn the best way of accessing memory)
- not clear if this really works better than a standard NTM (it is compared only to a simplified version)
- not clear if this is useful in practice (no comparison on real tasks)
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Byiy-Pqlx | ICLR.cc/2017/conference | 2017 | Lie-Access Neural Turing Machines | ["Greg Yang", "Alexander Rush"] |
External neural memory structures have recently become a popular tool for
algorithmic deep learning
(Graves et al. 2014; Weston et al. 2014). These models
generally utilize differentiable versions of traditional discrete
memory-access structures (random access, stacks, tapes) to provide
the storage necessary for computational tasks. In
this work, we argue that these neural memory systems lack specific
structure important for relative indexing, and propose an
alternative model, Lie-access memory, that is explicitly designed
for the neural setting. In this paradigm, memory is accessed using
a continuous head in a key-space manifold. The head is moved via Lie
group actions, such as shifts or rotations, generated by a
controller, and memory access is performed by linear smoothing in
key space. We argue that Lie groups provide a natural generalization
of discrete memory structures, such as Turing machines, as they
provide inverse and identity operators while maintaining
differentiability. To experiment with this approach, we implement
a simplified Lie-access neural Turing machine (LANTM) with
different Lie groups. We find that this approach is able to perform
well on a range of algorithmic tasks. | ["Natural language processing", "Deep learning", "Supervised Learning"] | ABSTRACT

External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks.

1 INTRODUCTION

Recent work on neural Turing machines (NTMs) (Graves et al., 2014; 2016) and memory networks (MemNNs) (Weston et al., 2014) has repopularized the use of explicit external memory in neural networks and demonstrated that these networks can be effectively trained in an end-to-end fashion. These methods have been successfully applied to question answering (Weston et al., 2014; Sukhbaatar et al., 2015; Kumar et al., 2015), algorithm learning (Graves et al., 2014; Kalchbrenner et al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Zaremba & Sutskever, 2015; Grefenstette et al., 2015; Joulin & Mikolov, 2015), machine translation (Kalchbrenner et al., 2015), and other tasks. This methodology has the potential to extend deep networks in a general-purpose way beyond the limitations of fixed-length encodings such as standard recurrent neural networks (RNNs).

A shared theme in many of these works (and in earlier exploration of neural memory) is to reframe traditional memory access paradigms to be continuous and possibly differentiable to allow for backpropagation. In MemNNs, traditional random-access memory is replaced with a ranking approach that finds the most likely memory. In the work of Grefenstette et al. (2015), classical stack-, queue-, and deque-based memories are replaced by soft-differentiable stack, queue, and deque data structures. In NTMs, sequential local-access memory is simulated by an explicit tape data structure.

This work questions the assumption that neural memory should mimic the structure of traditional discrete memory. We argue that a neural memory should provide the following: (A) differentiability for end-to-end training and (B) robust relative indexing (perhaps in addition to random access). Surprisingly, many neural memory systems fail one of these conditions, either lacking Criterion B, discussed below, or employing extensions like REINFORCE to work around a lack of differentiability (Zaremba & Sutskever, 2015).

We propose instead a class of memory access techniques based around Lie groups, i.e. groups with differentiable operations, which provide a natural structure for neural memory access. By definition, their differentiability satisfies the concerns of Criterion A.
Additionally, the group axioms provide identity, invertibility, and associativity, all of which are desirable properties for a relative indexing scheme (Criterion B), and all of which are satisfied by standard Turing machines. Notably, though, simple group properties like invertibility are not satisfied by neural Turing machines, differentiable neural computers, or even by simple soft-tape machines. In short, in our method, we construct memory systems with keys placed on a manifold, and where relative access operations are provided by Lie groups.

To experiment with this approach, we implement a neural Turing machine with an LSTM controller and several versions of Lie-access memory, which we call Lie-access neural Turing machines (LANTM). The details of these models are exhibited in Section 4.[1] Our main experimental results are presented in Section 5. The LANTM model is able to learn non-trivial algorithmic tasks such as copying and permuting sequences with higher accuracy than more traditional memory-based approaches, and significantly better than fixed-memory LSTM models. The memory structures and key transformations learned by the model resemble interesting continuous-space representations of traditional discrete memory data structures.

[1] Our implementations are available at https://github.com/harvardnlp/lie-access-memory

2 BACKGROUND: RECURRENT NEURAL NETWORKS WITH MEMORY

This work focuses particularly on recurrent neural network (RNN) controllers of abstract neural memories. Formally, an RNN is a differentiable function $\mathrm{RNN} : \mathcal{X} \times \mathcal{H} \to \mathcal{H}$, where $\mathcal{X}$ is an arbitrary input space and $\mathcal{H}$ is the hidden state space. On input $(x^{(1)}, \ldots, x^{(T)}) \in \mathcal{X}^T$ and with initial state $h^{(0)} \in \mathcal{H}$, the RNN produces states $h^{(1)}, \ldots, h^{(T)}$ based on the recurrence
$$h^{(t)} := \mathrm{RNN}(x^{(t)}, h^{(t-1)}).$$
These states can be used for downstream tasks, for example sequence prediction, which produces outputs $(y^{(1)}, \ldots, y^{(T)})$ based on an additional transformation and prediction layer $y^{(t)} = F(h^{(t)})$, such as a linear layer followed by a softmax. RNNs can be trained end-to-end by backpropagation through time (BPTT) (Werbos, 1990). In practice, we use long short-term memory (LSTM) RNNs (Hochreiter & Schmidhuber, 1997). The LSTM's hidden state consists of two variables $(c^{(t)}, h^{(t)})$, where $h^{(t)}$ is also the output to the external world; we however use the above notation for simplicity.

An RNN can also serve as the controller for an external memory system (Graves et al., 2014; Grefenstette et al., 2015; Zaremba & Sutskever, 2015), which enables: (1) the entire system to carry state over time from both the RNN and the external memory, and (2) the RNN controller to collect readings from and compute additional instructions to the external memory. Formally, we extend the recurrence to
$$h^{(t)} := \mathrm{RNN}([x^{(t)}; \rho^{(t-1)}], h^{(t-1)}), \qquad \rho^{(t)}, \Sigma^{(t)} := \mathrm{RW}(\Sigma^{(t-1)}, h^{(t)}),$$
where $\Sigma$ is the abstract memory state, $\rho^{(t)}$ is the value read from memory, and $h$ is used as an abstract controller command to a read/write function RW. Writing occurs in the mutation of $\Sigma$ at each time step. Throughout this work, $\Sigma$ will take the form of an ordered set $\{(k_i, v_i, s_i)\}_i$, where $k_i \in \mathcal{K}$ is an arbitrary key, $v_i \in \mathbb{R}^m$ is a memory value, and $s_i \in \mathbb{R}^+$ is a memory strength.

In order for the model to be trainable with backpropagation, the memory function RW must also be differentiable.
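The memory-augmented recurrence can be sketched directly; a minimal, framework-free rendering (our own; `rnn_step` and `read_write` are placeholder callables for the controller and the RW function):

```python
import numpy as np

def run_controller(xs, h0, memory, rnn_step, read_write, read_dim):
    # h(t) := RNN([x(t); rho(t-1)], h(t-1));  rho(t), Sigma(t) := RW(Sigma(t-1), h(t))
    h, rho, states = h0, np.zeros(read_dim), []
    for x in xs:
        h = rnn_step(np.concatenate([x, rho]), h)  # controller sees the last read
        rho, memory = read_write(memory, h)        # read value and mutated memory
        states.append(h)
    return states, memory
```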
Several forms of differentiable memory have been proposed in the literature. We begin by describing two simple forms: (neural) random-access memory and (neural) tape-based memory. For this section, we focus on the read step and assume $\Sigma$ is fixed.

Random-Access Memory. Random-access memory consists of using a now-standard attention mechanism or MemNN to read a memory (our description follows Miller et al. (2016)). The controller hidden state is used to output a random-access pointer, $\tilde{q}(h)$, that determines a weighting of memory vectors via dot products with the corresponding keys. This weighting in turn determines the read values via linear smoothing based on a function $w$:
$$w_i(q, \Sigma) := \frac{s_i \exp\langle q, k_i\rangle}{\sum_j s_j \exp\langle q, k_j\rangle}, \qquad \rho := \sum_i w_i(\tilde{q}(h), \Sigma)\, v_i.$$
The final read memory is based on how "close" the read pointer was to each of the keys, where closeness in key space is determined by $w$.

Tape-Based Memory. Neural memories can also be extended to support relative access by maintaining read state. Following notation from Turing machines, we call this state the head, $q$. In the simplest case the recurrence now has the form
$$\rho,\ \Sigma',\ q' := \mathrm{RW}(\Sigma, q, h),$$
and this can be extended to support multiple heads.

In the simplest case of soft tape-based memory (a naive version of the much more complicated neural Turing machine), the keys $k_i$ indicate one-hot positions along a tape, with $k_i = \delta_i$. The head $q$ is a probability distribution over tape positions, and determines the read value by directly specifying the weights. The controller can only "shift" the head by outputting a kernel $K(h) = (K_{-1}, K_0, K_{+1})$ in the probability simplex $\Delta^2$ and applying convolution,
$$q'(q, h) := q * K(h), \quad \text{i.e.}\quad q'_j = q_{j-1} K_{+1} + q_j K_0 + q_{j+1} K_{-1}.$$
We can view this as the soft version of a single-step discrete Turing machine, where the kernel can softly shift the "head" of the machine one to the left, one to the right, or keep it in the same location. The value returned can then be computed with linear smoothing as above,
$$w_i(q, \Sigma) := \frac{s_i \langle q, k_i\rangle}{\sum_j s_j \langle q, k_j\rangle}, \qquad \rho := \sum_i w_i(q'(q, h), \Sigma)\, v_i.$$
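To make the shift mechanism concrete, a small sketch of the soft head shift, including the loss of "sharpness" that the NTM authors reported (our illustration; a wrap-around tape is assumed for simplicity):

```python
import numpy as np

def shift_head(q, kernel):
    # Convolve the head distribution q with a kernel (K_-1, K_0, K_+1):
    # q'_j = q_{j-1} K_+1 + q_j K_0 + q_{j+1} K_-1.
    k_m1, k_0, k_p1 = kernel
    return np.roll(q, 1) * k_p1 + q * k_0 + np.roll(q, -1) * k_m1

# A soft shift followed by its mirror-image "reverse" is not a true inverse:
q = np.zeros(8); q[3] = 1.0
right = np.array([0.1, 0.1, 0.8])                     # mostly shift right
print(shift_head(shift_head(q, right), right[::-1]))  # blurred, not one-hot
```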
3 LIE GROUPS FOR MEMORY

Let us now take a brief digression and consider the standard (non-neural) Turing machine (TM) and the movement of its head over a tape. A TM has a head $q \in \mathbb{Z}$ indicating the position on a tape. Between reads, the head can move any number of steps left or right. Moving $a + b$ steps and then $c$ steps eventually puts the head at the same location as moving $a$ steps and then $b + c$ steps; i.e., head movement is associative. In addition, the machine should be able to reverse a head shift, for example, in a stack simulation algorithm, going from push to pop; i.e., each head movement should also have a corresponding inverse. Finally, the head should also be allowed to stay put, for example, to read a single data item and use it for multiple time points: an identity.

These movements correspond directly to group actions: the possible head movements should be associative, and contain inverse and identity elements. This group acts on the set of possible head locations. In a TM, the set of $\mathbb{Z}$-valued head movements acts on the set of locations on the $\mathbb{Z}$-indexed infinite tape. By our reasoning above, if a Turing machine is to store data contents at points in a general space $\mathcal{K}$ (instead of an infinite $\mathbb{Z}$-indexed tape), then its head movements should form a group and act on $\mathcal{K}$ via group actions.

For a neural memory system, we desire the network to be (almost everywhere) differentiable. The notion of "differentiable" groups is well studied in mathematics, where they are known as Lie groups, and "differentiable group actions" are correspondingly called Lie group actions. In our case, using Lie group actions as generalized head movements on a general key space (more accurately, a manifold) would most importantly mean that we can take derivatives of these movements and perform the usual backpropagation algorithm.

4 LIE-ACCESS NEURAL TURING MACHINES

These properties motivate us to propose Lie access as an alternative formalism to popular neural memory systems, such as probabilistic tapes, which surprisingly do not satisfy invertibility and often do not provide an identity.[2] Our Lie-access memory will consist of a set of points in a manifold $\mathcal{K}$. We replace the discrete head with a continuous head $q \in \mathcal{K}$. The head moves based on a set of Lie group actions $a \in A$ generated by the controller. To read memories, we will rely on a distance measure in this space, $d : \mathcal{K} \times \mathcal{K} \to \mathbb{R}_{\geq 0}$.[3] Together these properties describe a general class of possible neural memory architectures.

[2] The Markov kernel convolutional soft head shift mechanism proposed in Graves et al. (2014) and sketched in Section 2 does not in general have inverses. Indeed, the authors reported problems with the soft head losing "sharpness" over time, which they dealt with by sharpening coefficients. In the follow-up work, Graves et al. (2016) utilize a temporal memory link matrix for actions. They note, "the operation $Lw$ smoothly shifts the focus forwards to the locations written ... whereas $L^{\top}w$ shifts the focus backwards", but do not enforce this as a true inverse. They also explicitly do not include an identity, noting "Self-links are excluded (the diagonal of the link matrix is always 0)"; however, they could ignore the link matrix with an interpolation gate, which in effect acts as the identity.

[3] This metric should satisfy a compatibility relation with the Lie group action. When points $x, y \in X$ are simultaneously moved by the same Lie group action $v$, their distance should stay the same (one possible mathematical formalization is that $X$ should be a Riemannian manifold and the Lie group should be a subgroup of $X$'s isometry group): $d(vx, vy) = d(x, y)$. This condition ensures that if the machine writes a sequence of data along a "straight line" at points $x, vx, v^2x, \ldots, v^kx$, then it can read the same sequence by emitting a read location $y$ close to $x$ and then following the "$v$-trail" $y, vy, v^2y, \ldots, v^ky$.

Formally, a Lie-access neural Turing machine (LANTM) computes the following function:
$$\rho,\ q',\ q'^{(w)},\ \Sigma' := \mathrm{RW}(\Sigma, q, q^{(w)}, h),$$
where $q, q^{(w)} \in \mathcal{K}$ are respectively the read and write heads, and $\Sigma$ is the memory itself. We implement $\Sigma$, as above, as a weighted dictionary $\Sigma = \{(k_i, v_i, s_i)\}_i$.

4.1 ADDRESSING PROCEDURE

The LANTM maintains a read head $q$ which at every step is first updated to $q'$ and then used to read from the memory table. This update occurs by selecting a Lie group action from $A$, which then acts smoothly on the key space $\mathcal{K}$. We parametrize the action transformation, $a : \mathcal{H} \to A$, by the hidden state to produce the Lie action, $a(h) \in A$. In the simplest case, the head is then updated based on this action (here $\cdot$ denotes the group action): $q' := a(h) \cdot q$.

For instance, consider two possible Lie groups:

(1) A shift group $\mathbb{R}^2$ acting additively on $\mathbb{R}^2$. This means that $A = \mathbb{R}^2$, so that $a(h) = (\alpha, \beta)$ acts upon a head $q = (x, y)$ by
$$a(h) \cdot q = (\alpha, \beta) + (x, y) = (x + \alpha,\ y + \beta).$$

(2) A rotation group $SO(3)$ acting on the sphere $S^2 = \{v \in \mathbb{R}^3 : \|v\| = 1\}$. Each rotation can be described by its axis $\hat{v}$ (a unit vector) and angle $\theta$. An action $(\hat{v}, \theta) \cdot q$ is just the appropriate rotation of the point $q$, and is given by Rodrigues' rotation formula,
$$a(h) \cdot q = (\hat{v}, \theta) \cdot q = q\cos\theta + (\hat{v} \times q)\sin\theta + \hat{v}\,\langle\hat{v}, q\rangle\,(1 - \cos\theta).$$
Here $\times$ denotes the cross product.
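A quick numerical sanity check of the group structure, and of the metric compatibility required by footnote 3, using the $SO(3)$ action above (our own sketch):

```python
import numpy as np

def rotate(q, axis, theta):
    # Rodrigues' formula for the SO(3) action on the sphere S^2.
    v = axis / np.linalg.norm(axis)
    return (q * np.cos(theta) + np.cross(v, q) * np.sin(theta)
            + v * np.dot(v, q) * (1.0 - np.cos(theta)))

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)
axis, theta = rng.normal(size=3), 0.7

# Identity, inverse, and isometry (d(vx, vy) = d(x, y)) all hold:
assert np.allclose(rotate(x, axis, 0.0), x)
assert np.allclose(rotate(rotate(x, axis, theta), axis, -theta), x)
assert np.isclose(np.linalg.norm(rotate(x, axis, theta) - rotate(y, axis, theta)),
                  np.linalg.norm(x - y))
```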
4.2 READING AND WRITING MEMORIES

Recall that memories are stored in $\Sigma$, each with a key $k_i$, a memory vector $v_i$, and a strength $s_i$, and that memories are read using linear smoothing over vectors based on a key weighting function $w$: $\rho := \sum_i w_i(q', \Sigma)\, v_i$. While there are many possible weighting schemes, we use one based on the distance of each memory address from the head in key space, assuming a metric $d$ on $\mathcal{K}$. We consider two different weighting functions: (1) inverse-square and (2) softmax. The first uses a polynomial law and the second an annealed softmax of the squared distances:
$$w^{(1)}_i(q, \Sigma) := \frac{s_i\, d(q, k_i)^{-2}}{\sum_j s_j\, d(q, k_j)^{-2}}, \qquad w^{(2)}_i(q, \Sigma, T) := \frac{s_i \exp(-d(q, k_i)^2 / T)}{\sum_j s_j \exp(-d(q, k_j)^2 / T)},$$
where we use the convention that $w^{(1)}$ takes its limit value when $q \to k_i$, and $T$ is a temperature that represents the certainty of the reading, i.e. higher $T$ creates a more uniform $w$.

The writing procedure is similar to reading. The LANTM maintains a separate write head $q^{(w)}$ that moves analogously to the read head, i.e. with an action function $a^{(w)}(h)$ and updated value $q'^{(w)}$. At each call to RW, a new memory is automatically appended to $\Sigma$ with $k = q'^{(w)}$. The corresponding memory $v$ and strength $s$ are created by MLPs $v(h) \in \mathbb{R}^m$ and $s(h) \in [0, 1]$ taking $h$ as input. After writing, the new memory set is
$$\Sigma' := \Sigma \cup \{(q'^{(w)}, v(h), s(h))\}.$$
No explicit erase mechanism is provided, but to erase a memory $(k, v, s)$, the controller may in theory write $(k, -v, s)$.

[Figure 1: Retrieval of a value from memory via a key. Weightings with unit sum are assigned to different memories depending on the distances from the addresses $k_i$ on the key manifold $\mathcal{K}$ to the read key $q$; linear smoothing over the memory vectors $v_i$ is used to emit the final read value. Both the inverse-square and softmax schemes follow this method, but differ in their computations of the weightings.]
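The write-to-erase trick just mentioned, writing $(k, -v, s)$ on top of an existing $(k, v, s)$, can be seen in a toy calculation under the inverse-square weighting (our sketch):

```python
import numpy as np

def smooth_read(q, mem, eps=1e-12):
    # mem is a list of (key, value, strength) triples; inverse-square weights.
    ks, vs, ss = (np.array(z) for z in zip(*mem))
    w = ss / (np.sum((ks - q) ** 2, axis=1) + eps)
    return (w / w.sum()) @ vs

k1, k2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([1.0, 2.0]), np.array([3.0, -1.0])
mem = [(k1, v1, 1.0), (k2, v2, 1.0)]
erased = mem + [(k1, -v1, 1.0)]   # "erase" v1 by writing its negation at k1
print(smooth_read(np.array([0.2, 0.0]), mem))
print(smooth_read(np.array([0.2, 0.0]), erased))  # v1's contribution cancels,
# though the pair's weight mass still appears in the normalization.
```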
4.3 COMBINING WITH RANDOM ACCESS

Finally, we combine this relative addressing procedure with direct random access to give the model the ability of absolute address access. We do this by outputting an absolute address each step and simply interpolating it with our current head. Write $t(h) \in [0, 1]$ for the interpolation gate and $\tilde{q}(h) \in \mathcal{K}$ for our proposed random-access layer. For key space manifolds $\mathcal{K}$ like $\mathbb{R}^n$,[4] there is a well-defined straight-line interpolation between two points, so we can set
$$q' := a \cdot \big(t\,q + (1 - t)\,\tilde{q}\big),$$
where we have omitted the implied dependence on $h$. For other manifolds, like the spheres $S^n$, that have well-behaved projection functions $\pi : \mathbb{R}^n \to S^n$, we can just project the straight-line interpolation to the sphere:
$$q' := a \cdot \pi\big(t\,q + (1 - t)\,\tilde{q}\big).$$
In the case of a sphere $S^n$, $\pi$ is just $L^2$-normalization.[5]

[4] Or, in general, manifolds with convex embeddings in $\mathbb{R}^n$.

[5] Technically, in the sphere case, $\mathrm{dom}\,\pi = \mathbb{R}^d \setminus \{0\}$. But in practice one almost never gets 0 from a straight-line interpolation, so computationally this makes little difference.
5 EXPERIMENTS

We experiment with Lie-access memory on a variety of algorithmic learning tasks. We are particularly interested in: (a) how Lie-access memory can be trained, (b) whether it can be effectively utilized for algorithmic learning, and (c) what internal structures the model learns compared to systems based directly on soft discrete memory. In particular, Lie access is not equipped with an explicit stack or tape, so it would need to learn continuous patterns that capture these properties.

Setup. Our experiments utilize an LSTM controller in a version of the encoder-decoder setup (Sutskever et al., 2014), i.e. an encoding input pass followed by a decoding output pass. The encoder reads and writes memories at each step; the decoder only reads memories. The encoder is given <s>, followed by the input sequence, and then </s> to terminate the input. The decoder is not re-fed its output or the correct symbol, i.e. we do not use teacher forcing, so $x^{(t)}$ is a fixed placeholder input symbol. The decoder must correctly emit an end-of-output symbol </e> to terminate.

Models and Baselines. We implement three main baseline models: (a) a standard LSTM encoder-decoder, without explicit external memory; (b) a random-access memory network, RAM, using the key-value formulation as described in the background, roughly analogous to an attention-based encoder-decoder; and (c) an interpolation of a RAM/Tape-based memory network as described in the background, i.e. a highly simplified version of a true NTM (Graves et al., 2014) with a sharpening parameter. Our models include four versions of Lie-access memory. The main model, LANTM, has an LSTM controller, with a shift group $A = \mathbb{R}^2$ acting additively on the key space $\mathcal{K} = \mathbb{R}^2$. We also consider a model SLANTM with spherical memory, utilizing a rotation group $A = SO(3)$ acting on keys in the sphere $\mathcal{K} = S^2$. For both of the models, the distance function $d$ is the Euclidean ($L^2$) distance, and we experiment with smoothing using the inverse-square scheme (default) and with an annealed softmax.

Model Setup. For all tasks, the LSTM baseline has 1 to 4 layers, each with 256 cells. Each of the other models has a single-layer, 50-cell LSTM controller, with memory width (i.e. the size of each memory vector) 20. Other parameters such as learning rate, decay, and initialization are found through grid search. Further hyperparameter details are given in the appendix.

Tasks. Our experiments are on a series of algorithmic tasks shown in Table 1a. The COPY, REVERSE, and BIGRAM FLIP tasks are based on Grefenstette et al. (2015); the DOUBLE and INTERLEAVED ADD tasks are designed in a similar vein. Additionally we also include three harder tasks: ODD FIRST, REPEAT COPY, and PRIORITY SORT. In ODD FIRST, the model must output the odd-indexed elements first, followed by the even-indexed elements. In REPEAT COPY, each model must repeat a sequence of length 20, N times. In PRIORITY SORT, each item of the input sequence is given a priority, and the model must output the items in priority order.

We train each model in two regimes, one with a small number of samples (16K) and one with a large number of samples (320K). In the former case, the samples are iterated through 20 times, while in the latter, the samples are iterated through only once. Thus in both regimes the total training times are the same. Training is done by minimizing negative log-likelihood with RMSProp.

Prediction is performed via argmax/greedy prediction at each step. To evaluate the performance of the models, we compute the fraction of tokens correctly predicted and the fraction of all answers completely correctly predicted, respectively called fine and coarse scores. We assess the models on 3.2K randomly generated out-of-sample 2x-length examples, i.e. with sequence lengths 2k (or repeat number 2N in the case of REPEAT COPY), to test the generalization of the system. More precisely, for all tasks other than REPEAT COPY, during training the length k is varied in the interval $[l_k, u_k]$ (as shown in Table 1a), and during test time the length k is varied in the range $[u_k + 1, 2u_k]$. For REPEAT COPY, the repetition number N is varied similarly, instead of k.
Results. Main results comparing the different memory systems and read computations on a series of tasks are shown in Table 1b. Consistent with previous work, the fixed-memory LSTM system fails consistently when required to generalize to the 2x samples: it is unable to solve any 2x problem correctly, and is only able to predict at most 50% of the symbols for all tasks except interleaved addition, regardless of training regime. The RAM (attention-based) and the RAM/Tape hybrid are much stronger baselines, answering more than 50% of the characters correctly for all but the 6-ODD FIRST task. Perhaps surprisingly, RAM and RAM/Tape learned the 7-REPEAT COPY task with almost perfect generalization scores when trained in the large-sample regime. In general, it does not seem that the simple tape memory confers much advantage to the RAM model, as the generalization performances of both models are similar for the most part, which motivates more advanced NTM enhancements beyond sharpening.

The last four columns illustrate the performance of the LANTM models. We found the inverse-square LANTM and SLANTM models to be the most effective, achieving >90% generalization accuracy[6] on most tasks, and together they solve all of the tasks here with >90% coarse score. In particular, LANTM is able to solve the 6-ODD FIRST problem when no other model can correctly solve even 20% of the 2x instances; SLANTM, on the other hand, is the only Lie-access model able to solve the 7-REPEAT COPY problem.

The best Lie-access model trained with the small-sample regime beats or is competitive with any of the baselines trained under the large-sample regime. In all tasks other than 7-REPEAT COPY, the gap in the coarse score between the best Lie-access model in the small-sample regime and the best baseline in any sample regime is at least 70%. However, in most cases, training under the large-sample regime does not improve much. For a few tasks, the small-sample regime actually produces a model with better generalization than the large-sample regime. We observed that in these instances, the generalization error curve under the large-sample regime reaches an optimum at around 2/3 to 3/4 of training time, and then increases almost monotonically from there. Thus, the model has likely found an algorithm that works only for the training sizes; in particular, this phenomenon does not seem to be due to lack of training time.

[6] Note that the read-weight calculation of a SLANTM with softmax is essentially the same as the RAM model: for head $q$, $\exp(-d(q, k_i)^2/T) = \exp(-\|q - k_i\|^2/T) = \exp(-(2 - 2\langle q, k_i\rangle)/T)$, where the last equality comes from $\|q\| = \|k_i\| = 1$ (the key space is on the sphere). Therefore the weights are $w_i = \frac{s_i \exp(-d(q, k_i)^2/T)}{\sum_j s_j \exp(-d(q, k_j)^2/T)} = \frac{s_i \exp(2\langle q, k_i\rangle/T)}{\sum_j s_j \exp(2\langle q, k_j\rangle/T)}$, which is the RAM weighting scheme.

Table 1a: Task descriptions and parameters. |ak ... a1| means the decimal number represented by the decimal digits ak ... a1. Arithmetic tasks have all numbers formatted with the least significant digits on the left and with zero padding. The DOUBLE task takes an integer x in [0, 10^k) padded to k digits and outputs 2x in k+1 digits, zero padded to k+1 digits. The INTERLEAVED ADD task takes two integers x, y in [0, 10^k) padded to k digits and interleaved, forming a length-2k input sequence, and outputs x+y zero padded to k+1 digits. The last two tasks use numbers in unary format: unary(N) is shorthand for a length-N sequence of a special symbol @, encoding N in unary, e.g. unary(3) = @@@.

Task                  Input                             Output                                      Size k       |V|
1 - COPY              a1 a2 a3 ... ak                   a1 a2 a3 ... ak                             [2, 64]      128
2 - REVERSE           a1 a2 a3 ... ak                   ak ak-1 ak-2 ... a1                         [2, 64]      128
3 - BIGRAM FLIP       a1 a2 a3 a4 ... a2k-1 a2k         a2 a1 a4 a3 ... a2k a2k-1                   [1, 16]      128
4 - DOUBLE            a1 a2 ... ak                      2 x |ak ... a1|                             [2, 40]      10
5 - INTERLEAVED ADD   a1 a2 a3 a4 ... a2k-1 a2k         |a2k a2k-2 ... a2| + |a2k-1 a2k-3 ... a1|   [2, 16]      10
6 - ODD FIRST         a1 a2 a3 a4 ... a2k-1 a2k         a1 a3 ... a2k-1 a2 a4 ... a2k               [1, 16]      128
7 - REPEAT COPY       unary(N) a1 ... a20               a1 ... a20 a1 ... a20 (N times)             N in [1, 5]  128
8 - PRIORITY SORT     unary(5) a5 unary(2) a2 ...       a1 a2 a3 ... ak                             [2, 10]      128

Table 1b: Main results. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2x tasks. The S and L columns respectively indicate the small- and large-sample training regimes. The symbol * indicates exact 100% accuracy (fine scores above 99.5 are not rounded up). Baselines are described in the body. LANTM and SLANTM use the inverse-square scheme, while LANTM-s and SLANTM-s use the softmax weighting scheme. The best scores, if not 100% (denoted by stars), are bolded in the original for each of the small- and large-sample regimes.

Task   LSTM         RAM           RAM/Tape      LANTM         LANTM-s       SLANTM        SLANTM-s
       S     L      S      L      S      L      S      L      S      L      S      L      S      L
1    16/0  21/0   61/0   61/1   70/2   70/1     *      *      *      *      *      *      *      *
2    26/0  32/0   58/2   54/2   24/1   43/2     *      *    97/44  98/88  99/96    *      *      *
3    30/0  39/0   56/5   54/9   64/8   69/9     *      *      *    99/94  99/99  97/67  93/60  90/43
4    44/0  47/0   72/8   74/15  70/12  71/6     *      *      *      *      *      *      *      *
5    60/0  61/0   74/13  76/17  77/23  67/19  99/93  99/93  90/38  94/57  99/91  99/97  98/78    *
6    29/0  42/0   31/5   46/4   43/8   62/8   99/91  99/95  90/29  50/0   49/7   56/8   74/15  76/16
7    24/0  37/0   98/56  99/98  71/18  99/93  67/0   70/0   17/0   48/0   99/91  99/78  96/41  99/51
8    46/0  53/0   60/5   80/22  78/15  66/9   87/35  98/72  99/95  99/99    *    99/99  98/79    *
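Footnote 6's reduction, that the softmax weighting on the sphere coincides with the RAM dot-product weighting, is easy to verify numerically (our sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.normal(size=3); q /= np.linalg.norm(q)
keys = rng.normal(size=(5, 3)); keys /= np.linalg.norm(keys, axis=1, keepdims=True)
s, T = rng.uniform(0.1, 1.0, size=5), 0.5

sphere = s * np.exp(-np.sum((keys - q) ** 2, axis=1) / T)
ram = s * np.exp(2.0 * keys @ q / T)
# exp(-(2 - 2<q, k>)/T) is proportional to exp(2<q, k>/T) for unit vectors,
# so both normalize to the same weights.
assert np.allclose(sphere / sphere.sum(), ram / ram.sum())
```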
6 DISCUSSION

Qualitative Analysis. We did further visual analysis of the different Lie-access techniques to see how the models were learning the underlying tasks, and to verify that they were using the relative addressing scheme. Figure 2 shows two diagrams of the LANTM model on the priority-sort and repeat-copy tasks. Figure 3 shows two diagrams of the SLANTM model for the same two tasks. Figure 4 shows the memory access pattern of LANTM on the 6-ODD FIRST task.

[Figure 2: Analysis of the LANTM model. (a) PCA projection from key space $\mathbb{R}^2$ to 1D of the memories and read heads $q$ of LANTM for the unary 8-PRIORITY SORT task. In this task, the encoder reads a priority, encoded in unary, and then a value; the decoder must output these values in priority order. In this example the sequence is [@, @, 79, @, @, @, @, 98, @, 5, @, @, @, 107, @, 119], where the special symbol @ is a unary encoding of the priority. From top to bottom, each row indicates the movement of the encoder write head $q^{(w)}$ as it is fed each input character. Fill indicates the strength $s_i$ of the memory write (black indicates high strength). The position of a dot within its row indicates the PCA projection of the key $k_i$. The last line indicates the movement of the decoder read head $q$. Interestingly, we note that, instead of writing to memory, the controller remembers the item 119 itself. (b) Raw coordinates in key space $\mathbb{R}^2$ of writes (red) and reads (blue) from LANTM on 7-REPEAT COPY. The red line indicates the writes, which occur along a straight line during the encoding phase. The blue line indicates the reads, which zip back and forth in the process of copying the input sequence 6 times.]

[Figure 3: Analysis of the SLANTM model; panels show encoder writes and decoder reads. (a) PCA projection from the spherical key space $S^2$ to 2D of the memories and read heads $q$ of SLANTM for the task of 7-REPEAT COPY. Here the model is to repeatedly output the sequence 10 times. The input is 10 repetitions of the special symbol @ followed by [28, 74, 43, 102, 88, 39, ...]. Left: the positions of the write head $q^{(w)}$ during the encoding phase. Fill indicates strength $s_i$ (black means high strength); the number indicates the character stored. SLANTM traverses a circle clockwise starting at point 28, and stores data at regular intervals. Right: the positions of the read head $q$ during the decoding phase. Starting from the blue dot, the reads move clockwise around the sphere, and end at the red dot. For the sake of clarity, read positions are indicated by bends in the blue line, instead of by dots. Intriguingly, the model implements a cyclic-list data structure, taking advantage of the spherical structure of the memory. (b) Raw coordinates in key space $S^2$ of writes (red) and reads (blue) from SLANTM on a non-unary-encoded variant of the priority-sort task. The red line indicates the movements of the write head $q^{(w)}$, which places points along a sub-manifold of $\mathcal{K}$ (an arc of $S^2$) during the encoding phase. Notably, this movement is not sequential, but random-access, so as to store elements in correct priority order. The blue line indicates the simple traversal of this arc during decoding.]

[Figure 4: Memory access pattern of LANTM on 6-ODD FIRST. Left: in the middle of training, LANTM learns to store data in a zigzag such that odd-indexed items fall on one side and even-indexed items fall on the other; however, reading is only half correct. Right: after training, during reading, the model simply reads the odd-indexed items in a straight line, followed by the even-indexed items in a parallel line.]

Additionally, animations tracing the evolution of the memory access patterns of the models over training time can be found at http://nlp.seas.harvard.edu/lantm. They demonstrate that the models indeed learn relative addressing and internally construct geometric data structures to solve these algorithmic tasks.

Unbounded storage. One possible criticism of the LANTM framework could be that the amount of information stored increases linearly with time, which limits the usefulness of this framework for long-timescale tasks. This is indeed the case with our implementations, but need not be the case in general. There can be many ways of limiting physical memory usage. For example, a simple way is to discard the least recently used memory, as in the work of Graves et al. (2016) and Gulcehre et al. (2016). Another way is to approximate, with a fixed number of bits, the read function that takes a head position and returns the read value. For example, noting that this function is a rational function of the head position, keys, and memory vectors, we can approximate the numerators and denominators with fixed-degree polynomials.

Content address. Our Lie-access framework is not mutually exclusive with content-addressing methods. For example, in each of our implementations, we could have the controller output both a position in the key space and a content addresser of the same size as the memory vectors, and interpolate the read values from Lie access and the read values from content addressing.
7 CONCLUSION

This paper introduces Lie-access memory as an alternative neural memory access paradigm, and explores several different implementations of this approach. LANTMs follow similar axioms as discrete Turing machines while providing differentiability. Experiments show that simple models can learn algorithmic tasks. Internally, these models naturally learn equivalents of standard data structures like stacks and cyclic lists. In future work we hope to experiment with more groups and to scale these methods to more difficult reasoning tasks. For instance, we hope to build a general-purpose encoder-decoder model for tasks like question answering and machine translation that makes use of differentiable relative-addressing schemes to replace RAM-style attention. | rkhhu9bEg | interesting new | 6: Marginally above acceptance threshold | The paper proposes a new memory access scheme based on Lie group actions for NTMs.
Pros:
* Well written
* Novel addressing scheme as an extension to NTM.
* Seems to work slightly better than normal NTMs.
* Some interesting theory about the novel addressing scheme based on Lie groups.
Cons:
* In the results, the LANTM only seems to be slightly better than the normal NTM.
* The result tables are a bit confusing.
* No source code available.
* The difference to the properties of a normal NTM doesn't become too clear. Especially, it is said that LANTMs are better than NTMs because they are differentiable end-to-end and provide a robust relative indexing scheme, but NTMs are also differentiable end-to-end and also provide a robust indexing scheme.
* It is said that the head is discrete in the NTM, but actually it is in the space R^n, i.e. it is already continuous. It doesn't become clear what is meant here.
* No tests on real-world tasks, only some toy tasks.
* No comparisons to some of the other NTM extensions such as D-NTM or Sparse Access Memory (SAM) (https://arxiv.org/abs/1610.09027). Although the motivations of other NTM extensions might be different, such comparisons still would have been interesting.
| 3: The reviewer is fairly confident that the evaluation is correct |
Bygq-H9eg | ICLR.cc/2017/conference | 2017 | An Analysis of Deep Neural Network Models for Practical Applications | ["Alfredo Canziani", "Adam Paszke", "Eugenio Culurciello"] | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs. | ["Computer vision", "Deep learning", "Applications"] | ABSTRACT

Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.

1 INTRODUCTION

Since the breakthrough in the 2012 ImageNet competition (Russakovsky et al., 2015) achieved by AlexNet (Krizhevsky et al., 2012), the first entry that used a Deep Neural Network (DNN), several other DNNs with increasing complexity have been submitted to the challenge in order to achieve better performance.

In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time. We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensembling of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions evaluate their (ensembles of) models a different number of times on the validation images, and therefore the reported accuracy is biased by the specific sampling technique (and ensemble size).
Thirdly, there is currently no incentive to speed up inference time, which is a key element in practical applications of these models and affects resource utilisation, power consumption, and latency.

This article aims to compare state-of-the-art DNN architectures, submitted to the ImageNet challenge over the last 4 years, in terms of computational requirements and accuracy. We compare these architectures on multiple metrics related to resource utilisation in actual deployments: accuracy, memory footprint, parameters, operations count, inference time and power consumption. The purpose of this paper is to stress the importance of these figures, which are essential hard constraints for the optimisation of these networks in practical deployments and applications.

2 METHODS

In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilisation. For example, central-crop (top-5 validation) errors of a single run of VGG-16[1] (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014) are 8.70% and 10.07% respectively, revealing that VGG-16 performs better than GoogLeNet. When models are run with 10-crop sampling,[2] the errors become 9.33% and 9.15% respectively, and therefore VGG-16 will perform worse than GoogLeNet, using a single central crop. For this reason, we decided to base our analysis on re-evaluations of top-1 accuracies[3] for all networks with a single central-crop sampling technique (Zagoruyko, 2016).

[1] In the original paper this network is called VGG-D, which is the best-performing network. Here we prefer to highlight the number of layers utilised, so we will call it VGG-16 in this publication.
[2] From a given image multiple patches are extracted: the four corners plus the central crop, and their horizontally mirrored twins.
[3] Accuracy and error rate always sum to 100, therefore in this paper they are used interchangeably.

[Figure 1: Top-1 vs. network. Single-crop top-1 validation accuracies for top-scoring single-model architectures. We introduce with this chart our choice of colour scheme, which will be used throughout this publication to distinguish effectively different architectures and their correspondent authors. Notice that networks of the same group share the same hue; for example, the ResNets are all variations of pink.]

[Figure 2: Top-1 vs. operations, size/parameters. Top-1 one-crop accuracy versus the amount of operations required for a single forward pass. The size of the blobs is proportional to the number of network parameters; a legend is reported in the bottom right corner, spanning from 5 × 10^6 to 155 × 10^6 params. Both these figures share the same y-axis, and the grey dots highlight the centre of the blobs.]

For inference time and memory usage measurements we have used Torch7 (Collobert et al., 2011) with cuDNN-v5 (Chetlur et al., 2014) and the CUDA-v8 back-end. All experiments were conducted on a JetPack-2.3 NVIDIA Jetson TX1 board (nVIDIA): an embedded visual computing system with a 64-bit ARM A57 CPU, a 1 T-Flop/s 256-core NVIDIA Maxwell GPU and 4 GB LPDDR4 of shared RAM. We use this resource-limited device to better underline the differences between network architectures, but similar results can be obtained on most recent GPUs, such as the NVIDIA K40 or Titan X, to name a few. Operation counts were obtained using an open-source tool that we developed (Paszke, 2016).
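The operation count for a network can be estimated layer by layer; a rough sketch of the idea for convolutional and linear layers (our simplification, not the tool of Paszke (2016); we count one multiply-add as two operations):

```python
def conv2d_ops(c_in, c_out, k, h_out, w_out):
    # Each output element needs c_in * k * k multiply-adds.
    return 2 * c_in * k * k * c_out * h_out * w_out

def linear_ops(n_in, n_out):
    return 2 * n_in * n_out

# E.g. AlexNet's first layer: 3 -> 96 channels, 11x11 kernel, 55x55 output.
print(conv2d_ops(3, 96, 11, 55, 55) / 1e9, "G-Ops")
```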
For measuring the power consumption, a Keysight 1146B Hall-effect current probe has been used with a Keysight MSO-X 2024A 200 MHz digital oscilloscope with a sampling period of 2 µs and a 50 kSa/s sample rate. The system was powered by a Keysight E3645A GPIB-controlled DC power supply.

3 RESULTS

In this section we report our results and comparisons. We analysed the following DNNs: AlexNet (Krizhevsky et al., 2012), batch-normalised AlexNet (Zagoruyko, 2016), batch-normalised Network In Network (NIN) (Lin et al., 2013), ENet (Paszke et al., 2016) for ImageNet (Culurciello, 2016), GoogLeNet (Szegedy et al., 2014), VGG-16 and -19 (Simonyan & Zisserman, 2014), ResNet-18, -34, -50, -101 and -152 (He et al., 2015), Inception-v3 (Szegedy et al., 2015) and Inception-v4 (Szegedy et al., 2016), since they obtained the highest performance, in these four years, on the ImageNet (Russakovsky et al., 2015) challenge.

[Figure 3: Inference time vs. batch size. This chart shows inference time across different batch sizes with a logarithmic ordinate and logarithmic abscissa. Missing data points are due to a lack of enough system memory required to process larger batches. A speed-up of 3x is achieved by AlexNet due to better optimisation of its fully connected layers for larger batches.]

[Figure 4: Power vs. batch size. Net power consumption (due only to the forward processing of several DNNs) for different batch sizes. The idle power of the TX1 board, with no HDMI screen connected, was 1.30 W on average. The max frequency component of the power supply current was 1.4 kHz, corresponding to a Nyquist sampling frequency of 2.8 kHz.]

3.1 ACCURACY

Figure 1 shows one-crop accuracies of the most relevant entries submitted to the ImageNet challenge, from the AlexNet (Krizhevsky et al., 2012), on the far left, to the best-performing Inception-v4 (Szegedy et al., 2016). The newest ResNet and Inception architectures surpass all other architectures by a significant margin of at least 7%.

Figure 2 provides a different, but more informative, view of the accuracy values, because it also visualises computational cost and the number of network parameters. The first thing that is very apparent is that VGG, even though it is widely used in many applications, is by far the most expensive architecture, both in terms of computational requirements and number of parameters. Its 16- and 19-layer implementations are in fact isolated from all other networks. The other architectures form a steep straight line that seems to start to flatten with the latest incarnations of Inception and ResNet. This might suggest that models are reaching an inflection point on this data set.
At this inflection point, the costs, in terms of complexity, start to outweigh the gains in accuracy. We will later show that this trend is hyperbolic.

3.2 INFERENCE TIME

Figure 3 reports inference time per image on each architecture, as a function of image batch size (from 1 to 64). We notice that VGG processes one image in about a fifth of a second, making it a less likely contender in real-time applications on an NVIDIA TX1. AlexNet shows a speed-up of roughly 3x going from a batch of 1 to 64 images, due to weak optimisation of its fully connected layers. This is a very surprising finding, which will be further discussed in the next subsection.

3.3 POWER

Power measurements are complicated by the high-frequency swings in current consumption, which required a high-sampling-rate current read-out to avoid aliasing. In this work, we used a 200 MHz digital oscilloscope with a current probe, as reported in section 2. Other measuring instruments, such as an AC power strip with a 2 Hz sampling rate, or a GPIB-controlled DC power supply with a 12 Hz sampling rate, did not provide enough bandwidth to properly conduct power measurements.

In figure 4 we see that the power consumption is mostly independent of the batch size. Low power values for AlexNet (batch of 1) and VGG (batch of 2) are associated with slower forward times per image, as shown in figure 3.

Figure 5: Memory vs. batch size. Maximum system memory utilisation for batches of different sizes. Memory usage shows a knee graph, due to the static allocation of the network model and the variable memory used by the batch size.

Figure 6: Memory vs. parameters count. Detailed view on static parameter allocation and corresponding memory utilisation. Minimum memory of 200 MB, linear afterwards with slope 1.30.

Figure 7: Operations vs. inference time, size/parameters. Relationship between operations and inference time, for batches of size 1 and 16 (the biggest size for which all architectures can still run). Not surprisingly, we notice a linear trend, and therefore operations count represents a good estimation of inference time. Furthermore, we can notice an increase in the slope of the trend for larger batches, which corresponds to shorter inference times due to batch processing optimisation.

3.4 MEMORY

We analysed system memory consumption of the TX1 device, which uses shared memory for both CPU and GPU. Figure 5 shows that the maximum system memory usage is initially constant and then rises with the batch size. This is due to the initial memory allocation of the network model, which is the large static component, and the contribution of the memory required while processing the batch, which increases proportionally with the number of images.
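The description above suggests a simple two-part memory model: a static allocation of roughly max(200 MB, 1.30 x parameter size) plus a component that grows linearly with the batch. A hedged sketch of that model (the per-image term is a free placeholder, not a value from the paper):

    def est_memory_mb(params_mb, batch_size, per_image_mb):
        # floor of ~200 MB and slope of 1.30, as read off figures 5 and 6
        static = max(200.0, 1.30 * params_mb)
        return static + per_image_mb * batch_size

    # e.g. a 100 MB model with a hypothetical 15 MB of working memory per image:
    print(est_memory_mb(100.0, 16, 15.0))  # -> 440.0 MB

The per-image coefficient depends on the activation sizes of the specific network, which is why the knee in figure 5 occurs at different batch sizes for different architectures.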
In figure 6 we can also notice that the initial allocation never drops below 200 MB for networks sized below 100 MB, and that it is linear afterwards with respect to the parameters, with a slope of 1.30.

3.5 OPERATIONS

The operations count is essential for establishing a rough estimate of inference time and hardware circuit size, in the case of custom implementations of neural network accelerators. In figure 7, for a batch of 16 images, there is a linear relationship between operations count and inference time per image. Therefore, at design time, we can pose a constraint on the number of operations to keep processing speed in a usable range for real-time applications or resource-limited deployments.

Figure 8: Operations vs. power consumption, size/parameters. The independence of power and operations is shown by a lack of directionality of the distributions shown in these scatter charts. Full resource utilisation and lower inference time for the AlexNet architecture are reached with larger batches.

Figure 9: Accuracy vs. inferences per second, size/operations. A non-trivial linear upper bound is shown in these scatter plots, illustrating the relationship between prediction accuracy and throughput of all examined architectures. These are the first charts in which the area of the blobs is proportional to the amount of operations, instead of the parameter count. We can notice that larger blobs are concentrated on the left side of the charts, in correspondence with low throughput, i.e. longer inference times. Most of the architectures lie on the linear interface between the grey and white areas. If a network falls in the shaded area, it means it achieves exceptional accuracy or inference speed. The white area indicates a suboptimal region. E.g. both AlexNet architectures improve processing speed as larger batches are adopted, gaining 80 Hz.

3.6 OPERATIONS AND POWER

In this section we analyse the relationship between power consumption and the number of operations required by a given model. Figure 8 reports that there is no specific power footprint for different architectures. When full resource utilisation is reached, generally with larger batch sizes, all networks consume roughly an additional 11.8 W, with a standard deviation of 0.7 W. Idle power is 1.30 W. This corresponds to the maximum system power at full utilisation. Therefore, if energy consumption is one of our concerns, for example for battery-powered devices, one can simply choose the slowest architecture which satisfies the application's minimum requirements.

3.7 ACCURACY AND THROUGHPUT

We note that there is a non-trivial linear upper bound between accuracy and the number of inferences per unit time. Figure 9 illustrates that, for a given frame rate, the maximum accuracy that can be achieved is linearly proportional to the frame rate itself. All networks analysed here come from several publications, and have been independently trained by other research groups. A linear fit of the accuracy shows that all architectures trade accuracy against speed.
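The selection rule from section 3.6, choose the slowest architecture that still satisfies the application's minimum requirements, is easy to mechanise; both the numbers and the helper below are illustrative, not values read off the figures:

    # (throughput in images/s, top-1 accuracy in %), all values hypothetical
    candidates = {
        "net-A": (120.0, 58.0),
        "net-B": (45.0, 69.0),
        "net-C": (12.0, 76.0),
    }

    def most_accurate_at(candidates, min_fps):
        feasible = {name: acc for name, (fps, acc) in candidates.items()
                    if fps >= min_fps}
        return max(feasible, key=feasible.get) if feasible else None

    print(most_accurate_at(candidates, 30.0))  # -> net-B

Since power is roughly constant across architectures, picking the slowest feasible network also picks the most energy-hungry per image, which is why the energy-constrained variant of this rule in the next paragraphs matters for battery-powered devices.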
Moreover, having chosen a specific inference time, one can now come up with the theoretical accuracy upper bound when resources are fully utilised, as seen in section 3.6. Since the power consumption is constant, we can even go one step further, and obtain an upper bound in accuracy even for an energy constraint, which could possibly be an essential design factor for a network that needs to run on an embedded system.

Figure 10: Accuracy per parameter vs. network. Information density (accuracy per parameter) is an efficiency metric that highlights the capacity of a specific architecture to better utilise its parametric space. Models like VGG and AlexNet are clearly oversized, and do not take full advantage of their potential learning ability. On the far right, ResNet-18, BN-NIN, GoogLeNet and ENet (marked by grey arrows) do a better job at "squeezing" all their neurons to learn the given task, and are the winners of this section.

As the spoiler in section 3.1 already gave away, the linear nature of the accuracy vs. throughput relationship translates into a hyperbolic one when the forward inference time is considered instead. Then, given that the operations count is linear with the inference time, we get that the accuracy has a hyperbolic dependency on the amount of computation that a network requires.

3.8 PARAMETER UTILISATION

DNNs are known to be highly inefficient in utilising their full learning power (number of parameters / degrees of freedom). Prominent work (Han et al., 2015) exploits this flaw to reduce network file size by up to 50x, using weight pruning, quantisation and variable-length symbol encoding. It is worth noticing that using more efficient architectures to begin with may produce even more compact representations. In figure 10 we clearly see that, although VGG has a better accuracy than AlexNet (as shown in figure 1), its information density is worse. This means that the amount of degrees of freedom introduced in the VGG architecture brings a lesser improvement in terms of accuracy. Moreover, ENet (Paszke et al., 2016), which we specifically designed to be highly efficient and which has been adapted and retrained on ImageNet (Culurciello, 2016) for this work, achieves the highest score, showing that 24x fewer parameters are sufficient to provide state-of-the-art results.
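The information-density metric of figure 10 is simply accuracy divided by millions of parameters. As an illustration with rounded, commonly quoted values (treat them as approximate; they are not taken from this paper's tables):

    def info_density(top1_pct, params_millions):
        return top1_pct / params_millions  # %/M-Params, the unit of figure 10

    print(info_density(57.0, 61.0))   # AlexNet-like: ~0.93 %/M-Params
    print(info_density(71.0, 138.0))  # VGG-16-like:  ~0.51 %/M-Params

Even with VGG-16's higher absolute accuracy, its density is roughly half of AlexNet's, which is the sense in which its extra degrees of freedom are poorly utilised.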
We show that an energy constraint will set a specific upper bound on the maximum achievable accuracy and model complexity, in terms of operations count. Finally, we show that ENet is the best architecture in terms of parameter space utilisation, squeezing up to 13x more information per parameter used with respect to the reference model AlexNet, and 24x with respect to VGG-19.

ACKNOWLEDGMENTS

This paper would not have looked so pretty without the Python Software Foundation, the matplotlib library and the communities of stackoverflow and the TeX StackExchange, which we ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N00014-15-1-2791 and MURI N00014-10-1-0278. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the TX1, Titan X and K40 GPUs used for this research. | rytc3qbEe | solid work but not surprising | 4: Ok but not good enough - rejection | The authors did solid work in collecting all the reported data. However, most findings don't seem to be too surprising to me:
- Finding #1 mainly shows that all architectures and batch sizes manage to utilize the GPU fully (or to the same percentage).
- Regarding Finding #2, I agree that from a linear relationship in Figure 9 you could conclude said hyperbolic relationship.
However, for this finding to be relevant, it has to hold especially for the latest generations of models. These cluster in the upper left corner of Figure 9 and on their own do not seem to show too much of a linear behaviour. Therefore I think there is not enough evidence to conclude asymptotic hyperbolic behaviour: for this, the linear behaviour would have to become stronger the more models approach the upper left corner.
- Finding #3 seems to be a simple conclusion from finding #1: as long as slower models are better and faster models draw the same power, finding #3 holds.
- Finding #4 is again similar to finding #1: If all architectures manage to fully utilize the GPU, inference time should be proportional to the number of operations.
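To make the proportionality argument behind Findings #3 and #4 concrete: with roughly constant power and inference time linear in operations, an energy budget translates directly into a cap on operations. A back-of-the-envelope sketch, with all numbers hypothetical:

    power_w = 11.8          # roughly constant net power at full utilisation (Finding #1)
    s_per_gop = 0.004       # hypothetical slope of the time-vs-operations fit (Finding #4)
    energy_budget_j = 0.5   # hypothetical per-inference energy budget

    # Finding #3 restated as a cap on model size in operations:
    max_gops = energy_budget_j / (power_w * s_per_gop)
    print(f"{max_gops:.1f} G-Ops affordable per inference")  # ~10.6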
Maybe the most interesting finding would be that all tested models seem to use the same percentage of the computational resources available on the GPU, while one might expect that more complex models don't manage to utilize as many computational resources due to inter-dependencies. However, actual GPU utilization was not evaluated, and since the authors chose to use an older GPU, one would expect all models to manage to make use of all available computational power.
Additionally, I think these findings would have to be put in relation to compression techniques, or tested on actual production networks, to be of more interest.
| 3: The reviewer is fairly confident that the evaluation is correct |
Bygq-H9eg | ICLR.cc/2017/conference | 2017 | An Analysis of Deep Neural Network Models for Practical Applications | ["Alfredo Canziani", "Adam Paszke", "Eugenio Culurciello"] | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs. | ["Computer vision", "Deep learning", "Applications"] | ABSTRACT

Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints are an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.

1 INTRODUCTION

Since the breakthrough in the 2012 ImageNet competition (Russakovsky et al., 2015) achieved by AlexNet (Krizhevsky et al., 2012), the first entry that used a Deep Neural Network (DNN), several other DNNs with increasing complexity have been submitted to the challenge in order to achieve better performance.

In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time. We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensembling of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions evaluate their (ensembles of) models a different number of times on the validation images, and therefore the reported accuracy is biased by the specific sampling technique (and ensemble size).
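As a concrete illustration of the "model averaging" practice criticised above: ensembling amounts to averaging the per-class probabilities of several trained instances before taking the arg-max. The shapes and helper name here are illustrative, not from the paper:

    import numpy as np

    def ensemble_top1(probs_per_model):
        # probs_per_model: list of (num_images, num_classes) softmax outputs,
        # one per independently trained instance of the same architecture
        avg = np.mean(probs_per_model, axis=0)  # model averaging
        return np.argmax(avg, axis=1)           # ensembled class predictions

Each extra instance in the list multiplies the inference-time compute, which is exactly the hidden cost the paper argues reported accuracies conceal.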
| rkT46V7Xx | Interesting paper. Some flaws. | 5: Marginally below acceptance threshold | A few issues with this paper:
1- I find finding #2 trivial and unworthy of mention, but the authors don't seem to agree with me that it is. See discussions.
2- Finding #1 relies on Fig #4, which appears very noisy and doesn't provide any error analysis. It makes me question how robust this finding is. One would have naively expected the power usage trend to mirror Fig #3, but given the level of noise, I can't convince myself whether the null hypothesis of there being no dependency between batch size and power consumption is more likely than the alternative (a quick check along these lines is sketched after this list).
3- Paper is unfriendly to colorblind readers (or those with B/W printers)
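One way to act on the noise concern in point 2- above, given per-batch-size power readings, would be a simple rank-correlation test; the measurements below are invented placeholders, and scipy's spearmanr is only one possible choice of test:

    import numpy as np
    from scipy.stats import spearmanr

    batch_sizes = np.array([1, 2, 4, 8, 16, 32, 64])
    power_w = np.array([10.9, 11.3, 11.6, 11.8, 11.7, 11.9, 11.8])  # hypothetical
    rho, p_value = spearmanr(batch_sizes, power_w)
    # a large p_value means the data cannot reject independence of power draw
    # and batch size; repeated runs would also supply the missing error bars
    print(rho, p_value)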
Overall, this paper is a reasonable review of where we are in terms of SOTA vision architectures, but doesn't provide much new insight. I found most interesting the clear illustration that VGG models stand out in terms of being a bad tradeoff in resource-constrained environments (too many researchers are tempted to benchmark their model compression algorithm on VGG-class models because that's always where one can show 10x improvements without doing much.) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Bygq-H9eg | ICLR.cc/2017/conference | 2017 | An Analysis of Deep Neural Network Models for Practical Applications | ["Alfredo Canziani", "Adam Paszke", "Eugenio Culurciello"] |
| HyjqRq4Vl |  | 4: Ok but not good enough - rejection | The paper evaluates recent developments in competitive ILSVRC CNN architectures from the perspective of resource utilization. It is clear that a lot of work has been put into the evaluations. The findings are well presented and the topic itself is important.
However, most of the results are not surprising to people working with CNNs on a regular basis. And even if they are, I am not convinced about their practical value. It is hard to tell what we actually learn from these findings when approaching new problems with computational constraints, or when working in production settings. In my opinion, this is mainly because the paper does not discuss realistic circumstances.
Main concerns:
1) The evaluation does not tell me much for realistic scenarios, which mostly involve fine-tuning networks, as ILSVRC is just a starting point in most cases. VGG for instance really shines for fine-tuning, but it is cumbersome to train from scratch. And VGG works well for compression, too. So possibly it is a very good choice if these by-now-standard steps are taken into account. Such questions are of high practical relevance!
2) Compressed networks have a much higher acc/parameter density, so a comparison of how well models can be compressed is important, or at least a comparison to some of the most well-known and publicly available compressed networks.
3) There is no analysis of the actual topology of the networks and where the bottlenecks lie. This would be very useful to have as well.
Minor concern:
1) Why did the authors choose to use batch normalization in NiN and AlexNet? | 3: The reviewer is fairly confident that the evaluation is correct |
|
rkFd2P5gl | ICLR.cc/2017/conference | 2017 | Leveraging Asynchronicity in Gradient Descent for Scalable Deep Learning | ["Jeff Daily", "Abhinav Vishnu", "Charles Siegel"] | In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well-known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous. | ["Deep learning"] | ABSTRACT
In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well-known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous.
1 INTRODUCTION
Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algorithms, which use an interconnection of neurons and synapses to emulate the computational structure of a mammalian brain. DL algorithms have demonstrated resounding success in many computer vision tasks and science domains such as high energy physics, computational chemistry and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe, Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architectures (such as Intel Xeon/KNC/KNL) and/or many-core architectures (GPUs). DL algorithms are undergoing a tremendous revolution of their own.
Widely used DL algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are computationally expensive. Their computational requirements are further worsened by: 1) very deep neural networks such as the recently proposed 1000-layer complex Residual Networks (ResNet), and 2) the increasing volume of data produced by simulations, experiments and handheld devices. An important solution to these problems is the design and implementation of DL algorithms that are capable of execution on distributed memory, large scale cluster/cloud computing systems. A few distributed DL implementations such as CaffeOnSpark, Distributed TensorFlow, CNTK, Machine Learning Toolkit on Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK, FireCaffe and MaTEx use MPI (Gropp et al., 1996; Geist et al., 1996) – which makes them a natural fit for high-end systems.
DL algorithms primarily use gradient descent – an iterative technique in which the weights of synapses are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network). The larger the difference, the steeper the descent to a minimum (a low value of the minimum generates the solution). An important type of gradient descent is batch gradient descent – where a random subset of samples is used for iterative feed-forward (calculation of the predicted value) and back-propagation (update of synaptic weights). A small batch is prone to severe perturbations of the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a fairly average batch size – one which finds the balance between these two conflicting metrics.
A large scale parallelization of gradient descent must maximize the equivalence to the default algorithm, such that the convergence property is maintained. Consider a scenario where a batch (b) in the original algorithm is split across multiple compute nodes (n) – an example of data parallelism. To provide equivalence to the default algorithm, the batch must be split equally into parts of size b/n, although the communication – which would require an all-to-all reduction – would increase as Θ(log n); this trade-off is sketched more concretely below. Naturally, as n is increased and b is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node b/n constant (weak scaling) increases the convergence time.
Several researchers have proposed methods to alleviate the communication requirements of distributed gradient descent. Parameter-server based approaches use a server to hold the latest version of the model while clients send computed gradients and request the latest model. This approach has been proposed and extended by several researchers. While theoretically this provides O(1) time complexity since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run the computation on a single device. Others have proven divergence from the original algorithm. Remote Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge from the original algorithm.
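To make the strong- and weak-scaling trade-off above concrete, here is a brief informal cost model using only the quantities already defined (b and n); the constants c_comp and c_comm are illustrative stand-ins, not measured values:

  T_iter(n) ≈ c_comp · (b/n) + c_comm · log n

where the first term models feed-forward plus back-propagation over a mini-batch of size b/n and the second models the all-reduce of the gradients. Under strong scaling (b fixed), the compute term shrinks as 1/n while the communication term grows as log n, so beyond some n the all-reduce dominates and additional nodes stop helping. Under weak scaling (b/n fixed), the per-iteration time stays roughly flat, but the effective batch size n · (b/n) grows, which is what slows convergence in terms of epochs.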
Several other implementations are primarily geared towards shared memory systems, and address the thread contention issue for gradient descent. Our objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures – including computational units such as GPUs and high performance interconnects such as InfiniBand and Intel Omni-Path architectures – by using MPI.
1.1 CONTRIBUTIONS
Specifically, we make the following contributions in this paper:
- We design a baseline asynchronous gradient descent, which delays the gradient updates of the entire model by one or more iterations adaptively on the basis of available overlap and user-defined input.
- We propose a layer-wise gradient descent method, which overlaps weight updates of a layer with inter-node synchronization of other layers. The proposed method is exactly equivalent to the default sequential algorithm.
- We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTEx), which consists of a distributed memory implementation of Caffe using MPI (Gropp et al., 1996; Geist et al., 1996).
- We evaluate our approaches and other baseline implementations on a large scale CPU-based InfiniBand cluster as well as on NVIDIA's DGX-1 multi-GPU system. We use several well studied datasets and DNN topologies such as ImageNet (1.3M images, 250GB dataset) with AlexNet and GoogleNet DNNs.
Our evaluation indicates the efficacy of the proposed approach. Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% parallel efficiency.
The rest of the paper is organized as follows: in section 2, we present work related to our proposed research. We present the background in section 3, followed by an in-depth solution space in section 4. In section 6, we present a detailed performance evaluation of asynchronous gradient descent, and conclusions with future directions in section 7.
2 RELATED WORK
Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al., 2014) (CPUs/GPUs), Warp-CTC (GPUs), Theano (Bastien et al., 2012; Bergstra et al., 2010) (CPUs/GPUs), Torch (Collobert et al., 2002) (CPUs/GPUs), CNTK (Agarwal et al., 2014) (GPUs and distributed memory using MPI) and Google TensorFlow (Abadi et al., 2015), which use the NVIDIA CUDA Deep Neural Network library (cuDNN).
Caffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below. Caffe supports execution on a single node (connected with several GPUs) and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to TensorFlow as well.
Caffe (and other deep learning software) is also equipped with several optimizations designed to avoid significant problems in training deep networks.
The vanishing gradient problem (Bianchini & Scarselli, 2014) causes deep networks to fail to learn much at all in the early layers, and was solved in (Hinton & Osindero, 2006) and (Bengio et al., 2007), where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov, 2006) and then put together to form a single network (Vincent et al., 2010). Another optimization that helps to solve this problem is switching from sigmoidal neurons to rectified linear neurons.
The problem of accelerating gradient descent, especially distributed across compute resources, is of interest to many researchers. Approaches generally fall into two categories, according to whether or not they are equivalent to having run using a single compute device; utilizing a single compute device necessarily computes gradient updates and applies them immediately to the model. Further, the gradient updates can be classified as either synchronous or asynchronous depending on whether the communication of the gradients can be overlapped with any computation of the gradients. For example, the DistBelief parameter server approach (Dean et al., 2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent to having run on a single device, it is able to process samples much faster.
Chen et al. (2016) revisit asynchronous gradient descent and propose a few synchronous variants in order to improve time to convergence. Notably, they show that waiting for all workers to complete, aggregating the gradients, and applying the gradients to the same common model (so that each worker has a copy of the latest model) provides a good time to convergence while also leveraging multiple compute devices. Their approach is where this paper begins, while additionally proposing approaches ranging from synchronous to parameter server variants.
3 FUNDAMENTALS
3.1 NEURAL NETWORKS
Machine Learning algorithms designed to emulate the computational structure of the brain to model data are called "Neural Networks." The basic unit of a neural network is the neuron, and neurons are connected to one another via synapses.
3.1.1 BACKPROPAGATION
Neural networks are trained through an algorithm called backpropagation. This is a means of computing gradients layer by layer to implement the gradient descent algorithm's update rule of
w′ = w + η ∇_w C   (1)
b′ = b + η ∇_b C   (2)
where w are the weights, b the biases, η the learning rate, and C is a cost function to be optimized, usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule, such as Adaptive Gradient Descent (AdaGrad) (Duchi et al., 2011) or Momentum (Qian, 1999).
To compute the gradients, we let W^(ℓ), b^(ℓ) be the weights and biases for each layer, z^(ℓ+1) = W^(ℓ) a^(ℓ) + b^(ℓ) and a^(ℓ) = σ(z^(ℓ)), where σ is the activation function. Let n_ℓ represent the number of layers. Then, we use Algorithm 1.
Algorithm 1 Back Propagation
1: input: data X ∈ R^(n×p) and labels Y ∈ R^(n×ℓ)
2: for i from 1 to n do
3:   Compute all z^(ℓ) and a^(ℓ).
4:   δ^(n_ℓ) = −(y − a^(n_ℓ)) ⊙ σ′(z^(n_ℓ))
5:   for ℓ from n_ℓ − 1 to 2 do
6:     δ^(ℓ) = ((W^(ℓ))^T δ^(ℓ+1)) ⊙ σ′(z^(ℓ))
7:   end for
8:   ∇_{W^(ℓ)} C = δ^(ℓ+1) (a^(ℓ))^T
9:   ∇_{b^(ℓ)} C = δ^(ℓ+1)
10: end for
Although there are several nonlinear activation functions in common use, the networks examined in this paper only include rectified linear units (ReLU), where ReLU(x) = max(0, x). A small numeric illustration of one pass of Algorithm 1 follows below.
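The following sketch is our own, not code from the paper or from Caffe; the network sizes and initial values are made up. It runs one backpropagation step for a tiny 2-2-1 sigmoid network with squared-error cost, stepping against the gradient:

// One backpropagation step for a tiny 2-2-1 sigmoid network (illustrative).
#include <cmath>
#include <cstdio>

static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

int main() {
  // Made-up data and initial parameters: 2 inputs -> 2 hidden -> 1 output.
  double x[2] = {0.5, -0.3}, y = 1.0;
  double W1[2][2] = {{0.1, 0.2}, {-0.1, 0.3}}, b1[2] = {0.0, 0.0};
  double W2[2] = {0.25, -0.15}, b2 = 0.0;
  const double eta = 0.1;  // learning rate

  // Forward pass: z2 = W1 x + b1, a2 = sigmoid(z2); z3 = W2 a2 + b2.
  double z2[2], a2[2];
  for (int j = 0; j < 2; ++j) {
    z2[j] = b1[j];
    for (int k = 0; k < 2; ++k) z2[j] += W1[j][k] * x[k];
    a2[j] = sigmoid(z2[j]);
  }
  double z3 = b2;
  for (int j = 0; j < 2; ++j) z3 += W2[j] * a2[j];
  double a3 = sigmoid(z3);

  // Output delta (squared-error cost): d3 = -(y - a3) * sigmoid'(z3).
  double d3 = -(y - a3) * a3 * (1.0 - a3);
  // Hidden deltas: d2 = (W2^T d3) elementwise-times sigmoid'(z2).
  double d2[2];
  for (int j = 0; j < 2; ++j) d2[j] = W2[j] * d3 * a2[j] * (1.0 - a2[j]);

  // Descend the cost (note the minus sign: we step against the gradient).
  for (int j = 0; j < 2; ++j) W2[j] -= eta * d3 * a2[j];
  b2 -= eta * d3;
  for (int j = 0; j < 2; ++j) {
    for (int k = 0; k < 2; ++k) W1[j][k] -= eta * d2[j] * x[k];
    b1[j] -= eta * d2[j];
  }

  std::printf("prediction %.4f, output delta %.4f\n", a3, d3);
  return 0;
}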
3.2 CAFFE
Caffe (Jia et al., 2014) is one of the leading software packages for building and training neural networks. It provides abstractions for a wide range of topologies and for training them with many different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors) which are essential for implementing Deep Learning algorithms. From an input tensor, an output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that manages these tensors and their updates as a single object. Caffe is particularly useful for researchers, because it is heavily optimized and can be modified through an open source C++ backend.
As Caffe's runtime is implemented in C++, it can extract native performance from the computation environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging the NVIDIA CUDA Deep Neural Network library (cuDNN) for the task. We have modified this code for distributed memory computation on large scale systems, using MPI to natively use network hardware for optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al., 2015), another distributed memory implementation of Caffe. Further modifications are described in Section 4.
There are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch, starting at the first layer. Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer. Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch.
4 SOLUTION SPACE
The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while also maintaining equivalence to the sequential case is to use data parallelism. Data parallelism is where the traditional batch is further subdivided into equally-sized mini-batches, each mini-batch is computed on a separate device, and then the gradients resulting from each mini-batch are averaged together. Since each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways of hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients.
Caffe provides callback methods in its C++ interface that interject user-defined functionality into key phases of the computation (see 3.2); a schematic sketch of this hook mechanism follows below. Specifically, one user-defined function is executed immediately before the forward pass when the batch computation begins. The other user-defined function executes after the backward pass finishes, but before the application of the gradients to the weights and biases. Additional callback functions were added to support finer-grained control over the three phases of computation.
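To make the callback mechanism concrete, here is a schematic C++ sketch in the spirit of Caffe's solver callbacks; the class and method names below are our simplification for illustration, not Caffe's exact API:

// Schematic solver-callback hooks (simplified; not Caffe's exact API).
#include <vector>

class SolverCallback {
 public:
  virtual ~SolverCallback() {}
  virtual void OnBatchStart() {}           // before the forward pass
  virtual void OnGradientsReady() {}       // after backward, before apply
  virtual void OnLayerGradient(int id) {}  // finer grained: one layer's
                                           // learnable parameters are done
};

class Solver {
 public:
  void AddCallback(SolverCallback* cb) { callbacks_.push_back(cb); }
  void Step() {
    for (auto* cb : callbacks_) cb->OnBatchStart();
    Forward();
    for (int l = num_layers_ - 1; l >= 0; --l) {
      BackwardLayer(l);
      for (auto* cb : callbacks_) cb->OnLayerGradient(l);  // per-layer hook
    }
    for (auto* cb : callbacks_) cb->OnGradientsReady();
    ApplyUpdate();
  }
 private:
  void Forward() { /* ... */ }
  void BackwardLayer(int l) { /* ... */ }
  void ApplyUpdate() { /* ... */ }
  int num_layers_ = 0;
  std::vector<SolverCallback*> callbacks_;
};

A distributed implementation can then hang its gradient-averaging logic off these hooks without touching the core training loop.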
One of the additional callbacks executes after each gradient is computed during the backward phase, once per set of learnable parameters, such as the weights or biases of a given layer. Another callback function that was added is called once per learnable parameter during the apply phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until all gradients have been applied.
A critical implementation detail for any of our proposed approaches is to make sure the individual network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master process are copied (broadcast) to the other processes. That way any gradients that are computed, when averaged together, are based on the same initial conditions.
4.1 SYNCHRONOUS GRADIENT DESCENT
Similar to what Chen et al. (2016) propose and what is implemented in FireCaffe (Iandola et al., 2015), synchronous gradient descent averages the gradients from each mini-batch together before applying them, forming one complete batch at a time. The way this is implemented in Caffe is to use the callback function that executes when all gradients are ready to be applied. During this callback, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute device. This function is blocking, meaning it returns control back to Caffe only after the sum is computed across all devices. Since the result is a sum and not the intended average, it is then scaled down based on the number of compute devices in use. It is important to note that the reduction operation can be performed in-place, meaning it can use the memory location directly holding the gradient without performing any costly memory copies, which matters especially for networks with a large number of parameters such as AlexNet. This approach also has the important quality that the gradients are averaged after they have been used by each layer of the backpropagation, preserving the importance of any activations within the network against the mini-batch instead of against the effective batch.
4.2 LAYER-WISE GRADIENT DESCENT
Chen et al. (2016) propose the pipelining of gradient computation and application. For example, the gradients of upper layers can be concurrently applied while computing the gradients of lower layers. This approach must be done carefully to maintain equivalence with the sequential base case. We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers will use and/or update the gradients of layers that have otherwise finished computing their gradients. This implies the averaging of the gradients must be performed on a copy of the gradients rather than in-place. Further, the averaging of the copied gradients must finish before they can be applied.
We utilize a background thread of computation in order to perform the gradient averaging concurrently with the remaining gradient computation. This provides maximal overlap of the communication latency with useful computation; a sketch of this pattern is given below.
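As a rough illustration of the background-thread pattern (our own sketch, not the paper's implementation; the function name AverageLayerAsync and the buffer layout are invented for the example, and it assumes MPI was initialized with MPI_THREAD_MULTIPLE):

// Sketch: average one layer's gradient on a background thread while
// backpropagation continues on the main thread (illustrative only).
#include <mpi.h>
#include <thread>
#include <vector>

// Copies the live gradient, allreduces the copy, and scales the sum to an
// average; the live buffer stays free for adjacent layers to keep updating.
std::thread AverageLayerAsync(const std::vector<float>& grad,
                              std::vector<float>& avg, int nranks) {
  avg = grad;  // copy, so the averaging is not performed in-place
  return std::thread([&avg, nranks] {
    MPI_Allreduce(MPI_IN_PLACE, avg.data(), static_cast<int>(avg.size()),
                  MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    for (float& g : avg) g /= nranks;  // sum across ranks -> average
  });
}
// Usage: launch one thread per layer as its gradient finishes in the
// backward pass, then join each thread before applying that layer's 'avg'.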
There are a few options for when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightforward and similar to the synchronous approach described previously, though perhaps at least some of the communication latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients for a particular layer to finish averaging and then apply the gradients. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the first layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order. This takes advantage of the observation that not all layers have the same number of parameters; further, the gradients for the weights and the gradients for the biases can be averaged separately. The weight gradients are typically larger than the bias gradients, implying that the bias gradients will complete their communication more quickly. Since the communication of the various parameters can finish somewhat arbitrarily, based on when the communication was initiated and the size of the communication, we can apply the gradients as soon as they complete their averaging. We evaluate these strategies in Section 6.
4.3 ASYNCHRONOUS GRADIENT DESCENT
As stated in (Chen et al., 2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data parallel approach, there is a lower limit to the size of the mini-batches and therefore a corresponding upper limit on the number of compute devices that can be utilized. As the amount of work per compute device decreases proportionally to the decreasing size of the mini-batches, there is less computation available to mask the latency of the gradient averaging across the devices. Initiating the averaging layer-wise as described above may not be enough to mitigate this problem.
We propose delaying the application of the gradients by a fixed number of iterations, much smaller than the number of compute devices as would have been done in a parameter server approach. The gradients are delayed by using a concurrent communication thread and applying the gradient one, two, or three iterations later, thus giving the averaging enough time to complete as needed. If the gradient needs to be delayed by one iteration, this requires one communication thread and one additional buffer to hold the gradient; delaying by two iterations requires two communication threads and two additional buffers, and so on. This approach is somewhere between a parameter server (Dean et al., 2012) and the various approaches that maintain equivalency with a sequential computation.
5 IMPLEMENTATION DETAILS
The implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O.
5.1 HANDLING I/O
The data parallelism is achieved by distributing datasets across compute devices, partitioning them based on the number of devices utilized; each device receives a disjoint subset of the dataset and no samples are shuffled or exchanged between compute devices outside of the gradient averaging. Caffe frequently uses a database in LMDB format for its datasets; however, this format cannot be used on remote (network) filesystems or even between processes on the same host.
Caffe mitigates this issue when using more than one GPU on the same host by using a single I/O reading thread and a round-robin deal of the samples to device-specific queues. Our implementations mitigate this issue by first converting an LMDB database into a netCDF file (Rew & Davis, 1990). netCDF files can be read and partitioned using parallel MPI-IO via the parallel netCDF library (Li et al., 2003); a sketch of the per-rank partitioning arithmetic appears after this passage.
5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI
For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold the gradients for each. We extended this approach for CPU hosts. A single contiguous buffer allows the non-layer-wise, i.e., network-wise, gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated. It is sometimes beneficial to aggregate otherwise many small communication requests into a larger one.
Although Caffe provides a way of utilizing all GPUs within the host, it does not currently leverage NVIDIA's NCCL package (NVIDIA Corporation, 2015) for optimized, high-bandwidth collective communication routines. We used the NCCL equivalent of the MPI all-reduce to sum gradients across GPU devices on the DGX-1 platform.
6 EXPERIMENTAL EVALUATION
In this section, we present an experimental evaluation and analysis of the heuristics described in section 4.
6.1 HARDWARE ARCHITECTURES
We evaluate using a CPU cluster as well as NVIDIA's specialized DGX-1 multi-GPU host system. Each node of the multi-node cluster consists of a multi-core Intel Sandy Bridge CPU connected via InfiniBand. We use Intel MPI 5.1.2 for the performance evaluation. The heuristics are implemented in Caffe (Jia et al., 2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs.
The DGX-1 system contains 8 Pascal GPUs connected using the high-speed NVLink interconnect. For the DGX-1 evaluations, the latest version of Berkeley's Caffe was modified to use the NCCL communication primitives in addition to our algorithmic changes.
6.2 IMAGENET AND NETWORK ARCHITECTURES
We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al., 2015) dataset. This dataset consists of a training set of just under 1.3 million images of various sizes (as jpg files) divided among 1000 classes, along with a validation set consisting of 50000 images of the same type and classes. Additionally, for the competition, there is a testing set, but it is held separately and not available publicly. It is established as one of the benchmark datasets for machine learning with large datasets, and among the famous architectures that achieved record top-1 and top-5 accuracies on it are AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015).
We evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for parallelization: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million.
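As a small illustration of the disjoint partitioning described in section 5.1 (our own sketch; the actual reads go through parallel netCDF, and the record count and variable names here are illustrative), each rank can derive its record range from its rank and the total record count:

// Sketch: split 'total' dataset records disjointly across MPI ranks.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, nranks;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  const long total = 1281167;  // e.g., ImageNet training images
  long base = total / nranks, rem = total % nranks;
  // The first 'rem' ranks take one extra record, so counts differ by at most 1.
  long count = base + (rank < rem ? 1 : 0);
  long start = rank * base + (rank < rem ? rank : rem);

  std::printf("rank %d reads records [%ld, %ld)\n", rank, start, start + count);
  // A parallel-netCDF read would then use (start, count) as its offset/extent.
  MPI_Finalize();
  return 0;
}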
Despite its smaller amount of communication, GoogLeNet requires roughly twice the amount of time to process each image as AlexNet does when communication is ignored.
6.3 EVALUATION
Figure 1 compares the implemented approaches relative to a communication-less baseline "no comm". The effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of 32/8 = 4. The evaluation on DGX-1 was limited to 8 compute devices, whereas the CPU cluster evaluation eventually hit the strong scaling limit for data parallelism.
These results show that delaying the gradient updates by one or more iterations is the most effective means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms.
The layer-wise approaches, though promising as equivalent to a sequential computation, were not able to complete their gradient averages quickly enough. Compared to the delayed gradient approach, this is perhaps intuitive. The delayed gradient approach is able to hide the communication latency across all three complete phases of the computation, whereas the layer-wise approaches only have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to mask the communication.
In addition to looking at the time per batch above, the rates of convergence of these heuristics must be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of 54% using the default AlexNet settings that come with Caffe. However, it is worth noting that at the beginning of training, they showed different loss curves, indicating that there is a tradeoff between the number of batches per second and the accuracy at a given batch, as shown in Table 1.
Figure 1: Evaluation of SGD and AGD approaches. (Bar charts of iterations per second for 1, 2, 4, 8, 16 and 32 compute devices; series include no comm, SGD, SGD layer-wise, AGD with 1–3 communication threads, and SGD task-wise with 1 or 2 communication threads.) Panels: (a) AlexNet CPU, (b) AlexNet DGX-1, (c) GoogLeNet CPU, (d) GoogLeNet DGX-1. Effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively.
Table 1: AlexNet Accuracy After Every 1000 Batches on DGX-1
batch         | 1000    | 2000    | 3000    | 4000    | 5000
serial, 1 GPU | 0.0124  | 0.05164 | 0.10102 | 0.13432 | 0.16454
SGD           | 0.01116 | 0.03984 | 0.07594 | 0.10622 | 0.13052
AGD, 1 comm   | 0.0039  | 0.01324 | 0.02632 | 0.05076 | 0.07362
AGD, 2 comm   | 0.00104 | 0.00356 | 0.00636 | 0.01282 | 0.01688
We also evaluated whether these approaches converged, in addition to just improving the number of iterations per second. All approaches evaluated managed to converge within the expected number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach and two communication threads, using the standard AlexNet network from Caffe.
7 CONCLUSIONS
There is a tradeoff between maintaining equivalence to sequential methods versus leveraging the vast computational resources available for gradient descent.
We find that asynchronous methods can give a 1.7x speedup while not sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously published results using parameter servers. | S1X2Nn07e | Lacks Strong Baselines and Wall-Time Results | 3: Clear rejection | The authors present methods to speed up gradient descent by leveraging asynchronicity in a layer-wise manner.
While they obtain up to a 1.7x speedup compared to synchronous training, their baseline is weak. More importantly, they dismiss parameter-server based methods, which are becoming standard, and so effectively just do not compare to the current state-of-the-art. They also do not present wall-time measurements. With these flaws, the paper is not ready for ICLR acceptance. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rkFd2P5gl | ICLR.cc/2017/conference | 2017 | Leveraging Asynchronicity in Gradient Descent for Scalable Deep Learning | ["Jeff Daily", "Abhinav Vishnu", "Charles Siegel"] | In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well-known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous. | ["Deep learning"] | ABSTRACT
In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well-known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous.
1 INTRODUCTION
Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algorithms, which use an interconnection of neurons and synapses to emulate the computational structure of a mammalian brain. DL algorithms have demonstrated resounding success in many computer vision tasks and science domains such as high energy physics, computational chemistry and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe, Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architectures (such as Intel Xeon/KNC/KNL) and/or many-core architectures (GPUs). DL algorithms are undergoing a tremendous revolution of their own.
Widely used DL algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are computationally expensive. Their computational requirements are further worsened by: 1) very deep neural networks such as the recently proposed 1000-layer complex Residual Networks (ResNet), and 2) the increasing volume of data produced by simulations, experiments and handheld devices. An important solution to these problems is the design and implementation of DL algorithms that are capable of execution on distributed memory, large scale cluster/cloud computing systems. A few distributed DL implementations such as CaffeOnSpark, Distributed TensorFlow, CNTK, Machine Learning Toolkit on Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK, FireCaffe and MaTEx use MPI (Gropp et al., 1996; Geist et al., 1996) – which makes them a natural fit for high-end systems.
DL algorithms primarily use gradient descent – an iterative technique in which the weights of synapses are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network). The larger the difference, the steeper the descent to a minimum (a low value of the minimum generates the solution). An important type of gradient descent is batch gradient descent – where a random subset of samples is used for iterative feed-forward (calculation of the predicted value) and back-propagation (update of synaptic weights). A small batch is prone to severe perturbations of the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a fairly average batch size – one which finds the balance between these two conflicting metrics.
A large scale parallelization of gradient descent must maximize the equivalence to the default algorithm, such that the convergence property is maintained. Consider a scenario where a batch (b) in the original algorithm is split across multiple compute nodes (n) – an example of data parallelism. To provide equivalence to the default algorithm, the batch must be split equally into parts of size b/n, although the communication – which would require an all-to-all reduction – would increase as Θ(log n); this trade-off is sketched more concretely below. Naturally, as n is increased and b is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node b/n constant (weak scaling) increases the convergence time.
Several researchers have proposed methods to alleviate the communication requirements of distributed gradient descent. Parameter-server based approaches use a server to hold the latest version of the model while clients send computed gradients and request the latest model. This approach has been proposed and extended by several researchers. While theoretically this provides O(1) time complexity since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run the computation on a single device. Others have proven divergence from the original algorithm. Remote Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge from the original algorithm.
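To make the strong- and weak-scaling trade-off above concrete, here is a brief informal cost model using only the quantities already defined (b and n); the constants c_comp and c_comm are illustrative stand-ins, not measured values:

  T_iter(n) ≈ c_comp · (b/n) + c_comm · log n

where the first term models feed-forward plus back-propagation over a mini-batch of size b/n and the second models the all-reduce of the gradients. Under strong scaling (b fixed), the compute term shrinks as 1/n while the communication term grows as log n, so beyond some n the all-reduce dominates and additional nodes stop helping. Under weak scaling (b/n fixed), the per-iteration time stays roughly flat, but the effective batch size n · (b/n) grows, which is what slows convergence in terms of epochs.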
Several other implementations are primarily geared towards shared memory systems, and address the thread contention issue for gradient descent. Our objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures – including computational units such as GPUs and high performance interconnects such as InfiniBand and Intel Omni-Path architectures – by using MPI.
1.1 CONTRIBUTIONS
Specifically, we make the following contributions in this paper:
- We design a baseline asynchronous gradient descent, which delays the gradient updates of the entire model by one or more iterations adaptively on the basis of available overlap and user-defined input.
- We propose a layer-wise gradient descent method, which overlaps weight updates of a layer with inter-node synchronization of other layers. The proposed method is exactly equivalent to the default sequential algorithm.
- We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTEx), which consists of a distributed memory implementation of Caffe using MPI (Gropp et al., 1996; Geist et al., 1996).
- We evaluate our approaches and other baseline implementations on a large scale CPU-based InfiniBand cluster as well as on NVIDIA's DGX-1 multi-GPU system. We use several well studied datasets and DNN topologies such as ImageNet (1.3M images, 250GB dataset) with AlexNet and GoogleNet DNNs.
Our evaluation indicates the efficacy of the proposed approach. Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% parallel efficiency.
The rest of the paper is organized as follows: in section 2, we present work related to our proposed research. We present the background in section 3, followed by an in-depth solution space in section 4. In section 6, we present a detailed performance evaluation of asynchronous gradient descent, and conclusions with future directions in section 7.
2 RELATED WORK
Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al., 2014) (CPUs/GPUs), Warp-CTC (GPUs), Theano (Bastien et al., 2012; Bergstra et al., 2010) (CPUs/GPUs), Torch (Collobert et al., 2002) (CPUs/GPUs), CNTK (Agarwal et al., 2014) (GPUs and distributed memory using MPI) and Google TensorFlow (Abadi et al., 2015), which use the NVIDIA CUDA Deep Neural Network library (cuDNN).
Caffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below. Caffe supports execution on a single node (connected with several GPUs) and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to TensorFlow as well.
Caffe (and other deep learning software) is also equipped with several optimizations designed to avoid significant problems in training deep networks.
The vanishing gradient problem (Bianchini & Scarselli, 2014) causes deep networks to fail to learn much at all in the early layers, and was solved in (Hinton & Osindero, 2006) and (Bengio et al., 2007), where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov, 2006) and then put together to form a single network (Vincent et al., 2010). Another optimization that helps to solve this problem is switching from sigmoidal neurons to rectified linear neurons.
The problem of accelerating gradient descent, especially distributed across compute resources, is of interest to many researchers. Approaches generally fall into two categories, according to whether or not they are equivalent to having run using a single compute device; utilizing a single compute device necessarily computes gradient updates and applies them immediately to the model. Further, the gradient updates can be classified as either synchronous or asynchronous depending on whether the communication of the gradients can be overlapped with any computation of the gradients. For example, the DistBelief parameter server approach (Dean et al., 2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent to having run on a single device, it is able to process samples much faster.
Chen et al. (2016) revisit asynchronous gradient descent and propose a few synchronous variants in order to improve time to convergence. Notably, they show that waiting for all workers to complete, aggregating the gradients, and applying the gradients to the same common model (so that each worker has a copy of the latest model) provides a good time to convergence while also leveraging multiple compute devices. Their approach is where this paper begins, while additionally proposing approaches ranging from synchronous to parameter server variants.
3 FUNDAMENTALS
3.1 NEURAL NETWORKS
Machine Learning algorithms designed to emulate the computational structure of the brain to model data are called "Neural Networks." The basic unit of a neural network is the neuron, and neurons are connected to one another via synapses.
3.1.1 BACKPROPAGATION
Neural networks are trained through an algorithm called backpropagation. This is a means of computing gradients layer by layer to implement the gradient descent algorithm's update rule of
w′ = w + η ∇_w C   (1)
b′ = b + η ∇_b C   (2)
where w are the weights, b the biases, η the learning rate, and C is a cost function to be optimized, usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule, such as Adaptive Gradient Descent (AdaGrad) (Duchi et al., 2011) or Momentum (Qian, 1999).
To compute the gradients, we let W^(ℓ), b^(ℓ) be the weights and biases for each layer, z^(ℓ+1) = W^(ℓ) a^(ℓ) + b^(ℓ) and a^(ℓ) = σ(z^(ℓ)), where σ is the activation function. Let n_ℓ represent the number of layers. Then, we use Algorithm 1.
Algorithm 1 Back Propagation
1: input: data X ∈ R^(n×p) and labels Y ∈ R^(n×ℓ)
2: for i from 1 to n do
3:   Compute all z^(ℓ) and a^(ℓ).
4:   δ^(n_ℓ) = −(y − a^(n_ℓ)) ⊙ σ′(z^(n_ℓ))
5:   for ℓ from n_ℓ − 1 to 2 do
6:     δ^(ℓ) = ((W^(ℓ))^T δ^(ℓ+1)) ⊙ σ′(z^(ℓ))
7:   end for
8:   ∇_{W^(ℓ)} C = δ^(ℓ+1) (a^(ℓ))^T
9:   ∇_{b^(ℓ)} C = δ^(ℓ+1)
10: end for
Although there are several nonlinear activation functions in common use, the networks examined in this paper only include rectified linear units (ReLU), where ReLU(x) = max(0, x). A small numeric illustration of one pass of Algorithm 1 follows below.
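The following sketch is our own, not code from the paper or from Caffe; the network sizes and initial values are made up. It runs one backpropagation step for a tiny 2-2-1 sigmoid network with squared-error cost, stepping against the gradient:

// One backpropagation step for a tiny 2-2-1 sigmoid network (illustrative).
#include <cmath>
#include <cstdio>

static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

int main() {
  // Made-up data and initial parameters: 2 inputs -> 2 hidden -> 1 output.
  double x[2] = {0.5, -0.3}, y = 1.0;
  double W1[2][2] = {{0.1, 0.2}, {-0.1, 0.3}}, b1[2] = {0.0, 0.0};
  double W2[2] = {0.25, -0.15}, b2 = 0.0;
  const double eta = 0.1;  // learning rate

  // Forward pass: z2 = W1 x + b1, a2 = sigmoid(z2); z3 = W2 a2 + b2.
  double z2[2], a2[2];
  for (int j = 0; j < 2; ++j) {
    z2[j] = b1[j];
    for (int k = 0; k < 2; ++k) z2[j] += W1[j][k] * x[k];
    a2[j] = sigmoid(z2[j]);
  }
  double z3 = b2;
  for (int j = 0; j < 2; ++j) z3 += W2[j] * a2[j];
  double a3 = sigmoid(z3);

  // Output delta (squared-error cost): d3 = -(y - a3) * sigmoid'(z3).
  double d3 = -(y - a3) * a3 * (1.0 - a3);
  // Hidden deltas: d2 = (W2^T d3) elementwise-times sigmoid'(z2).
  double d2[2];
  for (int j = 0; j < 2; ++j) d2[j] = W2[j] * d3 * a2[j] * (1.0 - a2[j]);

  // Descend the cost (note the minus sign: we step against the gradient).
  for (int j = 0; j < 2; ++j) W2[j] -= eta * d3 * a2[j];
  b2 -= eta * d3;
  for (int j = 0; j < 2; ++j) {
    for (int k = 0; k < 2; ++k) W1[j][k] -= eta * d2[j] * x[k];
    b1[j] -= eta * d2[j];
  }

  std::printf("prediction %.4f, output delta %.4f\n", a3, d3);
  return 0;
}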
3.2 CAFFE
Caffe (Jia et al., 2014) is one of the leading software packages for building and training neural networks. It provides abstractions for a wide range of topologies and for training them with many different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors) which are essential for implementing Deep Learning algorithms. From an input tensor, an output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that manages these tensors and their updates as a single object. Caffe is particularly useful for researchers, because it is heavily optimized and can be modified through an open source C++ backend.
As Caffe's runtime is implemented in C++, it can extract native performance from the computation environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging the NVIDIA CUDA Deep Neural Network library (cuDNN) for the task. We have modified this code for distributed memory computation on large scale systems, using MPI to natively use network hardware for optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al., 2015), another distributed memory implementation of Caffe. Further modifications are described in Section 4.
There are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch, starting at the first layer. Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer. Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch.
4 SOLUTION SPACE
The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while also maintaining equivalence to the sequential case is to use data parallelism. Data parallelism is where the traditional batch is further subdivided into equally-sized mini-batches, each mini-batch is computed on a separate device, and then the gradients resulting from each mini-batch are averaged together. Since each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways of hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients.
Caffe provides callback methods in its C++ interface that interject user-defined functionality into key phases of the computation (see 3.2); a schematic sketch of this hook mechanism follows below. Specifically, one user-defined function is executed immediately before the forward pass when the batch computation begins. The other user-defined function executes after the backward pass finishes, but before the application of the gradients to the weights and biases. Additional callback functions were added to support finer-grained control over the three phases of computation.
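To make the callback mechanism concrete, here is a schematic C++ sketch in the spirit of Caffe's solver callbacks; the class and method names below are our simplification for illustration, not Caffe's exact API:

// Schematic solver-callback hooks (simplified; not Caffe's exact API).
#include <vector>

class SolverCallback {
 public:
  virtual ~SolverCallback() {}
  virtual void OnBatchStart() {}           // before the forward pass
  virtual void OnGradientsReady() {}       // after backward, before apply
  virtual void OnLayerGradient(int id) {}  // finer grained: one layer's
                                           // learnable parameters are done
};

class Solver {
 public:
  void AddCallback(SolverCallback* cb) { callbacks_.push_back(cb); }
  void Step() {
    for (auto* cb : callbacks_) cb->OnBatchStart();
    Forward();
    for (int l = num_layers_ - 1; l >= 0; --l) {
      BackwardLayer(l);
      for (auto* cb : callbacks_) cb->OnLayerGradient(l);  // per-layer hook
    }
    for (auto* cb : callbacks_) cb->OnGradientsReady();
    ApplyUpdate();
  }
 private:
  void Forward() { /* ... */ }
  void BackwardLayer(int l) { /* ... */ }
  void ApplyUpdate() { /* ... */ }
  int num_layers_ = 0;
  std::vector<SolverCallback*> callbacks_;
};

A distributed implementation can then hang its gradient-averaging logic off these hooks without touching the core training loop.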
One of the additional callbacks executes after each gradient is computed during the backward phase, once per set of learnable parameters, such as the weights or biases of a given layer. Another callback function that was added is called once per learnable parameter during the apply phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until all gradients have been applied.
A critical implementation detail for any of our proposed approaches is to make sure the individual network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master process are copied (broadcast) to the other processes. That way any gradients that are computed, when averaged together, are based on the same initial conditions.
4.1 SYNCHRONOUS GRADIENT DESCENT
Similar to what Chen et al. (2016) propose and what is implemented in FireCaffe (Iandola et al., 2015), synchronous gradient descent averages the gradients from each mini-batch together before applying them, forming one complete batch at a time. The way this is implemented in Caffe is to use the callback function that executes when all gradients are ready to be applied. During this callback, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute device. This function is blocking, meaning it returns control back to Caffe only after the sum is computed across all devices. Since the result is a sum and not the intended average, it is then scaled down based on the number of compute devices in use. It is important to note that the reduction operation can be performed in-place, meaning it can use the memory location directly holding the gradient without performing any costly memory copies, which matters especially for networks with a large number of parameters such as AlexNet. This approach also has the important quality that the gradients are averaged after they have been used by each layer of the backpropagation, preserving the importance of any activations within the network against the mini-batch instead of against the effective batch.
4.2 LAYER-WISE GRADIENT DESCENT
Chen et al. (2016) propose the pipelining of gradient computation and application. For example, the gradients of upper layers can be concurrently applied while computing the gradients of lower layers. This approach must be done carefully to maintain equivalence with the sequential base case. We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers will use and/or update the gradients of layers that have otherwise finished computing their gradients. This implies the averaging of the gradients must be performed on a copy of the gradients rather than in-place. Further, the averaging of the copied gradients must finish before they can be applied.
We utilize a background thread of computation in order to perform the gradient averaging concurrently with the remaining gradient computation. This provides maximal overlap of the communication latency with useful computation; a sketch of this pattern is given below.
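As a rough illustration of the background-thread pattern (our own sketch, not the paper's implementation; the function name AverageLayerAsync and the buffer layout are invented for the example, and it assumes MPI was initialized with MPI_THREAD_MULTIPLE):

// Sketch: average one layer's gradient on a background thread while
// backpropagation continues on the main thread (illustrative only).
#include <mpi.h>
#include <thread>
#include <vector>

// Copies the live gradient, allreduces the copy, and scales the sum to an
// average; the live buffer stays free for adjacent layers to keep updating.
std::thread AverageLayerAsync(const std::vector<float>& grad,
                              std::vector<float>& avg, int nranks) {
  avg = grad;  // copy, so the averaging is not performed in-place
  return std::thread([&avg, nranks] {
    MPI_Allreduce(MPI_IN_PLACE, avg.data(), static_cast<int>(avg.size()),
                  MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    for (float& g : avg) g /= nranks;  // sum across ranks -> average
  });
}
// Usage: launch one thread per layer as its gradient finishes in the
// backward pass, then join each thread before applying that layer's 'avg'.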
There are a few options for when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightforward and similar to the synchronous approach described previously, though perhaps at least some of the communication latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients for a particular layer to finish averaging and then apply the gradients. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the first layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order. This takes advantage of the observation that not all layers have the same number of parameters; further, the gradients for the weights and the gradients for the biases can be averaged separately. The weight gradients are typically larger than the bias gradients, implying that the bias gradients will complete their communication more quickly. Since the communication of the various parameters can finish somewhat arbitrarily, based on when the communication was initiated and the size of the communication, we can apply the gradients as soon as they complete their averaging. We evaluate these strategies in Section 6.
4.3 ASYNCHRONOUS GRADIENT DESCENT
As stated in (Chen et al., 2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data parallel approach, there is a lower limit to the size of the mini-batches and therefore a corresponding upper limit on the number of compute devices that can be utilized. As the amount of work per compute device decreases proportionally to the decreasing size of the mini-batches, there is less computation available to mask the latency of the gradient averaging across the devices. Initiating the averaging layer-wise as described above may not be enough to mitigate this problem.
We propose delaying the application of the gradients by a fixed number of iterations, much smaller than the number of compute devices as would have been done in a parameter server approach. The gradients are delayed by using a concurrent communication thread and applying the gradient one, two, or three iterations later, thus giving the averaging enough time to complete as needed. If the gradient needs to be delayed by one iteration, this requires one communication thread and one additional buffer to hold the gradient; delaying by two iterations requires two communication threads and two additional buffers, and so on. This approach is somewhere between a parameter server (Dean et al., 2012) and the various approaches that maintain equivalency with a sequential computation.
5 IMPLEMENTATION DETAILS
The implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O.
5.1 HANDLING I/O
The data parallelism is achieved by distributing datasets across compute devices, partitioning them based on the number of devices utilized; each device receives a disjoint subset of the dataset and no samples are shuffled or exchanged between compute devices outside of the gradient averaging. Caffe frequently uses a database in LMDB format for its datasets; however, this format cannot be used on remote (network) filesystems or even between processes on the same host.
Caffe mitigates this issue when using more than one GPU on the same host by using a single I/O reading thread and a round-robin deal of the samples to device-specific queues. Our implementations mitigate this issue by first converting an LMDB database into a netCDF file (Rew & Davis, 1990). netCDF files can be read and partitioned using parallel MPI-IO via the parallel netCDF library (Li et al., 2003); a sketch of the per-rank partitioning arithmetic appears after this passage.
5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI
For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold the gradients for each. We extended this approach for CPU hosts. A single contiguous buffer allows the non-layer-wise, i.e., network-wise, gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated. It is sometimes beneficial to aggregate otherwise many small communication requests into a larger one.
Although Caffe provides a way of utilizing all GPUs within the host, it does not currently leverage NVIDIA's NCCL package (NVIDIA Corporation, 2015) for optimized, high-bandwidth collective communication routines. We used the NCCL equivalent of the MPI all-reduce to sum gradients across GPU devices on the DGX-1 platform.
6 EXPERIMENTAL EVALUATION
In this section, we present an experimental evaluation and analysis of the heuristics described in section 4.
6.1 HARDWARE ARCHITECTURES
We evaluate using a CPU cluster as well as NVIDIA's specialized DGX-1 multi-GPU host system. Each node of the multi-node cluster consists of a multi-core Intel Sandy Bridge CPU connected via InfiniBand. We use Intel MPI 5.1.2 for the performance evaluation. The heuristics are implemented in Caffe (Jia et al., 2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs.
The DGX-1 system contains 8 Pascal GPUs connected using the high-speed NVLink interconnect. For the DGX-1 evaluations, the latest version of Berkeley's Caffe was modified to use the NCCL communication primitives in addition to our algorithmic changes.
6.2 IMAGENET AND NETWORK ARCHITECTURES
We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al., 2015) dataset. This dataset consists of a training set of just under 1.3 million images of various sizes (as jpg files) divided among 1000 classes, along with a validation set consisting of 50000 images of the same type and classes. Additionally, for the competition, there is a testing set, but it is held separately and not available publicly. It is established as one of the benchmark datasets for machine learning with large datasets, and among the famous architectures that achieved record top-1 and top-5 accuracies on it are AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015).
We evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for parallelization: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million.
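As a small illustration of the disjoint partitioning described in section 5.1 (our own sketch; the actual reads go through parallel netCDF, and the record count and variable names here are illustrative), each rank can derive its record range from its rank and the total record count:

// Sketch: split 'total' dataset records disjointly across MPI ranks.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, nranks;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  const long total = 1281167;  // e.g., ImageNet training images
  long base = total / nranks, rem = total % nranks;
  // The first 'rem' ranks take one extra record, so counts differ by at most 1.
  long count = base + (rank < rem ? 1 : 0);
  long start = rank * base + (rank < rem ? rank : rem);

  std::printf("rank %d reads records [%ld, %ld)\n", rank, start, start + count);
  // A parallel-netCDF read would then use (start, count) as its offset/extent.
  MPI_Finalize();
  return 0;
}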
In contrast to the smaller amount of communication for GoogLeNet, it requires roughly twice as much time to process each image as AlexNet does when communication is ignored.

6.3 EVALUATION

Figure 1 compares the implemented approaches relative to a communication-less baseline, "no comm". The effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of 32/8 = 4. The evaluation on DGX-1 was limited to 8 compute devices, whereas the CPU cluster evaluation eventually hit the strong scaling limit for data parallelism.

These results show that delaying the gradient updates by one or more iterations is the most effective means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms.

The layer-wise approaches, though promising as equivalent to a sequential computation, were not able to complete their gradient averages quickly enough. Compared to the delayed gradient approach, this is perhaps intuitive. The delayed gradient approach is able to hide the communication latency across all three complete phases of the computation, whereas the layer-wise approaches only have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to mask the communication.

In addition to looking at the time per batch above, the rates of convergence of these heuristics must be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of 54% using the default AlexNet settings that come with Caffe. However, it is worth noting that at the beginning of training they showed different loss curves, indicating a tradeoff between the number of batches per second and the accuracy at a given batch, as shown in Table 1.

[Figure 1: bar charts of iterations per second versus number of compute devices (1-32). CPU panels compare no comm, SGD, SGD layer-wise, AGD 1 comm, AGD 2 comm, SGD task-wise 1 comm, and SGD task-wise 2 comm; DGX-1 panels compare no comm, SGD, AGD 1 comm, AGD 2 comm, and AGD 3 comm. Panels: (a) AlexNet CPU, (b) AlexNet DGX-1, (c) GoogLeNet CPU, (d) GoogLeNet DGX-1.] Figure 1: Evaluation of SGD and AGD approaches. Effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively.

Table 1: AlexNet accuracy after every 1000 batches on DGX-1

batch          1000     2000     3000     4000     5000
serial, 1 GPU  0.0124   0.05164  0.10102  0.13432  0.16454
SGD            0.01116  0.03984  0.07594  0.10622  0.13052
AGD, 1 comm    0.0039   0.01324  0.02632  0.05076  0.07362
AGD, 2 comm    0.00104  0.00356  0.00636  0.01282  0.01688

We also evaluated whether these approaches converged, in addition to just improving the number of iterations per second. All approaches evaluated managed to converge within the expected number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach with two communication threads, using the standard AlexNet network from Caffe.

7 CONCLUSIONS

There is a tradeoff between maintaining equivalence to sequential methods versus leveraging the vast computational resources available for gradient descent.
We find that asynchronous methods can give a 1.7x speedup while not sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously published results using parameter servers. | r1jpo9N7x | Difficult to read paper. Lack of strong async baseline a major flaw. | 3: Clear rejection | This paper is relatively difficult to parse. Much of the exposition of the proposed algorithm could be better presented using pseudo-code describing the compute flow, or a diagram describing exactly how the updates take place. As it stands, I'm not sure I understand everything. I would also have liked an exact description of what the various labels in Fig. 1 correspond to ("SGD task-wise, 1 comm"? Did you mean layer-wise?).
There are a couple of major issues with the evaluation: first, no comparison is reported against baseline async methods such as using a parameter server. Second, using AlexNet as a benchmark is not informative at all. AlexNet looks very different from any SOTA image recognition model, and in particular it has many fewer layers, which is especially relevant to the discussion in 6.3. It also uses lots of fully-connected layers which affect the compute/communication ratios in ways that are not relevant to most interesting architectures today.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
rkFd2P5gl | ICLR.cc/2017/conference | 2017 | Leveraging Asynchronicity in Gradient Descent for Scalable Deep Learning | ["Jeff Daily", "Abhinav Vishnu", "Charles Siegel"] | In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well-known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous. | ["Deep learning"] | ABSTRACT In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well-known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous.

1 INTRODUCTION

Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algorithms, which use an interconnection of neurons and synapses to emulate the computational structure of a mammalian brain. DL algorithms have demonstrated resounding success in many computer vision tasks and science domains such as high energy physics, computational chemistry, and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe, Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architectures (such as Intel Xeon/KNC/KNL) and/or many-core architectures (GPUs). DL algorithms are undergoing a tremendous revolution of their own.
Widely used DL algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are computationally expensive. Their computational requirements are further worsened by: 1) very deep neural networks such as the recently proposed 1000-layer complex Residual Networks (ResNet); 2) the increasing volume of data produced by simulations, experiments, and handheld devices. An important solution to these problems is the design and implementation of DL algorithms that are capable of execution on distributed memory, large scale cluster/cloud computing systems. A few distributed DL implementations such as CaffeOnSpark, Distributed TensorFlow, CNTK, the Machine Learning Toolkit on Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK, FireCaffe, and MaTEx use MPI (Gropp et al., 1996; Geist et al., 1996) – which makes them a natural fit for high-end systems.

DL algorithms primarily use gradient descent – an iterative technique in which the weights of synapses are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network). The larger the difference, the steeper the descent to a minimum (a low value of the minimum yields the solution). An important type of gradient descent is batch gradient descent – where a random subset of samples is used for iterative feed-forward (calculation of the predicted value) and back-propagation (update of synaptic weights). A small batch is prone to severe perturbations of the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a moderately sized batch – one that balances these two conflicting metrics.

A large scale parallelization of gradient descent must maximize equivalence to the default algorithm, such that the convergence property is maintained. Consider a scenario where a batch (b) in the original algorithm is split across multiple compute nodes (n) – an example of data parallelism. To provide equivalence to the default algorithm, the batch must be split equally, to b/n per node, although the required all-to-all reduction makes the communication cost increase as Θ(log n). Naturally, as n is increased and b is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node b/n constant (weak scaling) increases the convergence time.

Several researchers have proposed methods to alleviate the communication requirements of distributed gradient descent. Parameter-server based approaches use a server to hold the latest version of the model while clients send computed gradients and request the latest model. This approach has been proposed and extended by several researchers. While theoretically this provides O(1) time complexity, since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run the computation on a single device. Others have proven divergence from the original algorithm. Remote Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge from the original algorithm.
Several other implementations are primarily geared towards shared memory systems, and address the thread contention issue for gradient descent.

Our objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures – including computational units such as GPUs and high performance interconnects such as InfiniBand and Intel Omni-Path – by using MPI.

1.1 CONTRIBUTIONS

Specifically, we make the following contributions in this paper:

- We design a baseline asynchronous gradient descent, which delays the gradient updates of the entire model by one or more iterations adaptively on the basis of available overlap and user-defined input.
- We propose a layer-wise gradient descent method, which overlaps the weight updates of a layer with the inter-node synchronization of other layers. The proposed method is exactly equivalent to the default sequential algorithm.
- We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTEx), which consists of a distributed memory implementation of Caffe using MPI (Gropp et al., 1996; Geist et al., 1996).
- We evaluate our approaches and other baseline implementations on a large scale CPU-based InfiniBand cluster as well as on NVIDIA's DGX-1 multi-GPU system. We use several well-studied datasets and DNN topologies such as ImageNet (1.3M images, 250GB dataset) with the AlexNet and GoogleNet DNNs.

Our evaluation indicates the efficacy of the proposed approach. Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% parallel efficiency.

The rest of the paper is organized as follows: In Section 2, we present work related to our proposed research. We present the background in Section 3, followed by an in-depth solution space in Section 4. In Section 6, we present a detailed performance evaluation of asynchronous gradient descent, and conclusions with future directions in Section 7.

2 RELATED WORK

Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core, and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al., 2014) (CPUs/GPUs), Warp-CTC (GPUs), Theano (Bastien et al., 2012; Bergstra et al., 2010) (CPUs/GPUs), Torch (Collobert et al., 2002) (CPUs/GPUs), CNTK (Agarwal et al., 2014) (GPUs and distributed memory using MPI), and Google TensorFlow (Abadi et al., 2015), which use NVIDIA's CUDA Deep Neural Network library (cuDNN).

Caffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below. Caffe supports execution on a single node (connected with several GPUs), and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to TensorFlow as well.

Caffe (and other deep learning software) is also equipped with several optimizations designed to avoid significant problems in training deep networks.
The vanishing gradient problem (Bianchini & Scarselli, 2014) causes deep networks to fail to learn much at all in the early layers, and was solved in (Hinton & Osindero, 2006) and (Bengio et al., 2007), where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov, 2006) and then put together to form a single network (Vincent et al., 2010). Another optimization that helps to solve this problem is switching from sigmoidal neurons to rectified linear neurons.

The problem of accelerating gradient descent, especially distributed across compute resources, is of interest to many researchers. Approaches generally fall into two categories, depending on whether or not they are equivalent to having run using a single compute device; utilizing a single compute device necessarily computes gradient updates and applies them immediately to the model. Further, the gradient updates can be classified as either synchronous or asynchronous depending on whether the communication of the gradients can be overlapped with any computation of the gradients. For example, the DistBelief parameter server approach (Dean et al., 2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent to having run on a single device, it is able to process samples much faster.

Chen et al. (2016) revisit asynchronous gradient descent and propose a few synchronous variants in order to improve time to convergence. Notably, they show that waiting for all workers to complete, aggregating the gradients, and applying the gradients to the same common model (whereby each worker has a copy of the latest model) provides a good time to convergence while also leveraging multiple compute devices. Their approach is where this paper begins, while additionally proposing approaches ranging from synchronous to parameter server variants.

3 FUNDAMENTALS

3.1 NEURAL NETWORKS

Machine Learning algorithms designed to emulate the computational structure of the brain to model data are called "Neural Networks." The basic unit of a neural network is the neuron, and neurons are connected to one another via synapses.

3.1.1 BACKPROPAGATION

Neural networks are trained through an algorithm called backpropagation. This is a means of computing gradients layer by layer to implement the gradient descent algorithm's update rule of

w′ = w + η∇_w C   (1)
b′ = b + η∇_b C   (2)

where w are the weights, b the biases, η the learning rate, and C is a cost function to be optimized, usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule, such as Adaptive Gradient Descent (AdaGrad) (Duchi et al., 2011) or Momentum (Qian, 1999).

To compute the gradients, we let W^(ℓ), b^(ℓ) be the weights and biases for each layer, with z^(ℓ+1) = W^(ℓ) a^(ℓ) + b^(ℓ) and a^(ℓ) = σ(z^(ℓ)), where σ is the activation function. Let n_ℓ represent the number of layers. Then, we use Algorithm 1.

Algorithm 1 Backpropagation
1: input: data X ∈ R^(n×p) and labels Y ∈ R^(n×ℓ)
2: for i from 1 to n do
3:   compute all z^(ℓ) and a^(ℓ)
4:   δ^(n_ℓ) = (y − a^(n_ℓ)) ⊙ σ′(z^(n_ℓ))
5:   for ℓ from n_ℓ − 1 to 2 do
6:     δ^(ℓ) = (W^(ℓ))ᵀ δ^(ℓ+1) ⊙ σ′(z^(ℓ))
7:   end for
8:   ∇_{W^(ℓ)} C = δ^(ℓ+1) (a^(ℓ))ᵀ
9:   ∇_{b^(ℓ)} C = δ^(ℓ+1)
10: end for

Although there are several nonlinear activation functions in common use, the networks examined in this paper only include rectified linear units (ReLU), where ReLU(x) = max(0, x).

3.2 CAFFE

Caffe (Jia et al., 2014) is one of the leading software packages for building and training neural networks.
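Before continuing with Caffe, here is a minimal NumPy sketch of Algorithm 1 above for a fully-connected ReLU network. It follows the paper's notation (including the (y − a) error term); the function names are illustrative, not from the paper's code.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_prime(z):
    return (z > 0).astype(z.dtype)

def backprop(W, b, x, y):
    """W, b: per-layer weight matrices and bias vectors; x: input; y: label."""
    # Forward pass: z^(l+1) = W^(l) a^(l) + b^(l), a^(l) = sigma(z^(l)).
    activations, zs = [x], []
    for Wl, bl in zip(W, b):
        z = Wl @ activations[-1] + bl
        zs.append(z)
        activations.append(relu(z))
    # Output-layer error (Algorithm 1, line 4).
    delta = (y - activations[-1]) * relu_prime(zs[-1])
    grads_W = [None] * len(W)
    grads_b = [None] * len(b)
    # Backward pass, from the last layer to the first (lines 5-9).
    for l in reversed(range(len(W))):
        grads_W[l] = np.outer(delta, activations[l])  # gradient wrt W^(l)
        grads_b[l] = delta                            # gradient wrt b^(l)
        if l > 0:
            delta = (W[l].T @ delta) * relu_prime(zs[l - 1])
    return grads_W, grads_b
```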
It provides abstractions for a wide range of topologies and for training them with many different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors), which are essential for implementing Deep Learning algorithms. From an input tensor, an output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that manages these tensors and their updates as a single object. Caffe is particularly useful for researchers, because it is heavily optimized and can be modified through an open source C++ backend.

As Caffe's runtime is implemented in C++, it can extract native performance from the computation environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging the NVIDIA CUDA Deep Neural Network library (cuDNN) for the task. We have modified this code for distributed memory computation on large scale systems, using MPI to natively use the network hardware for optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al., 2015), another distributed memory implementation of Caffe. Further modifications are described in Section 4.

There are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch, starting at the first layer. Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer. Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch.

4 SOLUTION SPACE

The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while also maintaining equivalence to the sequential computation is to use data parallelism. Data parallelism is where the traditional batch is further subdivided into equally sized mini-batches, each mini-batch is computed on a separate device, and then the gradients resulting from each mini-batch are averaged together. Since each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways of hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients.

Caffe provides callback methods in its C++ interface that interject user-defined functionality into key phases of the computation (see Section 3.2). Specifically, one user-defined function is executed immediately before the forward pass when the batch computation begins. The other user-defined function executes after the backward pass finishes, but before the application of the gradients to the weights and biases. Additional callback functions were added to support finer-grained control over the three phases of computation.
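To visualize where these hooks fire, the following Python-style schematic walks through one training iteration; the hook names and objects are hypothetical stand-ins for the C++ callbacks described here and in the next paragraph, not Caffe's actual API.

```python
def train_iteration(net, hooks):
    hooks.on_batch_start()                  # built-in hook: before the forward pass
    net.forward()                           # forward pass over the mini-batch
    for layer in reversed(net.layers):      # backward pass, last layer first
        layer.backward()                    # computes this layer's gradients
        for param in layer.params:          # e.g., weights, then biases
            hooks.on_gradient_ready(param)  # added hook: one gradient is available
    hooks.on_gradients_ready()              # built-in hook: all gradients computed
    for param in net.params:                # apply phase
        hooks.on_before_apply(param)        # added hook: just before this update
        param.apply_update()                # w <- w + eta * grad
```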
One of the additional callbacks executes after each gradient is computed during the backward phase, once per set of learnable parameters, such as the weights or biases of a given layer. Another callback function that was added is called once per learnable parameter during the apply phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until all gradients have been applied.

A critical implementation detail for any of our proposed approaches is to make sure the individual network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master process are copied (broadcast) to the other processes. That way, any gradients that are computed, when averaged together, are based on the same initial conditions.

4.1 SYNCHRONOUS GRADIENT DESCENT

Similar to what Chen et al. (2016) propose and what is implemented in FireCaffe (Iandola et al., 2015), synchronous gradient descent averages the gradients from each mini-batch together before applying them, forming one complete batch at a time. The way this is implemented in Caffe is to use the callback function that executes when all gradients are ready to be applied. During this callback, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute device. This function is blocking, meaning it returns control back to Caffe only after the sum is computed across all devices. Since the result is a sum and not the intended average, it is then scaled down based on the number of compute devices in use. It is important to note that the reduction operation can be performed in-place, meaning it can use the memory location directly holding the gradient without performing any costly memory copies, which matters especially for networks with a large number of parameters such as AlexNet. This approach also has the important quality that the gradients are averaged after they have been used by each layer of the backpropagation, preserving the importance of any activations within the network against the mini-batch instead of against the effective batch.

4.2 LAYER-WISE GRADIENT DESCENT

Chen et al. (2016) propose the pipelining of gradient computation and application. For example, the gradients of upper layers can be concurrently applied while computing the gradients of lower layers. This approach must be done carefully to maintain equivalence with the sequential base case. We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers will use and/or update the gradients of layers that have otherwise finished computing their gradients. This implies the averaging of the gradients must be performed on a copy of the gradients rather than in-place. Further, the averaging of the copied gradients must finish before they can be applied.

We utilize a background thread of computation in order to perform the gradient averaging concurrently with the remaining gradient computation. This provides maximal overlap of the communication latency with useful computation.
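A minimal mpi4py sketch of this layer-wise scheme appears below: each gradient is copied and its averaging is initiated with a nonblocking allreduce as soon as backpropagation produces it, and gradients are applied as their averages complete. It assumes an MPI-3 library (for Iallreduce); `model.apply_gradient` and the pending list are hypothetical stand-ins, not the authors' code.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def start_average(grad: np.ndarray):
    # Average on a copy so adjacent layers can keep using the original
    # gradient during the remainder of backpropagation.
    buf = grad.copy()
    req = comm.Iallreduce(MPI.IN_PLACE, buf, op=MPI.SUM)  # nonblocking
    return req, buf

def apply_as_completed(pending, model):
    # Task-wise application: apply each gradient as soon as its averaging
    # finishes, in whatever order the communications complete.
    while pending:
        idx = MPI.Request.Waitany([req for req, _, _ in pending])
        _, buf, param_id = pending.pop(idx)
        buf /= comm.Get_size()
        model.apply_gradient(param_id, buf)

# During the backward pass, once per learnable parameter (last layer first):
#   pending.append((*start_average(param.grad), param.id))
```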
There are a few options for when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightforward and similar to the synchronous approach described previously, though perhaps at least some of the communication latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients for a particular layer to finish averaging and then apply the gradients. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the first layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order. This takes advantage of the observation that not all layers have the same number of parameters, and further, the gradients for the weights and the gradients for the biases can be averaged separately; the weight gradients are typically larger than the bias gradients, implying that the bias gradients will complete their communication more quickly. Since the communication of the various parameters can finish in a somewhat arbitrary order, depending on when the communication was initiated and the size of the communication, we can apply the gradients as soon as they complete their averaging. We evaluate these strategies in Section 6.

4.3 ASYNCHRONOUS GRADIENT DESCENT

As stated in (Chen et al., 2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data-parallel approach, there is a lower limit to the size of the mini-batches and therefore to the number of compute devices that can be utilized. As the amount of work per compute device decreases in proportion to the shrinking mini-batches, there is less computation available to mask the latency of the gradient averaging across the devices. Initiating the averaging layer-wise as described above may not be enough to mitigate this problem.

We propose delaying the application of the gradients by a fixed number of iterations, much smaller than the number of compute devices as would have been done in a parameter server approach. The gradients are delayed by using a concurrent communication thread and applying the gradient one, two, or three iterations later, thus giving the averaging enough time to complete as needed. If the gradient needs to be delayed by one iteration, this requires one communication thread and one additional buffer to hold the gradient; delaying by two iterations requires two communication threads and two additional buffers, and so on. This approach sits somewhere between a parameter server (Dean et al., 2012) and the various approaches that maintain equivalency with a sequential computation.

5 IMPLEMENTATION DETAILS

The implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O.

5.1 HANDLING I/O

The data parallelism is achieved by distributing datasets across compute devices, partitioning them based on the number of devices utilized; each device receives a disjoint subset of the dataset and no samples are shuffled or exchanged between compute devices outside of the gradient averaging. Caffe frequently uses a database in LMDB format for its datasets; however, this format cannot be used on remote (network) filesystems or even between processes on the same host.
Caffe mitigates this issue when using more than one GPU on the same host by using a single I/O reading thread and a round-robin distribution of the samples to device-specific queues. Our implementations mitigate this issue by first converting an LMDB database into a netCDF file (Rew & Davis, 1990). netCDF files can be read and partitioned using parallel MPI-IO via the parallel netCDF library (Li et al., 2003).

5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI

For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold the gradients for each. We extended this approach for CPU hosts. A single contiguous buffer allows the non-layer-wise, i.e., network-wise, gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated, so it is sometimes beneficial to aggregate many small communication requests into a larger one.

Although Caffe provides a way of utilizing all GPUs within the host, it does not currently leverage NVIDIA's NCCL package (NVIDIA Corporation, 2015) for optimized, high-bandwidth collective communication routines. We used the NCCL equivalent of the MPI allreduce to sum gradients across GPU devices on the DGX-1 platform.

6 EXPERIMENTAL EVALUATION

In this section, we present an experimental evaluation and analysis of the heuristics described in Section 4.

6.1 HARDWARE ARCHITECTURES

We evaluate using a CPU cluster as well as NVIDIA's specialized DGX-1 multi-GPU host system. Each node of the multi-node cluster consists of a multi-core Intel Sandy Bridge CPU connected via InfiniBand. We use Intel MPI 5.1.2 for performance evaluation. The heuristics are implemented in Caffe (Jia et al., 2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs.

The DGX-1 system contains 8 Pascal GPUs connected using the high-speed NVLink interconnect. For the DGX-1 evaluations, the latest version of Berkeley's Caffe was modified to use the NCCL communication primitives in addition to our algorithmic changes.

6.2 IMAGENET AND NETWORK ARCHITECTURES

We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al., 2015) dataset. This dataset consists of a training set of just under 1.3 million images of various sizes (as JPEG files) divided among 1000 classes, along with a validation set consisting of 50,000 images of the same type and classes. Additionally, for the competition, there is a testing set, but it is held separately and not available publicly. It is established as one of the benchmark datasets for machine learning with large datasets, and among the famous architectures that achieved record top-1 and top-5 accuracies on it are AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015).

We evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for parallelization: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million.
In contrast to the smaller amount of communication for GoogLeNet, it requires roughly twice as much time to process each image as AlexNet does when communication is ignored.

6.3 EVALUATION

Figure 1 compares the implemented approaches relative to a communication-less baseline, "no comm". The effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of 32/8 = 4. The evaluation on DGX-1 was limited to 8 compute devices, whereas the CPU cluster evaluation eventually hit the strong scaling limit for data parallelism.

These results show that delaying the gradient updates by one or more iterations is the most effective means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms.

The layer-wise approaches, though promising as equivalent to a sequential computation, were not able to complete their gradient averages quickly enough. Compared to the delayed gradient approach, this is perhaps intuitive. The delayed gradient approach is able to hide the communication latency across all three complete phases of the computation, whereas the layer-wise approaches only have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to mask the communication.

In addition to looking at the time per batch above, the rates of convergence of these heuristics must be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of 54% using the default AlexNet settings that come with Caffe. However, it is worth noting that at the beginning of training they showed different loss curves, indicating a tradeoff between the number of batches per second and the accuracy at a given batch, as shown in Table 1.

[Figure 1: bar charts of iterations per second versus number of compute devices (1-32). CPU panels compare no comm, SGD, SGD layer-wise, AGD 1 comm, AGD 2 comm, SGD task-wise 1 comm, and SGD task-wise 2 comm; DGX-1 panels compare no comm, SGD, AGD 1 comm, AGD 2 comm, and AGD 3 comm. Panels: (a) AlexNet CPU, (b) AlexNet DGX-1, (c) GoogLeNet CPU, (d) GoogLeNet DGX-1.] Figure 1: Evaluation of SGD and AGD approaches. Effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively.

Table 1: AlexNet accuracy after every 1000 batches on DGX-1

batch          1000     2000     3000     4000     5000
serial, 1 GPU  0.0124   0.05164  0.10102  0.13432  0.16454
SGD            0.01116  0.03984  0.07594  0.10622  0.13052
AGD, 1 comm    0.0039   0.01324  0.02632  0.05076  0.07362
AGD, 2 comm    0.00104  0.00356  0.00636  0.01282  0.01688

We also evaluated whether these approaches converged, in addition to just improving the number of iterations per second. All approaches evaluated managed to converge within the expected number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach with two communication threads, using the standard AlexNet network from Caffe.

7 CONCLUSIONS

There is a tradeoff between maintaining equivalence to sequential methods versus leveraging the vast computational resources available for gradient descent.
We find that asynchronous methods can give a 1.7x speedup while not sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously published results using parameter servers. | B1JhRTbNg | review for Leveraging Asynchronicity in Gradient Descent for Scalable Deep Learning | 5: Marginally below acceptance threshold | This paper describes an implementation of a delayed synchronous SGD method for multi-GPU deep net training.
Comments
1) The described manual implementation of delayed synchronization and state protection is helpful. However, such dependencies could have been implemented by a dependency scheduler, without doing the threading manually.
2) The overlap of computation and communication is a known technique implemented in existing solutions such as TensorFlow (as described in Chen et al.) and MXNet. The claimed contribution of this point is somewhat limited.
3) The convergence accuracy is only reported for the beginning iterations and only on AlexNet. It would be more helpful to include convergence curves through the end of training for all compared networks.
In summary, this paper implements a variant of the delayed synchronous SGD approach. I find the novelty of the system somewhat limited (due to comment (2)). The experiments should have been improved to demonstrate the advantage of the proposed approach.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJLS7qKel | ICLR.cc/2017/conference | 2017 | Learning to Act by Predicting the Future | ["Alexey Dosovitskiy", "Vladlen Koltun"] | We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments. | ["future", "environment", "model", "goals", "results", "control", "immersive environments", "sensory stream", "measurement stream", "cotemporal structure"] | ABSTRACT We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.

1 INTRODUCTION

Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior (Murphy, 2012). Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) (Kaelbling et al., 1996; Sutton & Barto, 2017). While reinforcement learning (RL) has achieved significant progress (Mnih et al., 2015), key challenges remain.
One is sensorimotor control from raw sensory input in complex and dynamic three-dimensional environments, learned directly from experience. Another is the acquisition of general skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals (Lake et al., 2016).

In this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input {s_t} and a stream of measurements {m_t}. The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimensionality and constitutes a set of data that pertain to the agent's current state. In a physical system, measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries overcome.

Our guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal. Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements. Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals.

This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006).

The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal.

We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience.

Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks.
Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%.

2 BACKGROUND

The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available.

Sutton (1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the prediction is made. Sutton's influential analysis assumes a sparse scalar reward. TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al., 2013; Mnih et al., 2015; Sutton & Barto, 2017). While the use of SL is natural in imitation learning (LeCun et al., 2005; Ross et al., 2013) or in conjunction with model-based RL (Levine & Koltun, 2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al., 2016). Our work suggests that when the learner is exposed to dense multidimensional sensory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments.

Our approach has similarities to Monte Carlo methods. The convergence of such methods was analyzed early on, and they were seen as theoretically advantageous, particularly when function approximators are used (Bertsekas, 1995; Sutton, 1995; Singh & Sutton, 1996). The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical performance on canonical examples (Sutton, 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvári & Littman, 1999; Tsitsiklis, 2002; Even-Dar & Mansour, 2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto, 2017). Sharp negative examples exist (Bertsekas, 2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting.

Vector-valued feedback has been considered in the context of multi-objective decision-making (Gábor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al.
(2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments.

Learning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by Mnih et al. (2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al. (2016) considered continuous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih et al. (2016) described asynchronous variants of deep RL methods and demonstrated navigation in a three-dimensional labyrinth. Oh et al. (2016) augmented deep Q-networks with external memory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report, Kulkarni et al. (2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al. (2016) considered a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth. Experiments reported in Section 4 demonstrate that our approach significantly outperforms state-of-the-art deep RL methods.

Prediction of future states in dynamical systems was considered by Littman et al. (2001) and Singh et al. (2003). Predictive representations in the form of generalized value functions were advocated by Sutton et al. (2011). More recently, Oh et al. (2015) learned to predict future frames in Atari games. Prediction of full sensory input in realistic three-dimensional environments remains an open challenge, although significant progress is being made (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). Our work considers prediction of future values of meaningful measurements from rich sensory input and shows that such prediction supports effective sensorimotor control.

3 MODEL

Consider an agent that interacts with the environment over discrete time steps. At each time step t, the agent receives an observation o_t and executes an action a_t based on this observation. We assume that the observations have the following structure: o_t = ⟨s_t, m_t⟩, where s_t is raw sensory input and m_t is a set of measurements. In our experiments, s_t is an image: the agent's view of its three-dimensional environment. More generally, s_t can include input from multiple sensory modalities. The measurements m_t can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game.

The distinction between sensory input s_t and measurements m_t is somewhat artificial: both s_t and m_t constitute sensory input in different forms. In our model, the measurement vector m_t is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted.
Second, we assume that the agent's goals can be defined in terms of future measurements. Specifically, let τ_1, ..., τ_n be a set of temporal offsets and let

f = ⟨m_{t+τ_1} − m_t, ..., m_{t+τ_n} − m_t⟩

be the corresponding differences of future and present measurements. We assume that any goal that the agent will pursue can be defined as maximization of a function u(f; g). Any parametric function can be used. Our experiments use goals that are expressed as linear combinations of future measurements:

u(f; g) = gᵀ f,   (1)

where the vector g parameterizes the goal and has the same dimensionality as f. This model generalizes the standard reinforcement learning formulation: the scalar reward signal can be viewed as a measurement, and exponential decay is one possible configuration of the goal vector.

To predict future measurements, we use a parameterized function approximator, denoted by F:

p_t^a = F(o_t, a, g; θ).   (2)

Here a ∈ A is an action, θ are the learned parameters of F, and p_t^a is the resulting prediction. The dimensionality of p_t^a matches the dimensionality of f and g. Note that the prediction is a function of the current observation, the considered action, and the goal. At test time, given learned parameters θ, the agent can choose the action that yields the best predicted outcome:

a_t = argmax_{a ∈ A} gᵀ F(o_t, a, g; θ).   (3)

The goal vector used at test time need not be identical to any goal seen during training.

3.1 TRAINING

The predictor F is trained on experiences collected by the agent. Starting with a random policy, the agent begins to interact with its environment. This interaction takes place over episodes that last for a fixed number of time steps or until a terminal event occurs.

Consider a set of experiences collected by the agent, yielding a set D of training examples: D = {⟨o_i, a_i, g_i, f_i⟩}_{i=1}^N. Here ⟨o_i, a_i, g_i⟩ is the input and f_i is the output of example i. The predictor is trained using a regression loss:

L(θ) = Σ_{i=1}^N ‖F(o_i, a_i, g_i; θ) − f_i‖².   (4)

A classification loss can be used for predicting categorical measurements, but this was not necessary in our experiments.

As the agent collects new experiences, the training set D and the predictor used by the agent change. We maintain an experience memory of the M most recent experiences, out of which a mini-batch of N examples is randomly sampled for every iteration of the solver. The parameters of the predictor used by the agent are updated after every k new experiences. This setup departs from pure on-policy training, and we have not observed any adverse effect of using a small experience memory. Additional details are provided in Appendix A.

We have evaluated two training regimes:
1. Single goal: the goal vector is fixed throughout the training process.
2. Randomized goals: the goal vector for each episode is generated at random.

In both regimes, the agent follows an ε-greedy policy: it acts greedily according to the current goal with probability 1 − ε, and selects a random action with probability ε. The value of ε is initially set to 1 and is decreased during training according to a fixed schedule.

3.2 ARCHITECTURE

The predictor F is a deep network parameterized by θ. The network architecture we use is shown in Figure 1. The network has three input modules: a perception module S(s), a measurement module M(m), and a goal module G(g). In our experiments, s is an image and the perception module S is implemented as a convolutional network. The measurement and goal modules are fully-connected networks.
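Setting the architecture details aside for a moment, Eqs. 3 and 4 above can be illustrated with a short sketch: goal-directed action selection with a trained predictor, and the regression loss for one example. `F` and the names here are illustrative stand-ins, not the authors' code.

```python
import numpy as np

def select_action(F, obs, goal, actions):
    # Eq. 3: act greedily with respect to the goal-weighted prediction
    # u(f; g) = g^T f of future measurement differences.
    scores = [goal @ F(obs, a, goal) for a in actions]
    return actions[int(np.argmax(scores))]

def example_loss(F, obs, action, goal, f_target):
    # Eq. 4: squared regression error against the observed future
    # measurement differences f for one training example.
    return float(np.sum((F(obs, action, goal) - f_target) ** 2))
```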
The outputs of the three input modules are concatenated, forming the joint input representation used for subsequent processing:

j = J(s, m, g) = ⟨S(s), M(m), G(g)⟩.   (5)

Future measurements are predicted based on this input representation. The network emits predictions of future measurements for all actions at once. This could be done by a fully-connected module that absorbs the input representation and outputs predictions. However, we found that introducing additional structure into the prediction module enhances its ability to learn the fine differences between the outcomes of different actions. To this end, we build on the ideas of Wang et al. (2016) and split the prediction module into two streams: an expectation stream E(j) and an action stream A(j).

[Figure 1: Network structure. The image s, measurements m, and goal g are first processed separately by three input modules. The outputs of these modules are concatenated into a joint representation j. This joint representation is processed by two parallel streams that predict the expected measurements E(j) and the normalized action-conditional differences {Ā_i(j)}, which are then combined to produce the final prediction for each action.]

The expectation stream predicts the average of the future measurements over all potential actions. The action stream concentrates on the fine differences between actions: A(j) = ⟨A_1(j), ..., A_w(j)⟩, where w = |A| is the number of actions. We add a normalization layer at the end of the action stream that ensures that the average of the predictions of the action stream is zero for each future measurement:

Ā_i(j) = A_i(j) − (1/w) Σ_{k=1}^w A_k(j)   (6)

for all i. The normalization layer subtracts the average over all actions from each prediction, forcing the expectation stream E to compensate by predicting these average values. The output of the expectation stream has dimensionality dim(f), where f is the vector of future measurements. The output of the action stream has dimensionality w · dim(f).

The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream:

p = ⟨p^{a_1}, ..., p^{a_w}⟩ = ⟨Ā_1(j) + E(j), ..., Ā_w(j) + E(j)⟩.   (7)

The output of the network has the same dimensionality as the output of the action stream.

4 EXPERIMENTS

We evaluate the presented approach in immersive three-dimensional simulations based on the classical game Doom. In these simulations, the agent has a first-person view of the environment and must act based on the same visual information that is shown to human players in the game. To interface with the game engine, we use the ViZDoom platform developed by Kempka et al. (2016). One of the advantages of this platform is that it allows running the simulation at thousands of frames per second on a single CPU core, which enables training models on tens of millions of simulation steps in a single day.

We compare the presented approach to state-of-the-art deep RL methods in four scenarios of increasing difficulty, study generalization across environments and goals, and evaluate the importance of different aspects of the model.

4.1 SETUP

Scenarios. We use four scenarios of increasing difficulty:

[Figure 2: Example frames from the four scenarios: D1 (Basic), D2 (Navigation), D3 (Battle), D4 (Battle 2).]

D1 Gathering health kits in a square room.
4 EXPERIMENTS
We evaluate the presented approach in immersive three-dimensional simulations based on the classical game Doom. In these simulations, the agent has a first-person view of the environment and must act based on the same visual information that is shown to human players in the game. To interface with the game engine, we use the ViZDoom platform developed by Kempka et al. (2016). One of the advantages of this platform is that it allows running the simulation at thousands of frames per second on a single CPU core, which enables training models on tens of millions of simulation steps in a single day.
We compare the presented approach to state-of-the-art deep RL methods in four scenarios of increasing difficulty, study generalization across environments and goals, and evaluate the importance of different aspects of the model.
4.1 SETUP
Scenarios. We use four scenarios of increasing difficulty:
D1 Gathering health kits in a square room. (“Basic”)
D2 Gathering health kits and avoiding poison vials in a maze. (“Navigation”)
D3 Defending against adversaries while gathering health and ammunition in a maze. (“Battle”)
D4 Defending against adversaries while gathering health and ammunition in a more complicated maze. (“Battle 2”)
[Figure 2: Example frames from the four scenarios: D1 (Basic), D2 (Navigation), D3 (Battle), D4 (Battle 2).]
These scenarios are illustrated in Figure 2 and in the supplementary video (http://bit.ly/2f9tacZ). The first two scenarios are provided with the ViZDoom platform. In D1, the agent is in a square room and its health is declining at a constant rate. To survive, it must move around and collect health kits, which are distributed abundantly in the room. This task is easy: as long as the agent learns to avoid walls and keep traversing the room, performance is good. In D2, the agent is in a maze and its health is again declining at a constant rate. Here it must again collect health kits that increase its health, but it must also avoid blue poison vials that decrease health. This task is harder: the agent must learn to traverse irregularly shaped passageways, and to distinguish health kits from poison vials. In both tasks, the agent has access to three binary sub-actions: move forward, turn left, and turn right. Any combination of these three can be used at any given time, resulting in 8 possible actions. The only measurement provided to the agent in these scenarios is health.
The last two scenarios, D3 and D4, are more challenging and were designed by us using elements of the ViZDoom platform. Here the agent is armed and is under attack by alien monsters. The monsters spawn abundantly, move around in the environment, and shoot fireballs at the agent. Health kits and ammunition are sporadically distributed throughout the environment and can be collected by the agent. The environment is a simple maze in D3 and a more complex one in D4. In both scenarios, the agent has access to eight sub-actions: move forward, move backward, turn left, turn right, strafe left, strafe right, run, and shoot. Any combination of these sub-actions can be used, resulting in 256 possible actions. The agent is provided with three measurements: health, ammunition, and frag count (number of monsters killed).
Model. The future predictor network used in our experiments was configured to be as close as possible to the DQN model of Mnih et al. (2015), to ensure a fair comparison. Additional details on the architecture are provided in Appendix A.
Training and testing. The agent is trained and tested over episodes. Each episode terminates after 525 steps (equivalent to 1 minute of real time) or when the agent's health drops to zero. Statistics reported in figures and tables summarize the final values of respective measurements at the end of episodes.
We set the temporal offsets $\tau_1, \ldots, \tau_n$ of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. Only the latest three time steps contribute to the objective function, with coefficients (0.5, 0.5, 1). More details are provided in Appendix A.
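Concretely, with these six offsets and the three Battle measurements, the full goal vector g of Eq. (1) can be assembled as an outer product of per-offset and per-measurement weights. The offset-major layout of f and the variable names are assumptions for illustration; the exact normalization constants for the measurements are given in Appendix A.

```python
import numpy as np

offsets = [1, 2, 4, 8, 16, 32]
temporal_weights = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 1.0])  # latest 3 offsets
measurement_weights = np.array([0.5, 0.5, 1.0])              # ammo, health, frags

# Goal over the full prediction vector f, assuming f stacks the measurement
# differences offset by offset: dim(f) = len(offsets) * 3 = 18.
goal = np.outer(temporal_weights, measurement_weights).flatten()
```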
4.2 RESULTS
Comparison to prior work. We have compared the presented approach to three deep RL methods: DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DSR (Kulkarni et al., 2016b). DQN is a standard baseline for visuomotor control due to its impressive performance on Atari games. A3C is more recent and is commonly regarded as the state of the art in this area. DSR is described in a recent technical report, and we included it because the authors also used the ViZDoom platform in experiments, albeit with a simple task. Further details on the setup of the prior approaches are provided in Appendix B.
The performance of the different approaches during training is shown in Figure 3. In reporting the results of these experiments, we refer to our approach as DFP (direct future prediction). For the first two scenarios, all approaches were trained to maximize health. For these scenarios, Figure 3 reports average health at the end of an episode over the course of training. For the last two scenarios, all approaches were trained to maximize a linear combination of the three normalized measurements (ammo, health, and frags) with coefficients (0.5, 0.5, 1). For these scenarios, Figure 3 reports average frags at the end of an episode. Each presented curve averages information from three independent training runs, and each data point is computed from 350,000 steps of testing.
DQN, A3C, and DFP were trained for 50 million steps. The training procedure for DSR is much slower and can only process roughly 1 million simulation steps per day. For this reason, we were only able to evaluate DSR on the Basic scenario and were not able to perform extensive hyperparameter tuning. We report results for this technique after 10 days of training. (This time was sufficient to significantly exceed the number of training steps reported in the experiments of Kulkarni et al. (2016b), but not sufficient to approach the number of steps afforded by the other approaches.)
Table 1 reports the performance of the models after training. Each fully trained model was tested over 1 million simulation steps. The table reports average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for D3 and D4. We also report the average training speed for each approach, in millions of simulation steps per day of training. The performance of the different models is additionally illustrated in the supplementary video (http://bit.ly/2f9tacZ).

        D1 (health)   D2 (health)   D3 (frags)   D4 (frags)   steps/day
DQN     89.1 ± 6.4    25.4 ± 7.8    1.2 ± 0.8    0.4 ± 0.2    7M
A3C     97.5 ± 0.1    59.3 ± 2.0    5.6 ± 0.2    6.7 ± 2.9    80M
DSR     4.6 ± 0.1     -             -            -            1M
DFP     97.7 ± 0.4    84.1 ± 0.6    33.5 ± 0.4   16.5 ± 1.1   70M

Table 1: Comparison to prior work. We report average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for scenarios D3 and D4.
[Figure 3: Performance of different approaches during training. DQN, A3C, and DFP achieve similar performance in the Basic scenario. DFP outperforms the prior approaches in the other three scenarios, with a multiplicative gap in performance in the most complex ones (D3 and D4).]
In the Basic scenario, DQN, A3C, and DFP all perform well. As reported in Table 1, the performance of A3C and DFP is virtually identical at 97.5%, while DQN reaches 89%. In the more complex Navigation scenario, a significant gap opens up between DQN and A3C; this is consistent with the experiments of Mnih et al. (2016). DFP achieves the best performance in this scenario, with a 25 percentage point advantage during testing. Note that in these first two scenarios, DFP was only given a single measurement per time step (health).
In the more complex Battle and Battle 2 scenarios (D3 and D4), DFP dominates the other approaches.
It outperforms A3C at test time by a factor of 6 in D3 and by a factor of 2.5 in D4. Note that the advantage of DFP is particularly significant in the scenarios that provide richer measurements: three measurements per time step in D3 and D4. The effect of multiple measurements is further evaluated in controlled experiments reported below.
Generalization across environments. We now evaluate how the behaviors learned by the presented approach generalize across different environments. To this end, we have created 100 randomly textured versions of the mazes from scenarios D3 and D4. We used 90 of these for training and 10 for testing, with disjoint sets of textures in the training and testing environments. We call these scenarios D3-tx and D4-tx.
Table 2 shows the performance of the approach for different combinations of training and testing regimes. For example, the entry in the D4-tx row of the D3 column shows the performance (in average number of frags at the end of an episode) of a model trained in D3 and tested in D4-tx. Not surprisingly, a model trained in the simple D3 environment does not learn sufficient invariance to surface appearance to generalize well to other environments. Training in the more complex multi-texture environment in D4 yields better generalization: the trained model performs well in D3 and exhibits non-trivial performance in D3-tx and D4-tx. Finally, exposing the model to significant variation in surface appearance in D3-tx or D4-tx during training yields very good generalization.

               Train:
Test       D3      D4      D3-tx   D4-tx   D4-tx-L
D3         33.6    17.8    29.8    20.9    22.0
D4         1.6     17.1    5.4     10.8    12.4
D3-tx      3.9     8.1     22.6    15.6    19.4
D4-tx      1.7     5.1     6.2     10.2    12.7

Table 2: Generalization across environments.
The last column of Table 2 additionally reports the performance of a higher-capacity model trained in D4-tx. This combination is referred to as D4-tx-L. As shown in the table, this model performs even better. The architecture is detailed in Appendix A.
Visual Doom AI Competition. To further evaluate the presented approach, we participated in the Visual Doom AI Competition, held during September 2016. The competition evaluated sensorimotor control models that act based on raw visual input. The competition had the form of a tournament: the submitted agents play multiple games against each other, their performance measured by aggregate frags. The competition included two tracks. The Limited Deathmatch track was held in a known environment that was given to the participants in advance at training time. The Full Deathmatch track evaluated generalization to previously unseen environments and took place in multiple new environments that were not available to the participating teams at training time. We only enrolled in the Full Deathmatch track. Our model was trained using a variant of the D4-tx-L regime.
Our model won, outperforming the second best submission by more than 50%. That submission, described by Lample & Chaplot (2016), constitutes a strong baseline. It is a deep recurrent Q-network that incorporates an LSTM and was trained using reward shaping and extra supervision from the game engine. Specifically, the authors took advantage of the ability provided by the ViZDoom platform to use the internal configuration of the game, including ground-truth knowledge of the presence of enemies in the field of view, during training. The authors' report shows that this additional supervision improved performance significantly.
Our model, which is simpler, achieved even higher performance without such additional supervision.
Goal-agnostic training. We now evaluate the ability of the presented approach to learn without a fixed goal at training time, and to adapt to varying goals at test time. These experiments are performed in the Battle scenario. We use three training regimes: (a) fixed goal vector during training, (b) random goal vector with each value sampled uniformly from [0, 1] for every episode, and (c) random goal vector with each value sampled uniformly from [-1, 1] for every episode. More details are provided in Appendix A. Intuitively, in the second regime the agent is instructed to maximize the different measurements, but has no knowledge of their relative importance. The third regime makes no assumptions as to whether the measured quantities are desirable or not.
The results are shown in Table 3. Each group of columns corresponds to a training regime and each row to a different test-time goal. Goals are given by the weights of the three measurements (ammo, health, and frags) in the objective function. The first test-time goal in Table 3 is the goal vector used in the battle scenarios in the prior experiments, the second seeks to maximize the frag count, the third is a pacifist (maximize ammo and health, minimize frags), the fourth seeks to aimlessly drain ammunition, and the fifth aims to maximize health. For each row, each group of columns reports the average value of each of the three measurements at the end of an episode. Note that the health level at the end of an episode can be negative if the agent suffered major damage in the pre-terminal step.

                 (a) fixed goal (0.5, 0.5, 1)   (b) random goals [0, 1]   (c) random goals [-1, 1]
test goal        ammo   health   frags          ammo   health   frags     ammo   health   frags
(0.5, 0.5, 1)    83.4   97.0     33.6           92.3   96.9     31.5      49.3   94.3     28.9
(0, 0, 1)        0.3    -3.7     11.5           4.3    30.0     20.6      21.8   70.9     24.6
(1, 1, -1)       28.6   -2.0     0.0            22.1   4.4      0.2       89.4   83.6     0.0
(-1, 0, 0)       1.0    -8.3     1.7            1.9    -7.5     1.2       0.9    -8.6     1.7
(0, 1, 0)        0.7    2.7      2.6            9.0    77.8     6.6       3.0    69.6     7.9

Table 3: Generalization across goals. Each group of three columns corresponds to a training regime, each row corresponds to a test-time goal. The results in the first row indicate that the approach performs well on the main task even without knowing the goal at training time. The results in the other rows indicate that goal-agnostic training supports generalization across goals at test time.
We draw two main conclusions. First, on the main task (first row), models trained without knowing the goal in advance (b, c) perform nearly as well as a dedicated model trained specifically for the eventual goal (a). Without knowing the eventual goal during training, the agent performs the task almost as well as when it was specifically trained for it. Second, all models generalize to new goals, but not equally well. Models trained with a variety of goals (b, c) generalize much better than a model trained with a fixed goal.
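For concreteness, the per-episode goal sampling in regimes (b) and (c) amounts to something like the following sketch (same layout assumption as before; whether the temporal weights stay fixed under goal randomization is our assumption, and the function name is ours):

```python
import numpy as np

rng = np.random.default_rng()

def sample_goal(regime, temporal_weights=np.array([0., 0., 0., .5, .5, 1.])):
    # Draw per-measurement weights once per episode: regime (b) samples from
    # [0, 1], regime (c) from [-1, 1]; regime (a) keeps the fixed (0.5, 0.5, 1).
    if regime == "a":
        w = np.array([0.5, 0.5, 1.0])
    elif regime == "b":
        w = rng.uniform(0.0, 1.0, size=3)
    else:  # regime "c"
        w = rng.uniform(-1.0, 1.0, size=3)
    return np.outer(temporal_weights, w).flatten()
```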
Ablation study. We now perform an ablation study using the D3-tx scenario. Specifically, we evaluate the importance of vectorial feedback versus a scalar reward, and the effect of predicting measurements at multiple temporal offsets. The results are summarized in Table 4.

                                      frags
all measurements, all offsets         22.6
all measurements, one offset          17.2
frags only, all offsets               10.3
frags only, one offset                5.0

Table 4: Ablation study. Predicting all measurements at all temporal offsets yields the best results.
The table reports the performance (in average frags at the end of an episode) of our full model (predicting three measurements at six temporal offsets) and of ablated variants that only predict frags (a scalar reward) and/or only predict at the farthest temporal offset. As the results demonstrate, predicting multiple measurements significantly improves the performance of the learned model, even when it is evaluated by only one of those measurements. Predicting measurements at multiple future times is also beneficial. This supports the intuition that a dense flow of multivariate measurements is a better training signal than a scalar reward.
5 DISCUSSION
We presented an approach to sensorimotor control in immersive environments. Our approach is simple and demonstrates that supervised learning techniques can be adapted to learning to act in complex and dynamic three-dimensional environments given raw sensory input and intrinsic measurements. The model trains on raw experience, by interacting with the environment without extraneous supervision. Natural supervision is provided by the cotemporal structure of the sensory and measurement streams. Our experiments have demonstrated that this simple approach outperforms sophisticated deep reinforcement learning formulations on challenging tasks in immersive environments. Experiments have further demonstrated that the use of multivariate measurements provides a significant advantage over conventional scalar rewards and that the trained model can effectively pursue new goals not specified during training.
The presented work can be extended in multiple ways that are important for broadening the range of behaviors that can be learned. First, the presented model is purely reactive: it acts based on the current frame only, with no explicit facilities for memory and no test-time retention of internal representations. Recent work has explored memory-based models (Oh et al., 2016), and integrating such ideas with the presented approach may yield substantial advances. Second, significant progress in behavioral sophistication will likely require temporal abstraction and hierarchical organization of learned skills (Barto & Mahadevan, 2003; Kulkarni et al., 2016a). Third, the presented model was developed for discrete action spaces; applying the presented ideas to continuous actions would be interesting (Lillicrap et al., 2016). Finally, predicting features learned directly from rich sensory input can blur the distinction between sensory and measurement streams (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). | ryVinjW4g | 8: Top 50% of accepted papers, clear accept | This paper presents an on-policy deep RL method with additional auxiliary intrinsic variables.
- The method is a special case of a universal value function based approach, and the authors do cite the correct references. Maybe the biggest claimed technical contribution of this paper is to distill many of the existing ideas to solve 3D navigation problems. I think the contributions should be more clearly stated in the abstract/intro.
- I would have liked to see failure modes of this approach. Under what circumstances does the model have problems generalizing to changing goals? There are other conceptual problems -- since this is an on-policy method, there will be catastrophic forgetting if the agent doesn't repeatedly train on goals from the distant past.
- Since the main contribution of this paper is to integrate several key ideas and show empirical advantage, I would have liked to see results on other domains like Atari (maybe using the ROM as intrinsic variables)
Overall, I think this paper does show a clear empirical advantage of the proposed underlying formulations, and the experimental insights from this paper might be valuable for future agents. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct
|
rJLS7qKel | ICLR.cc/2017/conference | 2017 | Learning to Act by Predicting the Future | ["Alexey Dosovitskiy", "Vladlen Koltun"] | We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments. | ["future", "environment", "model", "goals", "results", "control", "immersive environments", "sensory stream", "measurement stream", "cotemporal structure"] | ABSTRACT
We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.
1 INTRODUCTION
Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior (Murphy, 2012). Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) (Kaelbling et al., 1996; Sutton & Barto, 2017). While reinforcement learning (RL) has achieved significant progress (Mnih et al., 2015), key challenges remain.
One is sensorimotor control from raw sensory input in complex and dynamic three-dimensional environments, learned directly from experience. Another is the acquisition of general skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals (Lake et al., 2016).
In this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input $\{\mathbf{s}_t\}$ and a stream of measurements $\{\mathbf{m}_t\}$. The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimensionality and constitutes a set of data that pertain to the agent's current state. In a physical system, measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries overcome.
Our guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal. Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements. Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals.
This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006).
The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal.
We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience.
Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks.
Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%.
2 BACKGROUND
The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available.
Sutton (1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the prediction is made. Sutton's influential analysis assumes a sparse scalar reward. TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al., 2013; Mnih et al., 2015; Sutton & Barto, 2017). While the use of SL is natural in imitation learning (LeCun et al., 2005; Ross et al., 2013) or in conjunction with model-based RL (Levine & Koltun, 2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al., 2016). Our work suggests that when the learner is exposed to dense multidimensional sensory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments.
Our approach has similarities to Monte Carlo methods. The convergence of such methods was analyzed early on, and they were seen as theoretically advantageous, particularly when function approximators are used (Bertsekas, 1995; Sutton, 1995; Singh & Sutton, 1996). The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical performance on canonical examples (Sutton, 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvári & Littman, 1999; Tsitsiklis, 2002; Even-Dar & Mansour, 2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto, 2017). Sharp negative examples exist (Bertsekas, 2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting.
Vector-valued feedback has been considered in the context of multi-objective decision-making (Gábor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al.
(2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments.
Learning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by Mnih et al. (2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al. (2016) considered continuous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih et al. (2016) described asynchronous variants of deep RL methods and demonstrated navigation in a three-dimensional labyrinth. Oh et al. (2016) augmented deep Q-networks with external memory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report, Kulkarni et al. (2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al. (2016) considered a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth. Experiments reported in Section 4 demonstrate that our approach significantly outperforms state-of-the-art deep RL methods.
Prediction of future states in dynamical systems was considered by Littman et al. (2001) and Singh et al. (2003). Predictive representations in the form of generalized value functions were advocated by Sutton et al. (2011). More recently, Oh et al. (2015) learned to predict future frames in Atari games. Prediction of full sensory input in realistic three-dimensional environments remains an open challenge, although significant progress is being made (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). Our work considers prediction of future values of meaningful measurements from rich sensory input and shows that such prediction supports effective sensorimotor control.
3 MODEL
Consider an agent that interacts with the environment over discrete time steps. At each time step $t$, the agent receives an observation $\mathbf{o}_t$ and executes an action $a_t$ based on this observation. We assume that the observations have the following structure: $\mathbf{o}_t = \langle \mathbf{s}_t, \mathbf{m}_t \rangle$, where $\mathbf{s}_t$ is raw sensory input and $\mathbf{m}_t$ is a set of measurements. In our experiments, $\mathbf{s}_t$ is an image: the agent's view of its three-dimensional environment. More generally, $\mathbf{s}_t$ can include input from multiple sensory modalities. The measurements $\mathbf{m}_t$ can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game.
The distinction between sensory input $\mathbf{s}_t$ and measurements $\mathbf{m}_t$ is somewhat artificial: both $\mathbf{s}_t$ and $\mathbf{m}_t$ constitute sensory input in different forms. In our model, the measurement vector $\mathbf{m}_t$ is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted.
| BJvWS_GVg | Compelling empirically driven result | 7: Good paper, accept | Deep RL (using deep neural networks for function approximators in RL algorithms) has had a number of successes solving RL in large state spaces. This empirically driven work builds on these approaches. It introduces a new algorithm which performs better in novel 3D environments from raw sensory data and allows better generalization across goals and environments. Notably, this algorithm was the winner of the Visual Doom AI competition.
The key idea of their algorithm is to use additional low-dimensional observations (such as ammo or health, which are provided by the game engine) as a supervised target for prediction. Importantly, this prediction is conditioned on a goal vector (which is given, not learned) and the current action. Once trained, the optimal action for the current state can be chosen as the action that maximises the predicted outcome according to the goal. Unlike in successor feature representations, learning is supervised and there is no TD relationship between the predictions of the current state and the next state.
There have been a number of prior works both in predicting future states as part of RL and in goal-driven function approximators, which the authors review in section 2. The key contributions of this work are the focus on Monte Carlo estimation (rather than TD), the use of low-dimensional ‘measurements’ for prediction, the parametrized goals and, perhaps most importantly, the empirical comparison to relevant prior work.
In addition to the results from the Visual Doom AI competition, the authors show that their algorithm is able to learn generalizable policies which can respond, without further training, to limited changes in the goal.
The paper is well communicated, and the empirical results are compelling and will be of significant interest.
Some minor potential improvements:
There is an approximation in the supervised training: it makes an on-policy assumption but learns from a replay buffer (with the Monte Carlo regression, the expectation over the remainder of the trajectory is assumed to follow the current policy, but it is sampled from episodes generated by prior versions of the policy). This should be discussed.
The algorithm uses additional metadata (the information about which parts of the sensory input are worth predicting) that the compared algorithms do not. I think this, and the limitations of this approach (e.g. it may not work well in a sensory environment if such measurements are not provided), should be mentioned more clearly.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJLS7qKel | ICLR.cc/2017/conference | 2017 | Learning to Act by Predicting the Future | ["Alexey Dosovitskiy", "Vladlen Koltun"] | We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments. | ["future", "environment", "model", "goals", "results", "control", "immersive environments", "sensory stream", "measurement stream", "cotemporal structure"] | ABSTRACT

We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.

1 INTRODUCTION

Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior (Murphy, 2012). Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) (Kaelbling et al., 1996; Sutton & Barto, 2017).

While reinforcement learning (RL) has achieved significant progress (Mnih et al., 2015), key challenges remain.
One is sensorimotor control from raw sensory input in complex and dynamic three-dimensional environments, learned directly from experience. Another is the acquisition of general skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals (Lake et al., 2016).

In this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input $\{s_t\}$ and a stream of measurements $\{m_t\}$. The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimensionality and constitutes a set of data that pertain to the agent's current state. In a physical system, measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries overcome.

Our guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal. Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements. Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals.

This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006).

The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal.

We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience.

Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks.
Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%.

2 BACKGROUND

The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available.

Sutton (1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the prediction is made. Sutton's influential analysis assumes a sparse scalar reward. TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al., 2013; Mnih et al., 2015; Sutton & Barto, 2017). While the use of SL is natural in imitation learning (LeCun et al., 2005; Ross et al., 2013) or in conjunction with model-based RL (Levine & Koltun, 2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al., 2016). Our work suggests that when the learner is exposed to dense multidimensional sensory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments.

Our approach has similarities to Monte Carlo methods. The convergence of such methods was analyzed early on and they were seen as theoretically advantageous, particularly when function approximators are used (Bertsekas, 1995; Sutton, 1995; Singh & Sutton, 1996). The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical performance on canonical examples (Sutton, 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvári & Littman, 1999; Tsitsiklis, 2002; Even-Dar & Mansour, 2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto, 2017). Sharp negative examples exist (Bertsekas, 2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting.

Vector-valued feedback has been considered in the context of multi-objective decision-making (Gábor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al.
(2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments.

Learning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by Mnih et al. (2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al. (2016) considered continuous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih et al. (2016) described asynchronous variants of deep RL methods and demonstrated navigation in a three-dimensional labyrinth. Oh et al. (2016) augmented deep Q-networks with external memory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report, Kulkarni et al. (2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al. (2016) considered a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth. Experiments reported in Section 4 demonstrate that our approach significantly outperforms state-of-the-art deep RL methods.

Prediction of future states in dynamical systems was considered by Littman et al. (2001) and Singh et al. (2003). Predictive representations in the form of generalized value functions were advocated by Sutton et al. (2011). More recently, Oh et al. (2015) learned to predict future frames in Atari games. Prediction of full sensory input in realistic three-dimensional environments remains an open challenge, although significant progress is being made (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). Our work considers prediction of future values of meaningful measurements from rich sensory input and shows that such prediction supports effective sensorimotor control.

3 MODEL

Consider an agent that interacts with the environment over discrete time steps. At each time step $t$, the agent receives an observation $o_t$ and executes an action $a_t$ based on this observation. We assume that the observations have the following structure: $o_t = \langle s_t, m_t \rangle$, where $s_t$ is raw sensory input and $m_t$ is a set of measurements. In our experiments, $s_t$ is an image: the agent's view of its three-dimensional environment. More generally, $s_t$ can include input from multiple sensory modalities. The measurements $m_t$ can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game.

The distinction between sensory input $s_t$ and measurements $m_t$ is somewhat artificial: both $s_t$ and $m_t$ constitute sensory input in different forms. In our model, the measurement vector $m_t$ is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted.
Second, we assume that the agent's goals can be defined in terms of future measurements. Specifically, let $\tau_1, \ldots, \tau_n$ be a set of temporal offsets and let

$f = \langle m_{t+\tau_1} - m_t, \ldots, m_{t+\tau_n} - m_t \rangle$

be the corresponding differences of future and present measurements. We assume that any goal that the agent will pursue can be defined as maximization of a function $u(f; g)$. Any parametric function can be used. Our experiments use goals that are expressed as linear combinations of future measurements:

$u(f; g) = g^{\top} f, \quad (1)$

where the vector $g$ parameterizes the goal and has the same dimensionality as $f$. This model generalizes the standard reinforcement learning formulation: the scalar reward signal can be viewed as a measurement, and exponential decay is one possible configuration of the goal vector.

To predict future measurements, we use a parameterized function approximator, denoted by $F$:

$p_t^a = F(o_t, a, g; \theta). \quad (2)$

Here $a \in \mathcal{A}$ is an action, $\theta$ are the learned parameters of $F$, and $p_t^a$ is the resulting prediction. The dimensionality of $p_t^a$ matches the dimensionality of $f$ and $g$. Note that the prediction is a function of the current observation, the considered action, and the goal. At test time, given learned parameters $\theta$, the agent can choose the action that yields the best predicted outcome:

$a_t = \arg\max_{a \in \mathcal{A}} g^{\top} F(o_t, a, g; \theta). \quad (3)$

The goal vector used at test time need not be identical to any goal seen during training.

3.1 TRAINING

The predictor $F$ is trained on experiences collected by the agent. Starting with a random policy, the agent begins to interact with its environment. This interaction takes place over episodes that last for a fixed number of time steps or until a terminal event occurs.

Consider a set of experiences collected by the agent, yielding a set $D$ of training examples: $D = \{\langle o_i, a_i, g_i, f_i \rangle\}_{i=1}^{N}$. Here $\langle o_i, a_i, g_i \rangle$ is the input and $f_i$ is the output of example $i$. The predictor is trained using a regression loss:

$\mathcal{L}(\theta) = \sum_{i=1}^{N} \| F(o_i, a_i, g_i; \theta) - f_i \|^2. \quad (4)$

A classification loss can be used for predicting categorical measurements, but this was not necessary in our experiments.

As the agent collects new experiences, the training set $D$ and the predictor used by the agent change. We maintain an experience memory of the $M$ most recent experiences out of which a mini-batch of $N$ examples is randomly sampled for every iteration of the solver. The parameters of the predictor used by the agent are updated after every $k$ new experiences. This setup departs from pure on-policy training and we have not observed any adverse effect of using a small experience memory. Additional details are provided in Appendix A.

We have evaluated two training regimes:

1. Single goal: the goal vector is fixed throughout the training process.
2. Randomized goals: the goal vector for each episode is generated at random.

In both regimes, the agent follows an $\varepsilon$-greedy policy: it acts greedily according to the current goal with probability $1 - \varepsilon$, and selects a random action with probability $\varepsilon$. The value of $\varepsilon$ is initially set to 1 and is decreased during training according to a fixed schedule.

3.2 ARCHITECTURE

The predictor $F$ is a deep network parameterized by $\theta$. The network architecture we use is shown in Figure 1. The network has three input modules: a perception module $S(s)$, a measurement module $M(m)$ and a goal module $G(g)$. In our experiments, $s$ is an image and the perception module $S$ is implemented as a convolutional network. The measurement and goal modules are fully-connected networks.
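As an illustration of the three input modules just described, here is a minimal PyTorch-style sketch; it also previews the concatenation into the joint representation $j$ that the following text formalizes as Eq. (5). The layer sizes and names are illustrative assumptions, not the exact configuration from the paper (which is given in Appendix A):

```python
import torch
import torch.nn as nn

class InputModules(nn.Module):
    """Sketch of the perception, measurement, and goal modules and their
    concatenation into the joint representation j. Sizes are assumptions."""
    def __init__(self, meas_dim, goal_dim, embed_dim=128):
        super().__init__()
        # Perception module S(s): a convolutional network over the image.
        self.S = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(embed_dim), nn.ReLU())
        # Measurement module M(m) and goal module G(g): fully-connected nets.
        self.M = nn.Sequential(nn.Linear(meas_dim, embed_dim), nn.ReLU())
        self.G = nn.Sequential(nn.Linear(goal_dim, embed_dim), nn.ReLU())

    def forward(self, s, m, g):
        # Concatenate the three embeddings into the joint representation j.
        return torch.cat([self.S(s), self.M(m), self.G(g)], dim=1)
```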
The outputs of the three input modules are concatenated, forming the joint input representation used for subsequent processing:

$j = J(s, m, g) = \langle S(s), M(m), G(g) \rangle. \quad (5)$

Future measurements are predicted based on this input representation. The network emits predictions of future measurements for all actions at once. This could be done by a fully-connected module that absorbs the input representation and outputs predictions. However, we found that introducing additional structure into the prediction module enhances its ability to learn the fine differences between the outcomes of different actions. To this end, we build on the ideas of Wang et al. (2016) and split the prediction module into two streams: an expectation stream $E(j)$ and an action stream $A(j)$. The expectation stream predicts the average of the future measurements over all potential actions. The action stream concentrates on the fine differences between actions: $A(j) = \langle A_1(j), \ldots, A_w(j) \rangle$, where $w = |\mathcal{A}|$ is the number of actions. We add a normalization layer at the end of the action stream that ensures that the average of the predictions of the action stream is zero for each future measurement:

$\overline{A}_i(j) = A_i(j) - \frac{1}{w} \sum_{k=1}^{w} A_k(j) \quad (6)$

for all $i$. The normalization layer subtracts the average over all actions from each prediction, forcing the expectation stream $E$ to compensate by predicting these average values. The output of the expectation stream has dimensionality $\dim(f)$, where $f$ is the vector of future measurements. The output of the action stream has dimensionality $w \cdot \dim(f)$.

[Figure 1: network diagram showing the image, measurement, and goal inputs, the expectation and action streams, the normalization layer, and the per-action predictions] Figure 1: Network structure. The image $s$, measurements $m$, and goal $g$ are first processed separately by three input modules. The outputs of these modules are concatenated into a joint representation $j$. This joint representation is processed by two parallel streams that predict the expected measurements $E(j)$ and the normalized action-conditional differences $\{\overline{A}_i(j)\}$, which are then combined to produce the final prediction for each action.

The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream:

$p = \langle p^{a_1}, \ldots, p^{a_w} \rangle = \langle \overline{A}_1(j) + E(j), \ldots, \overline{A}_w(j) + E(j) \rangle. \quad (7)$

The output of the network has the same dimensionality as the output of the action stream.

4 EXPERIMENTS

We evaluate the presented approach in immersive three-dimensional simulations based on the classical game Doom. In these simulations, the agent has a first-person view of the environment and must act based on the same visual information that is shown to human players in the game. To interface with the game engine, we use the ViZDoom platform developed by Kempka et al. (2016). One of the advantages of this platform is that it allows running the simulation at thousands of frames per second on a single CPU core, which enables training models on tens of millions of simulation steps in a single day.

We compare the presented approach to state-of-the-art deep RL methods in four scenarios of increasing difficulty, study generalization across environments and goals, and evaluate the importance of different aspects of the model.

4.1 SETUP

[Figure 2: example frames from the four scenarios, with panels D1: Basic, D2: Navigation, D3: Battle, D4: Battle 2] Figure 2: Example frames from the four scenarios.

Scenarios. We use four scenarios of increasing difficulty:

D1 Gathering health kits in a square room.
(“Basic”)
D2 Gathering health kits and avoiding poison vials in a maze. (“Navigation”)
D3 Defending against adversaries while gathering health and ammunition in a maze. (“Battle”)
D4 Defending against adversaries while gathering health and ammunition in a more complicated maze. (“Battle 2”)

These scenarios are illustrated in Figure 2 and in the supplementary video (http://bit.ly/2f9tacZ). The first two scenarios are provided with the ViZDoom platform. In D1, the agent is in a square room and its health is declining at a constant rate. To survive, it must move around and collect health kits, which are distributed abundantly in the room. This task is easy: as long as the agent learns to avoid walls and keep traversing the room, performance is good. In D2, the agent is in a maze and its health is again declining at a constant rate. Here it must again collect health kits that increase its health, but it must also avoid blue poison vials that decrease health. This task is harder: the agent must learn to traverse irregularly shaped passageways, and to distinguish health kits from poison vials. In both tasks, the agent has access to three binary sub-actions: move forward, turn left, and turn right. Any combination of these three can be used at any given time, resulting in 8 possible actions. The only measurement provided to the agent in these scenarios is health.

The last two scenarios, D3 and D4, are more challenging and were designed by us using elements of the ViZDoom platform. Here the agent is armed and is under attack by alien monsters. The monsters spawn abundantly, move around in the environment, and shoot fireballs at the agent. Health kits and ammunition are sporadically distributed throughout the environment and can be collected by the agent. The environment is a simple maze in D3 and a more complex one in D4. In both scenarios, the agent has access to eight sub-actions: move forward, move backward, turn left, turn right, strafe left, strafe right, run, and shoot. Any combination of these sub-actions can be used, resulting in 256 possible actions. The agent is provided with three measurements: health, ammunition, and frag count (number of monsters killed).

Model. The future predictor network used in our experiments was configured to be as close as possible to the DQN model of Mnih et al. (2015), to ensure a fair comparison. Additional details on the architecture are provided in Appendix A.

Training and testing. The agent is trained and tested over episodes. Each episode terminates after 525 steps (equivalent to 1 minute of real time) or when the agent's health drops to zero. Statistics reported in figures and tables summarize the final values of respective measurements at the end of episodes.

We set the temporal offsets $\tau_1, \ldots, \tau_n$ of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. Only the latest three time steps contribute to the objective function, with coefficients (0.5, 0.5, 1). More details are provided in Appendix A.

4.2 RESULTS

Comparison to prior work. We have compared the presented approach to three deep RL methods: DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DSR (Kulkarni et al., 2016b). DQN is a standard baseline for visuomotor control due to its impressive performance on Atari games. A3C is more recent and is commonly regarded as the state of the art in this area. DSR is described in a recent technical report and we included it because the authors also used the ViZDoom platform in experiments, albeit with a simple task.
Further details on the setup of the prior approaches are provided in Appendix B.

The performance of the different approaches during training is shown in Figure 3. In reporting the results of these experiments, we refer to our approach as DFP (direct future prediction). For the first two scenarios, all approaches were trained to maximize health. For these scenarios, Figure 3 reports average health at the end of an episode over the course of training. For the last two scenarios, all approaches were trained to maximize a linear combination of the three normalized measurements (ammo, health, and frags) with coefficients (0.5, 0.5, 1). For these scenarios, Figure 3 reports average frags at the end of an episode. Each presented curve averages information from three independent training runs, and each data point is computed from 350,000 steps of testing. DQN, A3C, and DFP were trained for 50 million steps. The training procedure for DSR is much slower and can only process roughly 1 million simulation steps per day. For this reason, we were only able to evaluate DSR on the Basic scenario and were not able to perform extensive hyperparameter tuning. We report results for this technique after 10 days of training. (This time was sufficient to significantly exceed the number of training steps reported in the experiments of Kulkarni et al. (2016b), but not sufficient to approach the number of steps afforded by the other approaches.)

Table 1 reports the performance of the models after training. Each fully trained model was tested over 1 million simulation steps. The table reports average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for D3 and D4. We also report the average training speed for each approach, in millions of simulation steps per day of training. The performance of the different models is additionally illustrated in the supplementary video (http://bit.ly/2f9tacZ).

     | D1 (health) | D2 (health) | D3 (frags) | D4 (frags) | steps/day
DQN  | 89.1 ± 6.4  | 25.4 ± 7.8  | 1.2 ± 0.8  | 0.4 ± 0.2  | 7M
A3C  | 97.5 ± 0.1  | 59.3 ± 2.0  | 5.6 ± 0.2  | 6.7 ± 2.9  | 80M
DSR  | 4.6 ± 0.1   | –           | –          | –          | 1M
DFP  | 97.7 ± 0.4  | 84.1 ± 0.6  | 33.5 ± 0.4 | 16.5 ± 1.1 | 70M

Table 1: Comparison to prior work. We report average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for scenarios D3 and D4.

[Figure 3: four panels (D1: Basic and D2: Navigation, y-axis Health; D3: Battle and D4: Battle 2, y-axis Frags) plotting performance over millions of training steps for DFP, A3C, DQN, and DSR] Figure 3: Performance of different approaches during training. DQN, A3C, and DFP achieve similar performance in the Basic scenario. DFP outperforms the prior approaches in the other three scenarios, with a multiplicative gap in performance in the most complex ones (D3 and D4).

In the Basic scenario, DQN, A3C, and DFP all perform well. As reported in Table 1, the performance of A3C and DFP is virtually identical at 97.5%, while DQN reaches 89%. In the more complex Navigation scenario, a significant gap opens up between DQN and A3C; this is consistent with the experiments of Mnih et al. (2016). DFP achieves the best performance in this scenario, with a 25 percentage point advantage during testing. Note that in these first two scenarios, DFP was only given a single measurement per time step (health).

In the more complex Battle and Battle 2 scenarios (D3 and D4), DFP dominates the other approaches.
It outperforms A3C at test time by a factor of 6 in D3 and by a factor of 2.5 in D4. Note that the advantage of DFP is particularly significant in the scenarios that provide richer measurements: three measurements per time step in D3 and D4. The effect of multiple measurements is further evaluated in controlled experiments reported below.

Generalization across environments. We now evaluate how the behaviors learned by the presented approach generalize across different environments. To this end, we have created 100 randomly textured versions of the mazes from scenarios D3 and D4. We used 90 of these for training and 10 for testing, with disjoint sets of textures in the training and testing environments. We call these scenarios D3-tx and D4-tx.

Table 2 shows the performance of the approach for different combinations of training and testing regimes. For example, the entry in the D4-tx row of the D3 column shows the performance (in average number of frags at the end of an episode) of a model trained in D3 and tested in D4-tx. Not surprisingly, a model trained in the simple D3 environment does not learn sufficient invariance to surface appearance to generalize well to other environments. Training in the more complex multi-texture environment in D4 yields better generalization: the trained model performs well in D3 and exhibits non-trivial performance in D3-tx and D4-tx. Finally, exposing the model to significant variation in surface appearance in D3-tx or D4-tx during training yields very good generalization.

              Train
Test   | D3   | D4   | D3-tx | D4-tx | D4-tx-L
D3     | 33.6 | 17.8 | 29.8  | 20.9  | 22.0
D4     | 1.6  | 17.1 | 5.4   | 10.8  | 12.4
D3-tx  | 3.9  | 8.1  | 22.6  | 15.6  | 19.4
D4-tx  | 1.7  | 5.1  | 6.2   | 10.2  | 12.7

Table 2: Generalization across environments.

The last column of Table 2 additionally reports the performance of a higher-capacity model trained in D4-tx. This combination is referred to as D4-tx-L. As shown in the table, this model performs even better. The architecture is detailed in Appendix A.

Visual Doom AI Competition. To further evaluate the presented approach, we participated in the Visual Doom AI Competition, held during September 2016. The competition evaluated sensorimotor control models that act based on raw visual input. The competition had the form of a tournament: the submitted agents play multiple games against each other, their performance measured by aggregate frags. The competition included two tracks. The Limited Deathmatch track was held in a known environment that was given to the participants in advance at training time. The Full Deathmatch track evaluated generalization to previously unseen environments and took place in multiple new environments that were not available to the participating teams at training time. We only enrolled in the Full Deathmatch track. Our model was trained using a variant of the D4-tx-L regime.

Our model won, outperforming the second best submission by more than 50%. That submission, described by Lample & Chaplot (2016), constitutes a strong baseline. It is a deep recurrent Q-network that incorporates an LSTM and was trained using reward shaping and extra supervision from the game engine. Specifically, the authors took advantage of the ability provided by the ViZDoom platform to use the internal configuration of the game, including ground-truth knowledge of the presence of enemies in the field of view, during training. The authors' report shows that this additional supervision improved performance significantly.
Our model, which is simpler, achieved even higher performance without such additional supervision.

Goal-agnostic training. We now evaluate the ability of the presented approach to learn without a fixed goal at training time, and adapt to varying goals at test time. These experiments are performed in the Battle scenario. We use three training regimes: (a) fixed goal vector during training, (b) random goal vector with each value sampled uniformly from [0, 1] for every episode, and (c) random goal vector with each value sampled uniformly from [−1, 1] for every episode. More details are provided in Appendix A. Intuitively, in the second regime the agent is instructed to maximize the different measurements, but has no knowledge of their relative importance. The third regime makes no assumptions as to whether the measured quantities are desirable or not.

The results are shown in Table 3. Each group of columns corresponds to a training regime and each row to a different test-time goal. Goals are given by the weights of the three measurements (ammo, health, and frags) in the objective function. The first test-time goal in Table 3 is the goal vector used in the battle scenarios in the prior experiments, the second seeks to maximize the frag count, the third is a pacifist (maximize ammo and health, minimize frags), the fourth seeks to aimlessly drain ammunition, and the fifth aims to maximize health. For each row, each group of columns reports the average value of each of the three measurements at the end of an episode. Note that health level at the end of an episode can be negative if the agent suffered major damage in the pre-terminal step.

We draw two main conclusions. First, on the main task (first row), models trained without knowing the goal in advance (b, c) perform nearly as well as a dedicated model trained specifically for the eventual goal (a). Without knowing the eventual goal during training, the agent performs the task almost as well as when it was specifically trained for it. Second, all models generalize to new goals but not equally well. Models trained with a variety of goals (b, c) generalize much better than a model trained with a fixed goal.

               (a) fixed goal (0.5, 0.5, 1) | (b) random goals [0, 1] | (c) random goals [−1, 1]
test goal      | ammo | health | frags      | ammo | health | frags   | ammo | health | frags
(0.5, 0.5, 1)  | 83.4 | 97.0   | 33.6       | 92.3 | 96.9   | 31.5    | 49.3 | 94.3   | 28.9
(0, 0, 1)      | 0.3  | −3.7   | 11.5       | 4.3  | 30.0   | 20.6    | 21.8 | 70.9   | 24.6
(1, 1, −1)     | 28.6 | −2.0   | 0.0        | 22.1 | 4.4    | 0.2     | 89.4 | 83.6   | 0.0
(−1, 0, 0)     | 1.0  | −8.3   | 1.7        | 1.9  | −7.5   | 1.2     | 0.9  | −8.6   | 1.7
(0, 1, 0)      | 0.7  | 2.7    | 2.6        | 9.0  | 77.8   | 6.6     | 3.0  | 69.6   | 7.9

Table 3: Generalization across goals. Each group of three columns corresponds to a training regime, each row corresponds to a test-time goal. The results in the first row indicate that the approach performs well on the main task even without knowing the goal at training time. The results in the other rows indicate that goal-agnostic training supports generalization across goals at test time.

                                 frags
all measurements, all offsets  | 22.6
all measurements, one offset   | 17.2
frags only, all offsets        | 10.3
frags only, one offset         | 5.0

Table 4: Ablation study. Predicting all measurements at all temporal offsets yields the best results.

Ablation study. We now perform an ablation study using the D3-tx scenario. Specifically, we evaluate the importance of vectorial feedback versus a scalar reward, and the effect of predicting measurements at multiple temporal offsets. The results are summarized in Table 4.
The table reports the performance (in average frags at the end of an episode) of our full model (predicting three measurements at six temporal offsets) and of ablated variants that only predict frags (a scalar reward) and/or only predict at the farthest temporal offset. As the results demonstrate, predicting multiple measurements significantly improves the performance of the learned model, even when it is evaluated by only one of those measurements. Predicting measurements at multiple future times is also beneficial. This supports the intuition that a dense flow of multivariate measurements is a better training signal than a scalar reward.

5 DISCUSSION

We presented an approach to sensorimotor control in immersive environments. Our approach is simple and demonstrates that supervised learning techniques can be adapted to learning to act in complex and dynamic three-dimensional environments given raw sensory input and intrinsic measurements. The model trains on raw experience, by interacting with the environment without extraneous supervision. Natural supervision is provided by the cotemporal structure of the sensory and measurement streams. Our experiments have demonstrated that this simple approach outperforms sophisticated deep reinforcement learning formulations on challenging tasks in immersive environments. Experiments have further demonstrated that the use of multivariate measurements provides a significant advantage over conventional scalar rewards and that the trained model can effectively pursue new goals not specified during training.

The presented work can be extended in multiple ways that are important for broadening the range of behaviors that can be learned. First, the presented model is purely reactive: it acts based on the current frame only, with no explicit facilities for memory and no test-time retention of internal representations. Recent work has explored memory-based models (Oh et al., 2016), and integrating such ideas with the presented approach may yield substantial advances. Second, significant progress in behavioral sophistication will likely require temporal abstraction and hierarchical organization of learned skills (Barto & Mahadevan, 2003; Kulkarni et al., 2016a). Third, the presented model was developed for discrete action spaces; applying the presented ideas to continuous actions would be interesting (Lillicrap et al., 2016). Finally, predicting features learned directly from rich sensory input can blur the distinction between sensory and measurement streams (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). | BkBdc_ZEl | Review | 8: Top 50% of accepted papers, clear accept | The paper presents an on-policy method to predict future intrinsic measurements. All the experiments are performed in the game of Doom (vizDoom to be exact), and instead of just predicting win/loss or the number of frags (score), the authors trained their model to predict (a sequence of) triplets of (health, ammunition, frags), weighted by (a sequence of) "goal" triplets that they provided as input. Changing the weights of the goal triplet is a way to perform/guide exploration. At test time, one can act by maximizing the long-term goal only.
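As a small illustration of the prediction targets and goal weighting the review describes, consider the sketch below. The array layout and helper names are assumptions for illustration; the temporal offsets (1 to 32 steps) and the objective coefficients (0.5, 0.5, 1) over the last three offsets are taken from the paper:

```python
import numpy as np

OFFSETS = (1, 2, 4, 8, 16, 32)             # temporal offsets from the paper
TEMPORAL_WEIGHTS = (0, 0, 0, 0.5, 0.5, 1)  # only the last three offsets count

def future_targets(measurements, t):
    # measurements: array of shape (T, 3) holding (ammo, health, frags);
    # the target is the concatenated differences m_{t+tau} - m_t.
    return np.concatenate([measurements[t + tau] - measurements[t]
                           for tau in OFFSETS])

def objective(target, goal_triplet=(0.5, 0.5, 1.0)):
    # Tile the goal triplet across offsets, scaled by the temporal weights.
    weights = np.concatenate([w * np.asarray(goal_triplet)
                              for w in TEMPORAL_WEIGHTS])
    return float(weights @ target)
```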
The results are impressive, as this model won the 2016 vizDoom competition. The experimental section of the paper seems sound:
- There are comparisons of DFP with A3C, DQN, and an attempt to compare with DSR (a recent similar approach from Kulkarni et al., 2016). DFP outperforms other approaches (or equals them when they reach a ceiling/optimum, as for A3C in scenario D1).
- There is an ablation study that supports the thesis that all the "added complexity" of the paper's model is useful.
Predicting intrinsic motivation (Singh et al., 2004), auxiliary variables, and forward modelling are well-studied domains of reinforcement learning. The version that I read (December 4th revision) adequately references prior work, even if it is not completely exhaustive.
A few comments (nitpicks) on the form:
- Doom is described as a 3D environment, whereas it is actually a 2D environment (the height is not a discriminative/actionable dimension) presented in (fake) 3D.
- The use of "P" in (2) (and subsequently) may be misleading as it stands for prediction but not probability (as is normally the case for P).
- The double use of "j" (admittedly, with different fonts) in (6) may be misleading.
- Results tables could repeat the units of the measurements (in particular as they are heterogenous in Table 1).
I think that this paper is a clear accept. One could argue that experiments could be conducted on different environments or that the novelty is limited, but I feel that "correct" (no-nonsense, experimentally sound on Doom, appendix providing details for reproducibility) and "milestone" (vizDoom winner) papers should get published. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJ6DhP5xe | ICLR.cc/2017/conference | 2017 | Generalizable Features From Unsupervised Learning | ["Mehdi Mirza", "Aaron Courville", "Yoshua Bengio"] | Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution | ["Unsupervised Learning", "Deep learning"] | ABSTRACT

Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution.

1 INTRODUCTION

Humans learn a tremendous amount of knowledge about the world with almost no supervision and can construct a predictive model of the world. We use this model of the world to interact with our environment. As also argued by Lake et al. (2016), one of the core ingredients of human intelligence is intuitive physics. Children can learn and predict some of the common physical behaviors of our world just by observing and interacting without any direct supervision. And they form a sophisticated predictive model of the physical environment and expect the world to behave based on their mental model, with reasonable expectations about unseen situations (Téglás et al., 2011).

Despite impressive progress in the last few years in the training of supervised models, we have not yet quite been able to achieve similar results in unsupervised learning, and it remains one of the challenging research areas in the field. The full potential of the application of unsupervised learning is yet to be realized.

In this work, we leverage unsupervised learning to train a predictive model over sequences. We use the imagined and predicted future sequence data to help a physical environment prediction model generalize better to unseen settings.

More specifically, we focus on the task of predicting if a tower of square bricks will fall or not, as introduced by Lerer et al. (2016). They showed that a deep convolutional neural network could predict the fall of the towers with super-human accuracy. But despite the strengths of convolutional neural networks, Zhang et al.
(2016) show how deep neural networks have a hard time generalizing to novel situations in the same way as humans or simulation-based models can. In this work, we show that deep neural networks are capable of generalizing to novel situations through a form of unsupervised learning. The core idea is to observe the world without any supervision and build a future predictive model of it, and in a later stage leverage and utilize the imagined future to train a better fall prediction model.

2 RELATED WORK

In the beginning, unsupervised learning and generative models emerged as pre-training methods (Hinton & Salakhutdinov, 2006; Hinton et al., 2006; Bengio et al., 2007) to help other tasks such as supervised learning. But since Krizhevsky et al. (2012), many other regularization (Srivastava et al., 2014), weight initialization (Glorot & Bengio, 2010) and normalization (Ioffe & Szegedy, 2015) techniques and architecture designs (He et al., 2015) have been introduced that diminish the effect of pre-training. Although pre-training could still be useful in data-scarce domains, there are many other settings and applications in which unsupervised learning remains very interesting, and it is a very active area of research. To name just a few applications: semi-supervised learning (Kingma et al., 2014; Salimans et al., 2016; Dumoulin et al., 2016) and super-resolution (Sønderby et al., 2016).

Video generation is one active area of research with many applications, and many of the recent works have been using some of the state-of-the-art neural networks for video generation. Srivastava et al. (2015) use LSTM recurrent neural networks to train an unsupervised future predictive model for video generation, and here we use a very similar architecture, as described in Section 4.1. Mathieu et al. (2015) combine the common mean-squared-error objective function with an adversarial training cost in order to generate sharper samples. Lotter et al. (2016) introduce another form of unsupervised video prediction training scheme that manages to predict future events such as the direction of the turn of a car, which could have potential use in the training of self-driving cars.

Model-based reinforcement learning (RL) is an active research area that holds the promise of making RL agents less data hungry. Learning agents could explore, learn in an unsupervised way about their world, and learn even more by dreaming about future states. We believe that action-conditional video prediction models are an important ingredient for this task. Fragkiadaki et al. (2015) learn the dynamics of billiard balls by supervised training of a neural net. Action-conditioned video prediction models have been applied to an Atari playing agent (Oh et al., 2015) as well as robotics (Finn et al., 2016; Finn & Levine, 2016).

3 DATASET

Recent datasets for predicting the stability of block configurations (Lerer et al., 2016; Zhang et al., 2016) only provide binary labels of stability, and exclude the video simulation of the block configuration. We, therefore, construct a new dataset, with a similar setup as Lerer et al. (2016); Zhang et al. (2016), that includes this video sequence. We use a JavaScript-based physics engine¹ to generate the data.

We construct towers made of 3–5 square blocks. To sample a random tower configuration, we uniformly shift each block in its x and y position such that it touches the block below. Because taller towers are more unstable, this shift is smaller when we add more blocks.
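A toy sketch of this sampling scheme follows; the exact shrinking schedule of the shift range is an assumption, and all names are hypothetical:

```python
import random

def sample_tower(num_blocks, block_size=1.0):
    """Sample (x, y) offsets for a tower: each block is shifted uniformly
    in x and y while still resting on the block below."""
    max_shift = block_size / num_blocks  # taller towers get smaller shifts
    blocks = [(0.0, 0.0)]                # (x, y) offset of the base block
    for _ in range(num_blocks - 1):
        x, y = blocks[-1]
        blocks.append((x + random.uniform(-max_shift, max_shift),
                       y + random.uniform(-max_shift, max_shift)))
    return blocks
```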
To simplify our learning setting, we balance the number of stable and unstable block configurations. For each tower height, we create 8000, 1000 and 3000 video clips for the training, validation, and test set, respectively. The video clips are sub-sampled in time to include more noticeable changes in the block configurations. We decided to keep 39 frames, which with our sub-sampling rate was enough time for unstable towers to collapse. Each video frame is an RGB image of size 64x64. In addition to the binary stability label, we include the number of blocks that fell down.

4 ARCHITECTURE

The core idea of this paper is to use future state predictions of a generative video model to enhance the performance of a supervised prediction model. Our architecture consists of two separate modules:

Frame predictor A generative model to predict future frames of a video sequence. This model is trained to either generate the last frame or the complete sequence of frames.

Stability predictor In the original task, stability is predicted from a static image of a block configuration. We explore whether, in addition to the initial configuration, the last frame prediction of our unsupervised model improves the performance of the stability prediction.

In the following sections, we explore several different architectures for both modules.

¹ https://chandlerprall.github.io/Physijs/

4.1 FUTURE FRAME PREDICTION

We consider two different model architectures for this task. The first one, named ConvDeconv, only takes the first frame as input and predicts the last frame of the video sequence. The architecture consists of a block of convolution and max-pooling layers. To compensate for the dimensionality reduction of the max-pooling layers, we have a fully-connected layer following the last max-pooling layer, and finally a subsequent block of deconvolution layers with the same output size as the model input size. All activation functions are ReLU (Nair & Hinton, 2010). See Table 1 for more details of the architecture. The objective function is the mean squared error between the generated last frame and the ground-truth frame; as a result, this training will not require any labels. We also experimented with an additional adversarial cost as in Mathieu et al. (2015) but did not observe any improvement for the stability prediction task. We hypothesize that although the adversarial objective function helps to have sharper images, such improved sample quality does not transfer to better stability prediction. Figure 1 shows a few examples of the generated data on the test set. Mean squared error is minimized using the AdaM optimizer (Kingma & Ba, 2014) and we use early stopping when the validation loss does not improve for 100 epochs.

We extend this ConvDeconv model in a second architecture, named ConvLSTMDeconv, to predict the next frame at each timestep. This model is composed of an LSTM architecture. The same convolutional and deconvolutional blocks as in ConvDeconv are utilized to, respectively, feed the current frame into the LSTM transition and decode the next frame from the current LSTM state. The details of the ConvLSTMDeconv model architecture are shown in Table 2, and Figure 3 shows the diagram of both architectures. During training, at each time step the ground-truth data is fed into the model, but during test time only the initial time step gets the first frame from the data, and for subsequent time steps the generated frames from the previous time steps are fed back into the model (a sketch of this rollout is given below).
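A minimal sketch of this test-time generation loop, assuming a hypothetical `model.step(frame, state)` interface that returns the next predicted frame and the updated LSTM state:

```python
def generate_sequence(model, first_frame, num_steps):
    # Only the first frame comes from data; every later step consumes
    # the model's own previous output (autoregressive rollout).
    frames, state, frame = [], None, first_frame
    for _ in range(num_steps):
        frame, state = model.step(frame, state)
        frames.append(frame)
    return frames
```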
This is a similar setup to recurrent neural network language models (Mikolov, 2012), and it is necessary because during test time we only have access to the first frame. As before, the model is trained to predict the next frame at each time step by minimizing the predictive mean-squared error, using the AdaM optimizer and early stopping. For training, we further subsample in the time dimension and reduce the sequence length to 5 time steps. Figure 2 shows some sample generated sequences from the test set.

Table 1: ConvDeconv model architecture. FC stands for "Fully Connected".

Layer | Type    | Output channels/dimensions | Kernel/pool size
1     | Conv    | 64                         | 3×3
2     | MaxPool | 64                         | 4×4
3     | Conv    | 128                        | 3×3
4     | MaxPool | 64                         | 3×3
5     | Conv    | 64                         | 3×3
6     | MaxPool | 64                         | 3×3
7     | FC      | 64·64·16 = 65536           |
8     | DeConv  | 64                         | 3×3
9     | DeConv  | 128                        | 3×3
10    | DeConv  | 64                         | 3×3
11    | DeConv  | 3                          | 3×3

Table 2: ConvLSTMDeconv model architecture. FC stands for "Fully Connected".

Layer | Type    | Output channels/dimensions | Kernel/pool size
1     | Conv    | 64                         | 3×3
2     | MaxPool | 64                         | 4×4
3     | Conv    | 128                        | 3×3
4     | MaxPool | 64                         | 3×3
5     | Conv    | 64                         | 3×3
6     | MaxPool | 64                         | 3×3
7     | FC LSTM | 2000                       |
8     | FC      | 64·64·3                    |
9     | DeConv  | 64                         | 3×3
10    | DeConv  | 64                         | 3×3
11    | DeConv  | 3                          | 3×3

Figure 1: Samples from the ConvDeconv model. The first and second rows show the first and last frame, respectively, from the test data. The third row shows generated last-frame samples.

4.2 STABILITY PREDICTION

We have two supervised models for stability prediction. The first one is a baseline that takes as input the first frame and predicts the fall of the tower. For this model we use the 50-layer ResNet architecture from He et al. (2016). We trained the baseline model on each of the different tower heights 3, 4, 5. We call it the single model and name the experiments 3S, 4S, 5S, respectively, for the number of blocks it was trained on. The second model is the one using the generated data: it takes as input the first frame and the generated last frame. It consists of two 50-layer ResNet blocks in parallel, one for the first frame and one for the last frame, and the last hidden layers of both models are concatenated together before a logistic regression layer (or Softmax in the case of non-binary labels). Both ResNet blocks share parameters. Based on whether the generated data is coming from the ConvDeconv model or the ConvLSTMDeconv model, we label the experiments as 3CD, 4CD, 5CD and 3CLD, 4CLD, 5CLD respectively.

Figure 2: Samples from the ConvLSTMDeconv model. Each row is a different sample. The left sequence is the data and the right sequence is the generated data. Note that during generation the model only sees the first frame and for subsequent time steps uses its own output from the last timestep.

None of the models are pre-trained and all the weights are randomly initialized. As in Section 4.1, we use AdaM, and we stopped the training when the validation accuracy had not improved for 100 epochs. All images are contrast-normalized independently, and we augment our training set using random horizontal flips of the images and by randomly changing the contrast and brightness.

Figure 3: Different model architectures. The first two on the left are ConvDeconv and ConvLSTMDeconv, described in Section 4.1. The two on the right are the models used for supervised fall prediction, described in Section 4.2. The single-frame predictor is the baseline model, and the double-frame predictor is the model that uses the generated data.

5 RESULTS

Figure 4 shows the classification results for each of the 9 models described in Section 4.2 tested on 3, 4 and 5 blocks. Each test case is shown with a different color.
Table 3 shows the numerical values for all 27 test cases. In almost all cases the generated data improves the generalization performance to test cases with a different number of blocks than the model was trained on. For comparison, we have included results from Zhang et al. (2016) in Table 4. Since Zhang et al. (2016) only report results when the models are trained on towers of 4 blocks, the corresponding results would be the second block of rows in Table 3, models 4S, 4CD and 4CLD. Even though the datasets are not the same, it can be observed that the range of performance of the baseline 4S model is consistent with the range of performance of the AlexNet model in Table 4. It can be seen that the results of the 4CD model are significantly better than both the IPE and human performance reported in Zhang et al. (2016), while the baselines have similar performance.

One observation is that the improvements are more significant when the model is tested on scenarios with more bricks than seen during training. It also improves the reverse case, i.e. fewer bricks than during training, but the improvement is not as significant. It is worth mentioning that testing on a lower number of bricks is a much harder problem, as pointed out in Zhang et al. (2016) too. In their case, the prediction performance was almost random when going from 4 blocks to 3 blocks, which is not the case in our experiments². One possible explanation for the performance loss is that a balanced tower with fewer blocks corresponds to an unstable configuration for a tower with more blocks, e.g. a tower with 3 blocks is classified as unstable by a prediction model trained on towers of 5 blocks. One solution could be to train these models to predict how many blocks have fallen instead of a binary stability label. Because we have access to this data in our dataset, we explored the same experiments using these labels. Unfortunately, we did not observe any significant improvement. The main reason could be that the distribution of the number of fallen blocks is extremely unbalanced. It is hard to collect data with a balanced number of fallen blocks because some configurations are very unlikely, e.g. a tower of 5 blocks where only two blocks fall (the majority of the time the whole tower collapses).

Another observation is that models that use ConvDeconv-generated data performed slightly better than those that use ConvLSTMDeconv. As seen in Figure 2, the samples in the ConvLSTMDeconv case are noisier and less sharp than those in Figure 1. This could be because, after the first time step, the model's output from the last time step is used as input for the next time step, so the samples degenerate the longer the sequence is.

Data augmentation was crucial to increase the generalization performance of the stability prediction, e.g. the 5CD model tested on 4 bricks achieved only 50% accuracy without data augmentation while reaching 74.5% accuracy with data augmentation. This significant improvement from data augmentation could be partly because our dataset was relatively small.

Figure 4: Accuracy in percentage for each of the 9 models tested on test sets with a different number of blocks. Each color represents the number of blocks that the model was tested on. 50% is chance.

² We are not using the same dataset as Zhang et al.
5 RESULTS

Figure 4 shows the classification results for each of the 9 models described in Section 4.2, tested on 3, 4 and 5 blocks; each test case is shown in a different color. Table 3 gives the numerical values for all 27 test cases. In almost all cases the generated data improves generalization to test cases with a different number of blocks than the model was trained on. For comparison, we include results from Zhang et al. (2016) in Table 4. Since Zhang et al. (2016) only report results for models trained on towers of 4 blocks, the corresponding results are the second block of rows in Table 3: models 4S, 4CD and 4CLD. Even though the datasets are not the same, the performance range of the baseline 4S model is consistent with that of the AlexNet model in Table 4, and the results of the 4CD model are significantly better than both the IPE and human performance reported in Zhang et al. (2016), while the baselines perform similarly.

One observation is that the improvements are more significant when testing on scenarios with more bricks than seen during training. The reverse case, i.e. fewer bricks than during training, also improves, but not as significantly. It is worth mentioning that testing on a lower number of bricks is a much harder problem, as also pointed out in Zhang et al. (2016). In their case, prediction performance was almost random when going from 4 blocks to 3 blocks, which is not the case in our experiments.² One possible explanation for the performance loss is that a balanced tower with fewer blocks corresponds to an unstable configuration for a tower with more blocks, e.g. a tower with 3 blocks is classified as unstable by a prediction model trained on towers of 5 blocks. One solution could be to train these models to predict how many blocks have fallen instead of a binary stability label. Because we have access to this information in our dataset, we ran the same experiments using these labels. Unfortunately, we did not observe any significant improvement. The main reason could be that the distribution of the number of fallen blocks is extremely unbalanced: it is hard to collect data with a balanced number of fallen blocks because some configurations are very unlikely, e.g. a tower of 5 blocks in which only two blocks fall (most of the time the whole tower collapses).

Another observation is that models using ConvDeconv-generated data performed slightly better than those using ConvLSTMDeconv. As seen in Figure 2, the samples in the ConvLSTMDeconv case are noisier and less sharp than those in Figure 1. This could be because, after the first time step, the model's output from the previous time step is used as input for the next time step, so the samples degenerate the longer the sequence is.

Data augmentation was crucial to increasing the generalization performance of the stability prediction: e.g. the 5CD model tested on 4 bricks achieved only 50% accuracy without data augmentation, while reaching 74.5% with data augmentation. This significant improvement from data augmentation could be partly because our dataset was relatively small.

Figure 4: Accuracy in percentage for each of the 9 models tested on test sets with a different number of blocks. Each color represents the number of blocks that the model was tested on. 50% is chance.

² We are not using the same dataset as Zhang et al. (2016) and hence direct comparison is not possible.

Table 3: The results from our experiments.

Model | Train set | Test set | Accuracy
3S    | 3 | 3 | 91.87 %
3S    | 3 | 4 | 66.1 %
3S    | 3 | 5 | 63.7 %
3CD   | 3 | 3 | 95.5 %
3CD   | 3 | 4 | 92.63 %
3CD   | 3 | 5 | 89 %
3CLD  | 3 | 3 | 93.3 %
3CLD  | 3 | 4 | 90.33 %
3CLD  | 3 | 5 | 84.30 %
4S    | 4 | 3 | 52.5 %
4S    | 4 | 4 | 87 %
4S    | 4 | 5 | 75.53 %
4CD   | 4 | 3 | 80.53 %
4CD   | 4 | 4 | 92.5 %
4CD   | 4 | 5 | 89.1 %
4CLD  | 4 | 3 | 65.53 %
4CLD  | 4 | 4 | 91.20 %
4CLD  | 4 | 5 | 84.20 %
5S    | 5 | 3 | 59.26 %
5S    | 5 | 4 | 67.23 %
5S    | 5 | 5 | 86.50 %
5CD   | 5 | 3 | 58.27 %
5CD   | 5 | 4 | 74.50 %
5CD   | 5 | 5 | 88.53 %
5CLD  | 5 | 3 | 58.90 %
5CLD  | 5 | 4 | 74.50 %
5CLD  | 5 | 5 | 88.53 %

Table 4: The results reported in Zhang et al. (2016). We emphasize that these results are on a different dataset.

Model   | Train set | Test set | Accuracy
AlexNet | 4   | 3 | 51 %
AlexNet | 4   | 4 | 95 %
AlexNet | 4   | 5 | 78.5 %
IPE     | N/A | 3 | 72 %
IPE     | N/A | 4 | 64 %
IPE     | N/A | 5 | 56 %
Human   | N/A | 3 | 76.5 %
Human   | N/A | 4 | 68.5 %
Human   | N/A | 5 | 59 %

6 CONCLUSION

In this paper, we showed that data generated from an unsupervised model can help a supervised learner generalize to unseen scenarios. We argue that this ability of transfer learning and generalization by observing the world could be one of the ingredients for constructing a model of the world with applications in many tasks, such as model-based RL. We aim to extend this work in the future by looking at videos of robots manipulating objects and predicting their failure beforehand, which could help an RL agent explore more intelligently.

ACKNOWLEDGMENTS

We would like to thank Harm de Vries and Laurent Dinh for their help and feedback in writing the paper, and also thank Adam Lerer and Jiajun Wu for sharing their dataset. We thank NSERC, CIFAR, IBM, Canada Research Chairs, Google and Samsung for funding. | ryyOs5xVg | This work seems rather preliminary in terms of experimentation and using forward modeling as pretraining has already been proposed and applied to video and text classification tasks. Discussion on related work is insufficient. The end task choice (will there be motion?) might not be the best to advocate for unsupervised training. | 3: Clear rejection | *** Paper Summary ***
The paper proposes to learn a predictive model (aka predict the next video frames given an input image) and uses the prediction from this model to improve a supervised classifier. The effectiveness of the approach is illustrated on a tower stability dataset.
*** Review Summary ***
This work seems rather preliminary in terms of experimentation and using forward modeling as pretraining has already been proposed and applied to video and text classification tasks. Discussion on related work is insufficient. The end task choice (will there be motion?) might not be the best to advocate for unsupervised training.
*** Detailed Review ***
This work seems rather preliminary. There is no comparison with alternative semi-supervised strategies. Any approach that considers the next frames as latent variables (or privileged information) can be considered. Also, I am not sure if the supervised stability prediction model is actually needed once the next frame is predicted. Basically, the task can be reduced to predicting whether there will be motion in the video following the current frame or not (for instance, comparing the first frame and the last prediction, or the density of gray in the top part of the video, might work just as well). Also, training a model to predict the presence of motion from the unsupervised data only would probably do very well. I would suggest steering away from tasks where the label can be inferred trivially from the unsupervised data, meaning that unlabeled videos can be considered labeled frames in that case.
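For concreteness, a minimal sketch of the trivial motion-based rule the reviewer suggests; the use of plain frame differencing and the threshold value are purely illustrative assumptions.

```python
import numpy as np

def predicted_to_fall(first_frame, predicted_last_frame, threshold=0.05):
    # frames as float arrays in [0, 1]; a large mean pixel change implies motion
    return np.abs(predicted_last_frame - first_frame).mean() > threshold
```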
The related work section misses a discussion of previous work on learning unsupervised features from video (through predictive models, dimensionality reduction...) for helping classification of still images or videos [Fathi et al 2008; Mobahi et al 2009; Srivastava et al 2015]. More recently, Wang and Gupta (2015) have obtained excellent ImageNet results from features pre-trained on unlabeled videos. Vondrick et al (2016) have shown that generative models of video can help initialize models for video classification tasks. Also, in the field of text classification, pre-training of a classifier with a language model is a form of predictive modeling, e.g. Dai & Le 2015.
I would also suggest reporting test results on the dataset from Lerer et al 2016 (I understand that you need your own videos to pre-train the predictive model), but stability prediction only requires still images.
Overall, I feel the experimental section is too preliminary. It would be better to focus on a task where solving the unsupervised task does not necessarily imply that the supervised task is trivially solved (or conversely, where a simple rule cannot turn the unlabeled data into labeled data).
*** Reference ***
Fathi, Alireza, and Greg Mori. "Action recognition by learning mid-level motion features." Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008.
Mobahi, Hossein, Ronan Collobert, and Jason Weston. "Deep learning from temporal coherence in video." Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
Srivastava, Nitish, Elman Mansimov, and Ruslan Salakhutdinov. "Unsupervised learning of video representations using lstms." CoRR, abs/1502.04681 2 (2015).
Dai, Andrew M., and Quoc V. Le. "Semi-supervised sequence learning." NIPS, 2015.
Wang, Xiaolong, and Abhinav Gupta. "Unsupervised learning of visual representations using videos." ICCV, 2015.
Vondrick, Carl, Hamed Pirsiavash, and Antonio Torralba. "Generating videos with scene dynamics." NIPS, 2016.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJ6DhP5xe | ICLR.cc/2017/conference | 2017 | Generalizable Features From Unsupervised Learning | ["Mehdi Mirza", "Aaron Courville", "Yoshua Bengio"] | Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition(Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to blocks configurations outside the training set distribution | ["Unsupervised Learning", "Deep learning"] | rJLj58VNx | Good work, though more detailed analysis would be helpful | 5: Marginally below acceptance threshold | Summary
===
This paper trains models to predict whether block towers will fall down
or not. It shows that an additional model of how blocks fall down
(predicting a sequence of frames via unsupervised learning) helps the original
supervised task to generalize better.
This work constructs a synthetic dataset of block towers containing
3 to 5 blocks placed in more or less precarious positions. It includes both
labels (the tower falls or not) and video frame sequences of the tower's
evolution according to a physics engine.
Three kinds of models are trained. The first (S) simply takes an image of a
tower's starting state and predicts whether it will fall or not. The
other two types (CD and CLD) take both the start state and the final state of the
tower (after it has or has not fallen) and predict whether it has fallen or not;
they only differ in how the final state is provided. One model (ConvDeconv, CD)
predicts the final frame using only the start frame and the other
(ConvLSTMDeconv) predicts a series of intermediate frames before coming
to the final frame. Both CD and CLD are unsupervised.
Each model is trained on towers of a particular height and tested on
towers with an unseen height. When the height of the train towers
is the same as the test tower height, all models perform roughly the same
(within a few percentage points). However, when the test height is
greater than the train height it is extremely helpful to explicitly
model the final state of the block tower before deciding whether it has
fallen or not (via CD and CLD models).
Pros
===
* There are very clear (large) gains in accuracy from adding an unsupervised
final frame predictor. Because the generalization problem is also particularly
clear (train and test with different numbers of blocks), this makes for
a very nice toy example where unsupervised learning provides a clear benefit.
* The writing is clear.
Cons
===
My one major concern is a lack of more detailed analysis. The paper
establishes a base result, but does not explore the idea to the extent
to which I think an ICLR paper should. Two general directions for potential
analysis follow:
* Is this a limitation of the particular way the block towers are rendered?
The LSTM model could be limited by the sub-sampling strategy. It looks
like the sampling may be too coarse from the provided examples. For the
two towers in figure 2 that fall, they have fallen after only 1 or 2
time steps. How quickly do most towers fall? What happens if the LSTM
is trained at a higher frame rate? What is the frame-by-frame video
prediction accuracy of the LSTM? (Is that quantity meaningful?)
How much does performance improve if the LSTM is provided ground truth
for only the first k frames?
* Why is generalization to different block heights limited?
Is it limited by model capacity or architecture design?
What would happen if the S-type models were made wider/deeper with the CD/CLD
fall predictor capacity fixed?
Is it limited by the precise task specification?
What would happen if networks were trained with towers of multiple heights
(apparently this experiment is in the works)?
I appreciate that one experiment in this direction was provided.
Is it limited by training procedure? What if the CD/CLD models were trained
in an end-to-end manner? What if the double frame fall predictor were trained
with ground truth final frames instead of generated final frames?
Minor concerns:
* It may be asking too much to re-implement Zhang et al. 2016 and PhysNet
for the newly proposed dataset, but it would help the paper to have baselines
which are directly comparable to the proposed results. I do not think this
is a major concern because the point of the paper is about the role of
unsupervised learning rather than creating the best fall prediction network.
* The auxiliary experiment provided is motivated as follows:
"One solution could be to train these models to predict how many blocks have
fallen instead of a binary stability label."
Is there a clear intuition for why this might make the task easier?
* Will the dataset, or code to generate it, be released?
Overall Evaluation
===
The writing, presentation, and experiments are clear and of high enough
quality for ICLR. However, the experiments provide limited analysis past
the main result (see comments above). The idea is a clear extension of ideas behind unsupervised
learning (video prediction) and recent results in intuitive physics from
Lerer et al. 2016 and Zhang et al. 2016, so there is only moderate novelty.
However, these results would provide a valuable addition to the literature,
especially if more analysis were provided.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJ6DhP5xe | ICLR.cc/2017/conference | 2017 | Generalizable Features From Unsupervised Learning | ["Mehdi Mirza", "Aaron Courville", "Yoshua Bengio"] | Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition(Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to blocks configurations outside the training set distribution | ["Unsupervised Learning", "Deep learning"] | SkG-C_NNl | Good preliminary work, more controls and detailed analysis are needed. | 5: Marginally below acceptance threshold | Paper Summary
This paper evaluates the ability of two unsupervised learning models to learn a
generalizable physical intuition governing the stability of a tower of blocks.
The two models are (1) A model that predicts the final state of the tower given
the initial state, and (2) A model that predicts the sequence of states of this
tower over time given the initial state. Generalizability is evaluated by
training a model on towers made of a certain number of blocks but testing on
towers made of a different number of blocks.
Strengths
- This paper explores an interesting way to evaluate representations in terms of
their generalizability to out-of-domain data, as opposed to more standard
methods which use train and test data drawn from the same distribution.
- Experiments show that the predictions of deep unsupervised learning models on
such out-of-domain data do seem to help, even though the models were not
trained explicitly to help in this way.
Weaknesses
- Based on Fig 4, it seems that the models trained on 3 blocks (3CD, 3CLD)
``generalize" to 4 and 5 blocks. However, it is plausible that these models
only pay attention to the bottom 3 blocks of the 4 or 5 block towers in order to
determine their stability. This would work correctly a significant fraction of
the time. Therefore, the models might actually be overfitting to 3 block towers
and not really generalizing the physics of these blocks. Is this a possibility?
I think more careful controls are needed to make the claim that the features
actually generalize. For example, test the 3 block model on a 5 block test set
but only make the 4th or 5th block unstable. If the model still works well, then
we could argue that it is actually generalizing.
- The experimental analysis seems somewhat preliminary and can be improved. In
particular, it would help to see visualizations of what the final state looks
like for models trained on 3 blocks but tested on 5 (and vice versa). That would
help understand if the generalization is really working. The discriminative
objective gives some indication of this, but might obfuscate some aspects of
physical realism that we would really want to test. In Figures 1 and 2, it is
not mentioned whether these models are being tested on the same number of blocks
they were trained for.
- It seems that the task of predicting the final state is really a binary
task - whether or not to remove the blocks and replace them with gray
background. The places where the blocks land in case of a fall are probably quite
hard to predict, even for a human, because small perturbations can have a big
impact on the final state. It seems that in order to get a generalizable
physics model, it could help to have a high frame rate sequence prediction task.
Currently, the video is subsampled to only 5 time steps.
Quality
A more detailed analysis and careful choices of testing conditions can increase
the quality of this paper and strengthen the conclusions that can be drawn from
this work.
Clarity
The paper is well written and easy to follow.
Originality
The particular setting explored in this paper is novel.
Significance
This paper provides a valuable addition to the growing work on
transferability/generalizability as an evaluation method for unsupervised
learning. However, more detailed experiments and analysis are needed to make
this paper significant enough for an ICLR paper.
Minor comments and suggestions
- The acronym IPE is used without mentioning its expansion anywhere in the text.
- There seems to be a strong dependence on data augmentation. But given that
this is a synthetic dataset, it is not clear why more data was not generated
in the first place.
- Table 3: It might be better to draw this as a 9 x 3 grid: 9 rows corresponding to the
models and 3 columns corresponding to the test sets. Mentioning the train set is
redundant since it is already captured in the model name. That might make it
easier to read.
Overall
This is an excellent direction to work in, and the preliminary results look great.
However, more controls and detailed analysis are needed to make strong
conclusions from these experiments. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
S1Jhfftgx | ICLR.cc/2017/conference | 2017 | Enforcing constraints on outputs with unconstrained inference | ["Jay Yoon Lee", "Michael L. Wick", "Jean-Baptiste Tristan"] | Increasingly, practitioners apply neural networks to complex
problems in natural language processing (NLP), such as syntactic
parsing, that have rich output structures. Many such applications
require deterministic constraints on the output values; for example,
requiring that the sequential outputs encode a valid tree. While
hidden units might capture such properties, the network is not
always able to learn them from the training data alone, and
practitioners must then resort to post-processing. In this paper, we
present an inference method for neural networks that enforces
deterministic constraints on outputs without performing
post-processing or expensive discrete search over the feasible
space. Instead, for each input, we nudge the continuous weights
until the network's unconstrained inference procedure generates an
output that satisfies the constraints. We find that our method
reduces the number of violating outputs by up to 81\%, while
improving accuracy. | ["Natural language processing", "Structured prediction", "Deep learning"] | ABSTRACT

Increasingly, practitioners apply neural networks to complex problems in natural language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 81%, while improving accuracy.

1 INTRODUCTION

Many neural networks have discrete-valued output units that correspond to an inference or prediction about an input. Often, a problem might involve multiple discrete outputs. Unlike multiclass classification, which associates a single discrete output with each input, so-called structured prediction problems associate multiple outputs with each input. For example, in multi-label classification, instead of predicting a single relevant class pertaining to the image or sentence, we must predict all relevant classes: the image contains a dog, a tree, and a sky. In sequence prediction problems, the discrete outputs might be a sequence of words or symbols that must form a coherent translation of a source-language sentence (Cho et al., 2014; Sutskever et al., 2014), description of an image (Vinyals et al., 2015b), answer to a question (Kumar et al., 2016), or a parse tree for an input sentence (Vinyals et al., 2015a). Crucially, in structured prediction, the output values are interdependent. Even though neural networks usually predict outputs independently or sequentially (one output at a time), the hidden units allow them to successfully capture many dependencies.

Sometimes, the outputs must obey hard constraints. For example, in sequence labeling with BILOU encoding, a 'begin' marker B cannot immediately follow an 'inside' marker I (Ratinov & Roth, 2009). In clustering, pairwise binary decisions must obey transitivity so that they yield a valid equivalence-class relation over the data points (McCallum & Wellner, 2005; Wick et al., 2006; 2008). In syntactic/dependency parsing, the output sequence must encode a valid parse tree (McDonald & Pereira, 2006; Vinyals et al., 2015a; Dyer et al., 2016). In formal language generation or neural compilers, the output must belong to a context-free language or compile (Reed & de Freitas, 2016). In dual decomposition approaches to joint inference, copies of variables must satisfy equality constraints (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Finally, in some ensemble methods, the outputs of multiple conditionally independent classifiers must reach a consensus on the output class. Indeed, there are a tremendous number of problems that require hard constraints on the outputs. Unlike softer dependencies, violating a hard constraint is often unacceptable because the output of the network would not "type-check", causing problems for downstream components. Unfortunately, in practice, networks are not always able to exactly learn constraints from the training data alone.
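As a toy illustration of such a hard output constraint, the following sketch checks the single BILOU rule quoted above; full BILOU encoding has additional transition rules not modeled here, and the function is our own simplification, not code from the paper.

```python
def violates_bilou(tags):
    # only the single rule mentioned above: 'B' may not directly follow 'I'
    return any(a == "I" and b == "B" for a, b in zip(tags, tags[1:]))

assert violates_bilou(["B", "I", "B"])            # invalid transition I -> B
assert not violates_bilou(["B", "I", "L", "O", "U"])
```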
Unfortunately, in practice, networks are not always able to exactly learn constraints from the training data alone.

As a motivating example, consider a sequence-to-sequence network that inputs a sentence and outputs a sequence of "shift-reduce" commands that describe the sentence's parse tree. Briefly, the shift-reduce commands control a parsing algorithm by indicating how and when to use its stack. Each command controls whether to shift (s) a token onto the stack, reduce (r) the top of the stack into a parent tree node, or push (!) the current reduction back onto the stack.

To be successful, the network must generate commands that imply a valid tree over the entire input sentence. However, the decoder outputs just a single command at a time, producing some outputs that are not globally consistent, valid shift-reduce programs. Indeed, the output may not have enough shifts to include every input token in the tree or may attempt to reduce when the stack is empty. For example, the following input sentence “ So it ’s a very mixed bag . ” comprises ten space-delimited tokens (the quotations are part of the input), but our unconstrained sequence-to-sequence network outputs an invalid sequence with only nine shifts: ssr!sr!ssssrrr!rr!ssrrrrrr!. We must introduce another shift so the last token is pushed onto the stack and issue another reduce so it is inserted into the tree.
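To make these validity conditions concrete, the following is a minimal Python sketch (our own illustration, not code from the paper; the function name and interface are ours) that simulates the stack semantics of the s, r, and ! commands and checks whether a command sequence encodes a valid tree over a given number of input tokens:

    def is_valid_shift_reduce(commands, num_tokens):
        """Simulate the implicit shift-reduce parser described above.

        s -- shift the next input token onto the stack
        r -- pop (reduce) a top element of the stack
        ! -- push the completed reduction back onto the stack
        Returns True iff the commands consume exactly num_tokens tokens,
        never reduce an empty stack, and leave a single root on the stack.
        """
        stack = 0      # number of items currently on the stack
        shifted = 0    # number of input tokens shifted so far
        popped = 0     # items popped by the current run of r commands
        for c in commands:
            if c == 's':
                if popped:            # an open reduction must be closed by '!'
                    return False
                stack += 1
                shifted += 1
            elif c == 'r':
                if stack == 0:
                    return False      # cannot reduce an empty stack
                stack -= 1
                popped += 1
            elif c == '!':
                if popped == 0:
                    return False      # nothing reduced to push back
                stack += 1            # push the reduced subtree
                popped = 0
            else:
                return False
        return popped == 0 and shifted == num_tokens and stack == 1

    # The invalid output above fails (only nine shifts for ten tokens):
    print(is_valid_shift_reduce("ssr!sr!ssssrrr!rr!ssrrrrrr!", 10))    # False
    print(is_valid_shift_reduce("sssr!ssssrr!srrr!rr!ssrrrrrr!", 10))  # True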
We could attempt to fix the output with post-processing, but where is the right place to insert these commands in the sequence? There are $406 = \binom{29}{2}$ candidate locations. Further complicating our post-processing dilemma is the fact that the output contains several other errors that are seemingly unrelated to the constraint. Instead, we could attempt to fix the problem with a more sophisticated decoder, but this is difficult because the decoder outputs a single character at each time step and our constraints are global, limiting corrections to the end of the sequence when it is too late to rectify an earlier decision. A beam search is less myopic, but in practice most of the network's output mass is peaked on the best output token, resulting in little improvement.

In this paper, we propose an inference method for neural networks that enforces output constraints without employing combinatorial discrete search. The idea is to modify some (or all) of the weights for each instance at test time, iteratively nudging them, until the network's efficient unconstrained inference procedure produces a valid output. We achieve this by expressing the hard constraints as an optimization problem over the continuous weights and employ back-propagation to change them. Prima facie, back-propagation is doomed because the constraint loss is necessarily a function of the argmax that produced the discrete values. However, we circumvent this problem by optimizing over the energy of the violating outputs instead. Since the weights directly determine the output through the energy, we are able to manipulate the unconstrained inference procedure to produce the desired result. Much like scoped-learning, the algorithm customizes the weights for each example at test time (Blei et al., 2002), but does so in a way that satisfies the constraints.

When applied to the above example, our method removes enough energy mass from the invalid output space in only twelve steps, allowing unconstrained decoding to produce a valid output sequence:

ssr!sr!ssssrrr!rr!ssrrrrrr!      (initial output)
sssr!ssssrr!srrr!rr!ssrrrrrr!    (rectified output after 12 steps)

Interestingly, the network generates an additional s command at the beginning of the sequence while also producing a cascade of error corrections in later time steps: the new output now satisfies the constraints and is a perfectly correct parse. Of course, enforcing constraints does not always lead to an improvement in accuracy, but we find that often it does in practice, especially for a well-trained network. We find that our method is able to completely satisfy constraints in up to 81% of the outputs.

2 BACKGROUND

Consider a neural network that generates a variable-length output vector $y = \{y_i\}_1^{n}$ from a variable-length input vector $x = \{x_i\}_1^{m}$. For example, in image classification, the input vector encodes a fixed multi-dimensional tensor of pixel intensities and the output vector comprises just a single element corresponding to the discrete class label. In sequence-to-sequence, the input might be a variable-length vector of French tokens, and the output would be a variable-length vector of its English translation. It is sometimes convenient to think of the network as a function from input to output

$f(x; W) \mapsto y$   (1)

However, for the purpose of exposition, we separate the neural network into a real-valued model (negative energy function) that scores the compatibility of the outputs (given the weights and input) and an inference procedure that searches for high-scoring outputs.

For the model, let $y_i$ be a discrete output from an output unit and let $\psi(y_i; x, W)$ be its corresponding real-valued log-space activation score (e.g., the log of the softmax for locally normalized models or simply a linear activation value for globally normalized models). Define the negative energy $\Psi$ over a collection of output values $y$ as an exponentiated sum of log-space activation scores

$\Psi(y, x, W) = \exp\left(\sum_i \psi(y_i; x, W)\right)$   (2)

Then, inference is the problem of finding the values of the outputs $y$ that maximize the negative energy given fixed inputs $x$ and weights $W$. Thus, we can rewrite the neural network as the function:

$f(x; W) \mapsto \operatorname{argmax}_y \Psi(y, x, W)$   (3)

The purpose of separating the model from the inference procedure is so we can later formalize our optimization problem. We emphasize that this formulation is consistent with existing neural networks. Indeed, inference in feed-forward networks is a single feed-forward pass from inputs to outputs. When the outputs only depend on each other through hidden states that only depend on earlier layers of the network, feed-forward inference is exact in the sense that it finds the optimum of Equation 3. For recurrent neural networks (RNNs) whose outputs are fed back as inputs (e.g., sequence decoders), each output depends on hidden states that are functions of previous output values. However, we can still think of the usual procedure that produces the highest-scoring output at each time step as a local greedy approximation to global inference; of course, the procedure can optionally be improved with a beam.
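As a concrete (if simplified) rendering of Equations 2-3, greedy sequence decoding can be viewed as locally maximizing the summed log-space activations. The sketch below is our own illustration; its interface (a scores function over output prefixes) is an assumption, not the paper's API:

    import math

    def greedy_inference(scores, vocab, max_len=100):
        """Greedy approximation to argmax_y Psi(y, x, W) of Eq. 3.

        scores(prefix, token) returns the log-space activation
        psi(y_i; x, W) of emitting token after the partial output
        prefix; Psi is the exp of the summed activations (Eq. 2),
        so greedily maximizing each activation locally approximates
        maximizing Psi.
        """
        y, log_psi = [], 0.0
        while len(y) < max_len:
            best = max(vocab, key=lambda t: scores(y, t))
            log_psi += scores(y, best)
            y.append(best)
            if best == "<eos>":
                break
        # exp may overflow for long outputs; this is illustrative only
        return y, math.exp(log_psi)  # the output and its negative energy Psi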
3 CONSTRAINED INFERENCE FOR NEURAL NETWORKS

A major advantage of neural networks is that, once trained, inference is extremely efficient. However, constraints can render inference intractable due to discrete search. Our goal is to take advantage of the fact that unconstrained inference is inexpensive and design a constrained inference algorithm that exploits such a procedure as a black box. Our method iteratively adjusts the weights for each test-time input, concentrating the probability mass on the feasible region so that unconstrained inference becomes increasingly likely to generate an output that satisfies the constraints.

In this work, we focus on constraints that require the outputs to belong to an input-dependent context-free language $\mathcal{L}_x$ (CFL). The idea is to treat the output space of the neural network as the terminal symbols, and devise the appropriate production rules and non-terminals to express constraints on them. An advantage of employing CFLs over other formalisms such as first-order logic (FOL) is that CFLs are intuitive for expressing constraints on the outputs, especially for language models and sequence-to-sequence networks. For example, when modeling Python or Java code, it is easy to express many of the desired programming language's constraints using a CFL, but cumbersome in FOL. Indeed, CFLs are an expressive class of languages.

To motivate our algorithm, we begin with the ideal optimization problem and argue that, unlike for linear models with local constraints, the resulting Lagrangian is not well suited for globally constrained inference in neural networks. We ultimately settle on an alternative objective function that reasonably models our constrained inference problem. Although our algorithm lacks the theoretical guarantees enjoyed by classic relaxation algorithms, we nevertheless find it works well in practice. Consider the following constrained inference problem for neural networks

$\max_y \Psi(x, y, W)$ subject to $y \in \mathcal{L}_x$   (4)

Naively enforcing the constraint requires combinatorial discrete search, which is intractable in general. Instead, we prefer a smooth optimization problem with meaningful gradients to guide the search. With this in mind, let $g(y; \mathcal{L}) \mapsto r$ for $r \in \mathbb{R}^+$ be a function that measures a loss between a sentence $y$ and a grammar $\mathcal{L}$ such that $g(y; \mathcal{L}) = 0$ if and only if there are no grammatical errors in $y$. That is, $g(y; \mathcal{L}) = 0$ for the feasible region and is strictly positive everywhere else. For a large class of CFLs, $g$ could be the least-errors-count function (Lyon, 1974) or a weighted version thereof. We could then express CFL membership as an equality constraint and minimize the Lagrangian

$\min_\lambda \max_y \Psi(x, y, W) + \lambda\, g(y; \mathcal{L})$   (5)

However, this dual optimization problem has a major flaw. Our constraints are global and do not necessarily factorize over the individual outputs. Consequently, there is just a single dual variable $\lambda$. Optimizing $\lambda$ does little more than eliminate a single contour of output configurations at a time, resulting in a brute-force trial-and-error search.

Instead, observe that the network's weights control the negative energy of the output configurations. By properly adjusting the weights, we can affect the outcome of inference by removing mass from invalid outputs. The weights are likely to generalize much better than the single dual variable because in most neural networks, the weights are tied across space (e.g., CNNs) or time (e.g., RNNs). As a result, lowering the negative energy for a single invalid output has the effect of lowering the negative energy for an entire family of invalid outputs, enabling faster search. With this in mind, we introduce an independent copy $W_\lambda$ of the network's weights $W$ and minimize with respect to these "dual weights" instead of the dual variable.
This is powerful because we have effectively introduced an exponential number of "dual variables" (via the energy, which scores each of the exponentially many output configurations) that we can easily control via the weights; although similar, the new optimization is no longer equivalent to the original:

$\min_{W_\lambda} \max_y \Psi(x, y, W) + \Psi(x, y, W_\lambda)\, g(y; \mathcal{L})$   (6)

While a step in the right direction, the objective still requires combinatorial search because (1) the maximization involves two non-linear neural networks and (2) a greedy decoding algorithm is unable to cope with the global loss $g(\cdot)$ because the constraints do not factorize over the individual outputs. In contrast, the functions involved in classic Lagrangian relaxation methods for NLP have multipliers for each output variable that can be combined with linear models to form a single unified decoding problem for which efficient inference exists (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Since our non-linear functions and global constraints do not afford us the same ability, we must modify the optimization problem one final time so that we can employ the network's efficient inference procedure as a black box. In particular, we (1) remove the negative-energy term that involves the original weights $W$ and compensate with a regularizer that attempts to keep the dual weights $W_\lambda$ as close to these weights as possible and (2) maximize exclusively over the network parameterized by $W_\lambda$. The result is a different optimization problem on which our algorithm is based:

$\min_{W_\lambda} \Psi(x, y, W_\lambda)\, g(y; \mathcal{L}_x) + \alpha \|W - W_\lambda\|^2$ where $y = \operatorname{argmax}_y \Psi(x, y, W_\lambda)$   (7)

Informally, our algorithm alternates the maximization (by running efficient unconstrained inference) and minimization (by performing SGD) until it produces a feasible output or it exceeds a maximum number of iterations. For each test example, we re-initialize the dual weights to the trained weights to ensure the network does not deviate too far from the trained network. More precisely, see Algorithm 1.

Algorithm 1: Constrained inference for neural nets
Inputs: test instance $x$, input-specific CFL $\mathcal{L}_x$, pretrained weights $W$
  $W_\lambda \leftarrow W$   # reset instance-specific weights
  while not converged do
    $y \leftarrow f(x; W_\lambda)$   # perform inference using weights $W_\lambda$
    $\nabla \leftarrow \frac{\partial}{\partial W_\lambda}\big[\Psi(x, y, W_\lambda)\, g(y; \mathcal{L}_x) + \alpha \|W - W_\lambda\|^2\big]$   # compute gradient of the constraint loss
    $W_\lambda \leftarrow W_\lambda - \eta \nabla$   # update instance-specific weights with SGD or a variant thereof
  end while
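The following PyTorch-style sketch shows one way Algorithm 1 could be realized. It is our own illustration, not the authors' code: the model interface (returning both the decoded output and its summed log-space activation, i.e., log Psi) and the helper grammar_loss are assumptions.

    import torch

    def constrained_inference(model, x, grammar_loss, lr=0.05, alpha=0.0, max_iters=100):
        # Snapshot the pretrained weights W; the model's live parameters then
        # play the role of the instance-specific dual weights W_lambda.
        W = [p.detach().clone() for p in model.parameters()]
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        y = None
        for _ in range(max_iters):
            y, log_psi = model(x)           # efficient unconstrained inference (Eq. 3)
            g = grammar_loss(y)             # g(y, L_x): zero iff the constraints hold
            if g == 0:
                break
            reg = sum(((w0 - w) ** 2).sum()
                      for w0, w in zip(W, model.parameters()))
            # Psi * g + alpha * ||W - W_lambda||^2, as in Eq. 7; the paper
            # reports the regularizer is unnecessary in practice (alpha=0).
            loss = log_psi.exp() * g + alpha * reg
            opt.zero_grad()
            loss.backward()
            opt.step()
        # In practice, restore the pretrained weights afterwards so the next
        # test instance starts from W again, as Algorithm 1 prescribes.
        return y

Exponentiating the summed activations can overflow for long sequences; a numerically safer variant of this sketch could weight the constraint loss by log Psi instead, at the cost of deviating from the stated objective.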
4 APPLICATION TO PARSING

Consider the structured prediction problem of syntactic parsing, in which the goal is to input a sentence comprising a sequence of tokens and output a tree describing the grammatical parse of the sentence. One way to model the problem with neural networks is to linearize the representation of the parse tree and then employ the familiar sequence-to-sequence model (Vinyals et al., 2015a).

Let us suppose we linearize the tree using a sequence of shift (s) and reduce (r, r!) commands that control an implicit shift-reduce parser. Intuitively, these commands describe the exact instructions for converting the input sentence into a complete parse tree: the interpretation of the symbol s is that we shift an input token onto the stack; the interpretation of the symbol r is that we start (or continue) reducing (popping) the top elements of the stack; and the interpretation of a third symbol ! is that we stop reducing and push the reduced result back onto the stack. Thus, given an input sentence and an output sequence of shift-reduce commands, we can deterministically recover the tree by simulating a shift-reduce parser. For example, the sequence ssrr!ssr!rr!rr! encodes a type-free version of the parse tree (S (NP the ball) (VP is (NP red))) for the input sentence "the ball is red". It is easy to recover the tree structure from the input sentence and the output commands by simulating a shift-reduce parser, performing one command at a time as prescribed by the classic algorithm.

Note that for output sequences to form a valid tree over the input, the sequence must satisfy a number of constraints. First, the number of shifts must equal the number of input tokens $m$; otherwise, either the tree would not cover the entire input sentence or the tree would contain spurious terminal symbols. Second, the parser cannot issue a reduce command if there are no items left on the stack. Third, the number of reduces must be sufficient to leave just a single item, the root node, on the stack.

We can express most of these constraints with a CFL

$\mathcal{L} = \{\; G \to sRr! \;,\; R \to sRr \;,\; R \to Rr! \;,\; R \to RR \;,\; R \to \epsilon \;\}$   (8)

Intuitively, Rule 1 states that a valid shift-reduce command set must begin with a shift (since the stack is initially empty, there is nothing to reduce) and end with a reduce that places the final result on the stack. Rule 2 states that if we do a shift, then we need to reduce the shifted token at some point in the future. Rule 3 states that if we do not shift, then we are allowed to reduce only if we also push the result onto the stack. Rule 4 allows for multiple subtrees. Rule 5 is the base case.

Note, however, that this grammar is for a general-purpose shift-reduce language, whereas we need to constrain the number of shifts to equal the number of input tokens $m$. Since the constraint is a bit verbose to express with production rules, we can instead write the regular language $(s(r!)^\star)^{m}(r!)^\star$, where $m$ is the number of elements in $x$, and intersect it with our CFL.

$\mathcal{L}_x = \mathcal{L} \cap (s(r!)^\star)^{m}(r!)^\star$   (9)

Rather than relying on a general-purpose algorithm to compute $g(y; \mathcal{L}_x)$ that measures the number of grammatical errors, we instead implement it specifically for our language. Let $\mathrm{ct}_{i=1}^{n}(b(i))$ be the function that counts the number of times proposition $b(i)$ is true. Now, define the following loss

$g(y; \mathcal{L}_x) = \big(m - \mathrm{ct}_i(y_i{=}s)\big)^2 + \Big(\sum_i \big[\mathrm{ct}_{j>i}(y_j{=}r) - \mathrm{ct}_{j>i}(y_j \in \{s,!\})\big]\Big)^2 + \big(\mathrm{ct}_i(y_i{=}r) - \mathrm{ct}_i(y_i \in \{s,!\})\big)^2$   (10)

The first term measures the amount of violation due to the regular language, and the second and third terms measure the amount of violation according to the CFL.
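As a sanity check on the counting notation, here is a direct Python transcription of Equation 10 as reconstructed above. This is our own sketch; the sign and grouping conventions inside the extracted formula are partly ambiguous, so the code simply follows our reading of the equation:

    def shift_reduce_loss(y, m):
        """Constraint loss g(y, L_x) of Eq. 10 (as we read it) for a
        command string y over {'s', 'r', '!'} and m input tokens.
        Larger values indicate more counting violations."""
        n = len(y)
        # Term 1 (regular language): the number of shifts must equal m.
        shifts = sum(1 for c in y if c == 's')
        term1 = (m - shifts) ** 2
        # Term 2 (CFL): balance of reduces vs. shifts/pushes over suffixes.
        term2 = sum(
            sum(1 for c in y[i + 1:] if c == 'r')
            - sum(1 for c in y[i + 1:] if c in 's!')
            for i in range(n)
        ) ** 2
        # Term 3 (CFL): overall balance of reduces vs. shifts/pushes.
        term3 = (sum(1 for c in y if c == 'r')
                 - sum(1 for c in y if c in 's!')) ** 2
        return term1 + term2 + term3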
5 RELATED WORK

There has been recent work in applying neural networks to structured prediction problems. For example, the recent structured prediction energy networks (SPENs) combine graphical models and neural networks via an energy function defined over the output variables (Belanger & McCallum, 2016). SPENs focus on soft constraints (via the energy function) and perform inference by relaxing the binary output variables to be continuous and then backpropagating into them. In contrast, our method focuses on hard constraints and we backpropagate into the weights rather than into the outputs directly. We could combine our method with SPENs to handle soft constraints; for example, by back-propagating the output energy into the weights instead of the relaxed outputs themselves.

There has been recent work on applying neural networks to parsing problems that require the ability to handle hard constraints, for example, by employing a sequence-to-sequence network (Vinyals et al., 2015a) or a custom network designed for shift-reduce parsing (Dyer et al., 2016). The former requires the output to form a valid parse tree and hence employs post-processing to ensure this property. The latter satisfies constraints as part of the decoding process by sampling over a combinatorial space. Our approach does not rely on post-processing or discrete search.

Another intriguing approach is to distill the hard constraints into the weights at training time using a teacher network (Hu et al., 2016). The method is appealing because it does not require constrained inference or combinatorial search. However, the method must achieve a difficult balance between the loss due to the training data and the loss due to the constraint violations. Further, it would crucially rely on the network's ability to generalize the constraints learned on the training data to the testing data.

Finally, our method highly resembles dual decomposition and, more generally, Lagrangian relaxation for structured prediction (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). In such techniques, it is assumed that a computationally efficient inference algorithm can maximize over a superset of the feasible region (indeed, this assumption parallels our exploitation of the fact that unconstrained inference in the neural network is efficient). Then, the method employs gradient descent to gradually concentrate this superset onto the feasible region until the constraints are satisfied. However, for computational reasons, these techniques assume that the constraints factorize over the output and that the functions are linear so that they can be combined into a single model. In contrast, we have a single dual variable, so we instead minimize with respect to the weights, which generalize better over the output. Further, we are unable to combine the dual into a single model over which we can do inference because the network is highly non-linear.

task            | inference     | weights changed (W_λ)     | conversion rate | accuracy
azbz            | unconstrained | none                      | 0.0%            | 75.6%
azbz            | constrained   | all                       | 65.2%           | 82.4%
azbz            | constrained   | output only               | 20.9%           | 77.8%
azbz            | constrained   | encoder only              | 58.2%           | 82.5%
azbz            | constrained   | decoder only              | 57.4%           | 82.3%
sr (no types)   | unconstrained | none                      | 0.0%            | 84.0%
sr (no types)   | constrained   | all                       | 81.8%           | 84.4%
sr (with types) | unconstrained | none                      | 0.0%            | 87.8%
sr (with types) | constrained   | all                       | 79.2%           | 88.3%
sr (with types) | constrained   | output only               | 5.0%            | 88.1%
sr (with types) | constrained   | decoder (top layer)       | 36.2%           | 88.2%
sr (with types) | constrained   | decoder (all layers)      | 54.7%           | 88.3%
sr (with types) | constrained   | decoder (top) + attention | 38.0%           | 88.1%
sr (with types) | constrained   | decoder (all) + attention | 56.5%           | 88.2%

Table 1: Conversion rates on all three tasks with 100 steps of SGD. Note that satisfying the constraints has no negative effect on accuracy and often has a positive effect.

input: bzazbzazbzazazbzbzbzbzbz → ground truth: zbaaazbaaazbaaaaaazbzbzbzbzb
iteration | output                          | loss  | accuracy
0         | zbaaazbaaazbaaaaaazbzbzbaaazbzb | 0.260 | 75.0
39        | zbaaazbaaazbaaaaaazbzbzbaaazbzb | 0.259 | 75.0
40        | zbaaazbaaazbaaaaaazbzbzbaaazb   | 0.250 | 80.0
72        | zbaaazbaaazbaaaaaazbzbzbaaazb   | 0.249 | 80.0
73        | zbaaazbaaazbaaaaaazbzbzbzbzb    | 0.0   | 100.0

Table 2: An example for which enforcing the constraints improves accuracy. Red indicates errors. The output changes more than once before the constraints are finally enforced. Greedy decoding with constraints might correct this example because the spurious a's are at the end of the sequence.

6 EXPERIMENTS

In this section we empirically evaluate our constrained inference procedure on two sequence-to-sequence tasks. The first is a transduction task between two simple languages, which we describe next.
The second is the sequence-to-sequence shift-reduce parsing task described in Section 4.

input: azazbzazbzbzazbzbzbzbzbz → ground truth: aaaaaazbaaazbzbaaazbzbzbzbzb
iteration | output                        | loss   | accuracy
0         | aaaaaazbaaazbaaazbzbzbzbaaazb | 0.2472 | 66.7
1         | aaaaaazbaaazbaaazbzbzbzbaaazb | 0.2467 | 66.7
2         | aaaaaazbaaazbaaazbzbzbzbaaazb | 0.2462 | 66.7
3         | aaaaaazbaaazbzbaaazbzbzbzbzb  | 0.0    | 100.0

Table 3: An example for which enforcing the constraints improves accuracy. Red indicates errors. Note that greedy decoding with constraints would not fix the errors in the middle, since the errors are made before the constraints are violated. In contrast, the proposed method takes the constraints into account in a global manner, allowing earlier errors to be corrected by future constraint violations.

input: bzbzbzbzazbzbzazazazazbz → ground truth: zbzbzbzbaaazbzbaaaaaaaaaaaazb
iteration | output                         | loss   | accuracy
0         | zbzbzbzbaaazbaaaaaaaaaaaazbaaa | 0.2954 | 74.2
4         | zbzbzbzbzbaaaaaaaaazbzbaaaaaa  | 0.0    | 60.0

Table 4: An example for which enforcing the constraints degrades accuracy. Errors in red.

A transducer $T : \mathcal{L}_0 \to \mathcal{L}_1$ is a function from a source language to a target language. For the purpose of the experiments, $T$ is known and our goal is to learn it from data. We choose a transducer similar to those studied in recent work (Grefenstette et al., 2015). The source language $\mathcal{L}_0$ is $(az|bz)^\star$ and the target language $\mathcal{L}_1$ is $(aaa|zb)^\star$. The transducer is defined to map az to aaa and bz to zb. For example, $T(\texttt{bzazbz}) \mapsto \texttt{zbaaazb}$. The training set comprises 1934 sequences of lengths 2-20 and the test set contains sentences of lengths 21-24. As is common practice, we employ shorter sentences for training to require generalization to longer sentences at test time.

We employ a thirty-two-hidden-unit, single-layered, attentionless, sequence-to-sequence long short-term memory (LSTM) network in which the decoder LSTM inputs the final encoder state at each time step. The encoder and decoder LSTMs each have their own set of weights. We train the network for 1000 epochs using RMSProp to maximize the likelihood of the output (decoder) sequences in the training set. The network achieves perfect train accuracy while learning the rules of the output grammar nearly perfectly, even on the test set. However, despite learning the training set perfectly, the network fails to learn the input-specific constraint that the number of a's in the output should be three times the number in the input. We implement a loss for this constraint and evaluate how well our method enforces the constraint at test time:

$g(y; \mathcal{L}_{x_1}) = \Big((n+m)^{-1}\big(3 \sum_{i} \mathbb{I}(x_i{=}a) - \sum_{i} \mathbb{I}(y_i{=}a)\big)\Big)^2$

where $n + m$, the combined input/output length, normalizes the loss between 0 and 1. For constrained inference we run Algorithm 1 and employ vanilla stochastic gradient descent with a learning rate of 0.05 and no weight decay. We cap the number of iterations at a maximum of 100.
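This constraint loss is simple enough to write out directly; the sketch below is our own transcription of the formula above (the function name is ours):

    def azbz_constraint_loss(x, y):
        """Constraint loss g(y, L_x1) for the azbz task: the output must
        contain three a's for every a in the input. (n + m), the combined
        input/output length, normalizes the loss to [0, 1]."""
        n, m = len(y), len(x)
        a_in = sum(1 for c in x if c == 'a')
        a_out = sum(1 for c in y if c == 'a')
        return ((3 * a_in - a_out) / (n + m)) ** 2

    # e.g., the correct transduction incurs zero loss:
    print(azbz_constraint_loss("bzazbz", "zbaaazb"))  # 0.0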
The top section of Table 1 contains the results for this azbz task. We use the term "converted" to refer to a sentence that initially had a constraint violation, but was later fixed by the constrained-inference procedure. The conversion rate is the percentage of such sentences that we convert: on this task, up to two-thirds. We experiment with which subset of the weights is best for satisfying the constraints, finding that it is best to modify them all.

We also report accuracy to study an initial concern. Specifically, we had to omit the negative energy of the original weights $W$ from our optimization problem, Equation 7, potentially allowing the network to find a set of dual weights $W_\lambda$ that happen to satisfy the constraints, but that have poor performance. However, we found this not to be the case. In fact, we report the token-wise accuracy over the examples for which the unconstrained neural network violated constraints and find that, on the contrary, accuracy improves. Further, we find the regularizer is unnecessary, since the initialization $W_\lambda = W$ ensures the network never drifts too far.

In order to gain a better understanding of the algorithm's behavior, we provide data cases that highlight both success and failure (Tables 2, 3, 4). The title of these tables is the input and the desired ground-truth output. The rows of the table show the network's output at each iteration (as indicated). The loss column is the constraint loss weighted by the output's energy, $\Psi(x, y, W_\lambda)\, g(y; \mathcal{L}_{x_1})$, and the final column is the token-wise accuracy between the output and the ground truth.

input: ⟨“ So it ’s a very mixed bag . ”⟩ → ground truth: sssr!ssssrr!srrr!rr!ssrrrrrr!
iteration | output                        | loss   | accuracy
0         | ssr!sr!ssssrrr!rr!ssrrrrrr!   | 0.0857 | 33.3%
11        | ssr!sr!ssssrrr!rr!ssrrrrrr!   | 0.0855 | 33.3%
12        | sssr!ssssrr!srrr!rr!ssrrrrrr! | 0.0000 | 100.0%

Table 5: A shift-reduce example for which the method successfully enforces constraints. The initial output has only nine shifts, but there are ten tokens in the input. Enforcing the constraint not only corrects the number of shifts to ten, but changes the implied tree structure to the correct tree.

Table 2 contains an example for which our method successfully satisfies the constraints, resulting in perfect accuracy. However, because the constraint violation appears at the end of the string, a greedy decoder that opportunistically enforces constraints on the fly could potentially correct this error. In Table 3 we show a more interesting example for which such a greedy decoder would not be as successful. In particular, the unconstrained network outputs the final aaa too early in the sequence, but the constraint that controls the number of a's in the output is not violated until the end of the sequence. In contrast, our method takes the constraint into account globally, allowing the network to not only rectify the constraint, but to achieve perfect accuracy on the sentence (in just four gradient updates). Finally, in Table 4, we show an example for which enforcing the constraints hurts the accuracy. The updates cause the network to erroneously change outputs that were actually correct. This can happen if (a) the underlying network is sometimes inaccurate in its outputs or the confidence/probabilities thereon, or (b) the gradient steps are too large, causing the network to completely leapfrog over the correct solution in a single step. The latter can be avoided by normalizing the constraint loss so it does not grow unbounded with the number of outputs and by erring on the side of a smaller learning rate.

We repeat the same experiment (middle section of Table 1), but on the shift-reduce parsing task described in Section 4. We convert the Wall Street Journal portion of the Penn Treebank (PTB) into shift-reduce commands and randomly split the data into 30k train and 9.2k test examples. We increase the number of hidden units to sixty-four to accommodate the larger input space (50k words) and employ Equation 10 (normalized by sequence length) for the constraint loss.
We measure the sequence-aligned token accuracy. Otherwise, we employ the exact same experimental parameters as for the azbz task, both for training the LSTM and for our algorithm. We find that our algorithm performs even better on the real-world task, converting over 80% of the violated outputs. We again find that our procedure has no negative impact on accuracy, which in fact improves, but not as substantially as for the azbz task. Table 5 contains a successful example that we had previously highlighted in Section 1. The algorithm satisfies the constraints, and also corrects the remaining output errors.

Finally, we conduct a version of the shift-reduce experiment that includes the phrase types (e.g., noun phrase (NP)). To accommodate the larger output space (the output alphabet size increases to 479), we employ a larger network with 128 hidden units, attention, and three layers. Note that even this more sophisticated network fails to learn the constraints from data, and adding layers does not help. The larger network affords us the opportunity to experiment with modifying different subsets of weights for enforcing constraints. As seen in the last section of Table 1, modifying all the weights works best, converting 79.2% of the violating sentences, again without negatively affecting accuracy.

7 CONCLUSION

We presented an algorithm for satisfying constraints in neural networks that avoids combinatorial search, but employs the network's efficient unconstrained procedure as a black box. We evaluated the algorithm on two sequence-to-sequence tasks, a toy transducer problem and a real-world shift-reduce parsing problem. We found that the method was able to completely rectify up to 80% of violated outputs when capping the number of iterations at 100. Often, enforcing constraints caused the accuracy to improve, dispelling initial concerns that adjusting the weights at test time would be treacherous. Our method currently lacks the same theoretical guarantees as classic Lagrangian relaxation methods, so in future work we want to focus on supplemental theory and additional objective functions. We also hope to extend the work to handle soft constraints, for example, as imposed by an external language model. | rk_Zn-G4x | Not very convincing | 3: Clear rejection | This paper proposes a way of enforcing constraints (or penalizing violations of those constraints) on outputs in structured prediction problems, while keeping inference unconstrained. The idea is to tweak the neural network parameters to make those output constraints hold. The underlying model is that of structured prediction energy networks (SPENs), recently proposed by Belanger et al.
Overall, I didn't find the approach very convincing, and the paper has a few problems regarding the empirical evaluation. There are also some imprecisions throughout. The proposed approach (eqs. 6 and 7) looks more like a "little hack" to try to make it vaguely similar to Lagrangian relaxation methods than something that is theoretically well motivated.
Before eq. 6: "an exponential number of dual variables" -- why exponential? It's not one dual variable per output.
From the clarification questions:
- The accuracy reported in Table 1 needs to be explained.
- For the parsing experiments, it would be good to report the usual F1 metric of PARSEVAL, and to compare with state-of-the-art systems.
- should use the standard training/dev/test splits of the Penn Treebank.
The reported conversion rate in Table 1 does not tell us how many violations are left by the unconstrained decoder to start with. It would be good to know what happens in highly structured problems where these violations are frequent, since these are the problems where the proposed approach could be more beneficial.
Minor comments/typos:
- sec.1: "there are" -> there is?
- sec 1: "We find that out method is able to completely satisfy constraints on 81% of the outputs." -> at this point, without specifying the problem, the model, and the constraints, this means very little. How many constraints does the unconstrained method satisfy?
- sec 2 (last paragraph): "For RNNs, each output depends on hidden states that are functions of previous output values" -- this is not very accurate, as it doesn't hold for general RNNs, but only for those (e.g. RNN decoders in language modeling) where the outputs are fed back to the input in the next time frame.
- sec 3: "A major advantage of neural networks is that once trained, inference is extremely efficient." -- advantage over what? also, this is not necessarily true, depends on the network and on its size.
- sec 3: "our goal is take advantage" -> to take advantage
- last paragraph of sec 6: "the larger model affords us" -> offers?
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |