Dataset columns (name, type, observed range):

paper_id           string (9 to 13 chars)
venue              string (171 distinct values)
year               string (7 distinct values)
paper_title        string (0 to 188 chars)
paper_authors      string (4 to 1.01k chars)
paper_abstract     string (0 to 5k chars)
paper_keywords     string (2 to 679 chars)
paper_content      string (0 to 100k chars)
review_id          string (9 to 12 chars)
review_title       string (0 to 500 chars)
review_rating      string (92 distinct values)
review_text        string (0 to 28.3k chars)
review_confidence  string (21 distinct values)
r1GKzP5xx
ICLR.cc/2017/conference
2017
Recurrent Normalization Propagation
["C\u00e9sar Laurent", "Nicolas Ballas", "Pascal Vincent"]
We propose an LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step and therefore requires fewer computations overall. We also investigate the impact of the parametrization on gradient flow and present a way of initializing the weights accordingly. We evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly to or better than other recurrent normalization approaches, while being faster to execute.
["Deep learning", "Optimization"]
ABSTRACT

We propose an LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step and therefore requires fewer computations overall. We also investigate the impact of the parametrization on gradient flow and present a way of initializing the weights accordingly. We evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly to or better than other recurrent normalization approaches, while being faster to execute.

1 INTRODUCTION

Recurrent neural networks have shown remarkably good performance on sequential modelling tasks including machine translation (Bahdanau et al., 2015), visual captioning (Xu et al., 2015; Yao et al., 2015) and question answering (Hermann et al., 2015). However, such models remain notoriously hard to train with gradient backpropagation. As the number of time steps in the input sequence increases, the contractive or expanding effects associated with the state-to-state transformation at each time step can shrink or grow exponentially, leading respectively to vanishing or exploding gradients (Hochreiter, 1991; Bengio et al., 1994; Pascanu et al., 2012). In particular, with gradient vanishing, states at a given time are not influenced by changes happening much earlier in the sequence, preventing the model from learning long-term dependencies.

While the long-term dependencies problem is unsolvable in absolute terms (Hochreiter, 1991; Bengio et al., 1994), different RNN parametrizations, such as the LSTM or GRU (Hochreiter & Schmidhuber, 1997; Cho et al., 2014), can help mitigate it. Furthermore, the LSTM parametrization has recently been extended to include layer-wise normalization (Cooijmans et al., 2016; Ba et al., 2016), building upon Batch Normalization (BN) (Ioffe & Szegedy, 2015). By normalizing the hidden-state distributions to a fixed scale and shift through the different time steps, normalized LSTMs have been shown to ease training, resulting in a parametrization that converges faster than a standard LSTM.

However, a normalized LSTM introduces extra computation, as it involves standardizing the hidden states, enforcing their means and variances at each time step. By contrast, we propose an LSTM reparametrization that, by construction, cheaply preserves the normalization of the hidden states through time. Our approach can be seen as the recurrent counterpart of the recent normalization propagation applied in feed-forward networks (Arpit et al., 2016). It results in faster training convergence, similar to Layer Normalization (LN) and Recurrent Batch Normalization, while requiring fewer operations per time step and generalizing naturally to variable-length sequences.

In addition, we investigate the impact of our parametrization, and more generally of normalized LSTMs, on the vanishing and exploding gradient problems.
We observe that layer-wise normalization provides a direct way to orient LSTM behaviour toward either gradient explosion or vanishing, and therefore biases the LSTM either towards reliably storing bits of information throughout time or towards being more sensitive to new input changes.

We empirically validate our proposal on character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993) and on image generative modelling, applying our normalization to the DRAW architecture (Gregor et al., 2015).

The paper is structured as follows: section 2 provides a brief overview of the Batch-Normalized LSTM, section 3 derives our Normalized LSTM, section 4 investigates the impact of such normalization on the gradient flow, section 5 presents experimental results, and section 6 concludes.

2 PRE-REQUISITES

2.1 BN-LSTM

Batch-Normalized Long Short-Term Memory (BN-LSTM) (Cooijmans et al., 2016) is a reparametrization of the LSTM that takes advantage of Batch Normalization (BN) to address the Covariate Shift (Shimodaira, 2000) occurring between time steps. Changes in the LSTM output at one time step are likely to cause correlated changes in the summed inputs of the next time steps in the sequence. This Temporal Covariate Shift can slow down the training process, since the parameters of the model must not only be updated to minimize the cost of the task at hand but also adapt to the changing distribution of the inputs. In other words, the later time steps in an LSTM need to account for the shifting distribution of the previous hidden states.

BN-LSTM proposes to reduce this temporal covariate shift by fixing the mean and the variance at each time step, relying on the BN transform

$$\mathrm{BN}(\mathbf{x}; \gamma, \beta) = \gamma \odot \frac{\mathbf{x} - \hat{\mathbb{E}}[\mathbf{x}]}{\sqrt{\widehat{\mathrm{Var}}[\mathbf{x}] + \epsilon}} + \beta \tag{1}$$

where $\hat{\mathbb{E}}[\mathbf{x}]$ and $\widehat{\mathrm{Var}}[\mathbf{x}]$ are the activation mean and variance estimated from the mini-batch samples. Given an input sequence $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_T)$, the BN-LSTM defines a sequence of hidden states $\mathbf{h}_t$ and memory cell states $\mathbf{c}_t$ according to

$$(\tilde{\mathbf{i}}_t, \tilde{\mathbf{f}}_t, \tilde{\mathbf{o}}_t, \tilde{\mathbf{g}}_t)^\top = \mathrm{BN}(\mathbf{W}_x \mathbf{x}_t; \gamma_x, \beta_x) + \mathrm{BN}(\mathbf{W}_h \mathbf{h}_{t-1}; \gamma_h, \beta_h) + \mathbf{b} \tag{2}$$
$$\mathbf{c}_t = \sigma(\tilde{\mathbf{i}}_t) \odot \tanh(\tilde{\mathbf{g}}_t) + \sigma(\tilde{\mathbf{f}}_t) \odot \mathbf{c}_{t-1} \tag{3}$$
$$\mathbf{h}_t = \sigma(\tilde{\mathbf{o}}_t) \odot \tanh(\mathrm{BN}(\mathbf{c}_t; \gamma_c, \beta_c)) \tag{4}$$

where $\mathbf{W}_h \in \mathbb{R}^{d_h \times 4d_h}$, $\mathbf{W}_x \in \mathbb{R}^{d_x \times 4d_h}$, $\mathbf{b} \in \mathbb{R}^{4d_h}$ and the initial states $\mathbf{h}_0 \in \mathbb{R}^{d_h}$, $\mathbf{c}_0 \in \mathbb{R}^{d_h}$ are model parameters. $\sigma$ is the logistic sigmoid function and $\odot$ denotes the Hadamard product. Ba et al. (2016) later extended this parametrization by estimating the normalizing statistics ($\hat{\mathbb{E}}[\mathbf{x}]$, $\widehat{\mathrm{Var}}[\mathbf{x}]$) over the feature channels rather than the mini-batch samples, in order to naturally generalize to variable-length sequences.

2.2 NORMALIZATION PROPAGATION

While improving training convergence speed relative to a standard LSTM (Cooijmans et al., 2016), BN-LSTM needs to perform more computations per sample, as it requires computing the BN transform three times at each time step. Normalization Propagation (Norm Prop) (Arpit et al., 2016), on the other hand, aims to preserve the normalization of the input throughout the network. Unlike BN, the normalization does not rely on mini-batch statistics; instead, it is the structure of the network itself that maintains the normalization. We therefore propose an LSTM reparametrization that preserves the normalization through the different time steps in order to avoid this extra computation.
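To make equations (1)-(4), and the three-BN-transforms-per-step overhead just mentioned, concrete, here is a minimal NumPy sketch of a single BN-LSTM step. The helper names (`bn`, `bn_lstm_step`) and the per-step use of raw mini-batch statistics are illustrative simplifications (the actual method also learns β and keeps separate running statistics per time step), not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn(x, gamma, eps=1e-5):
    # Equation (1) with beta folded into the cell bias b; statistics are
    # estimated over the mini-batch axis (axis 0).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps)

def bn_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, gammas):
    """One BN-LSTM step (equations 2-4). Shapes: x_t (B, dx),
    h_prev/c_prev (B, dh), Wx (dx, 4*dh), Wh (dh, 4*dh), b (4*dh,)."""
    gx, gh, gc = gammas
    pre = bn(x_t @ Wx, gx) + bn(h_prev @ Wh, gh) + b        # eq. (2)
    i, f, o, g = np.split(pre, 4, axis=1)
    c_t = sigmoid(i) * np.tanh(g) + sigmoid(f) * c_prev     # eq. (3)
    h_t = sigmoid(o) * np.tanh(bn(c_t, gc))                 # eq. (4)
    return h_t, c_t
```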
3 NORMALIZED LSTM

While the properties of Norm Prop are appealing for recurrent models, its application to the LSTM is not straightforward due to the memory cell structure. In this section we show how to derive an LSTM reparametrization that preserves the normalization of the state $\mathbf{h}_t$ through time.

3.1 CONSTRUCTION OF THE NORMALIZED LSTM

Following Arpit et al. (2016) and Salimans & Kingma (2016), we will attempt to ensure, through an analytical reparametrization, that several intermediate quantities in the computation remain approximately standardized. We first compensate for the distribution changes induced by the weight matrices in the computation of the gates and of the cell candidate $\mathbf{g}_t$:

$$(\tilde{\mathbf{i}}_t, \tilde{\mathbf{f}}_t, \tilde{\mathbf{o}}_t, \tilde{\mathbf{g}}_t)^\top = \gamma_x \frac{\mathbf{W}_x}{||\mathbf{W}_{x,i}||_2} \mathbf{x}_t + \gamma_h \frac{\mathbf{W}_h}{||\mathbf{W}_{h,i}||_2} \mathbf{h}_{t-1} + \mathbf{b} \tag{5}$$

where $||\mathbf{W}_{\cdot,i}||_2$ is the vector of L2 norms of the rows of the matrix, and $\gamma_x$ and $\gamma_h$ are trainable rescaling factors that restore the representational power lost in the rescaling of the weight matrices. To preserve the constant error carousel mechanism of the LSTM, we use the usual cell update

$$\mathbf{c}_t = \sigma(\tilde{\mathbf{i}}_t) \odot \tanh(\tilde{\mathbf{g}}_t) + \sigma(\tilde{\mathbf{f}}_t) \odot \mathbf{c}_{t-1} \tag{6}$$

Let us now construct an approximate analytical estimate of $\mathrm{Var}[\mathbf{c}_t]$. The evolution of $\mathbf{c}_t$ through time can be seen as a geometric series with $\sigma(\tilde{\mathbf{f}}_t)$ as the ratio. Since $\sigma(\cdot)$ is upper-bounded by (and in practice smaller than) 1, $\mathbf{c}_t$ will converge in expectation to a fixed value. This is the reason why, in BN-LSTM, the mini-batch statistics converge to a fixed value after a few time steps (Cooijmans et al., 2016). Moreover, if we consider $\tilde{\mathbf{i}}_t$, $\tilde{\mathbf{f}}_t$, $\tilde{\mathbf{g}}_t$ and $\mathbf{c}_{t-1}$ to be (as a rough approximation) independent (this assumption is strong, but we do not have any easy way to model the covariance between those terms without estimating it from the data), we can use the variance product rule for two independent random variables $X$ and $Y$,

$$\mathrm{Var}[XY] = \mathrm{Var}[X]\,\mathrm{Var}[Y] + \mathrm{Var}[X]\,\mathbb{E}[Y]^2 + \mathrm{Var}[Y]\,\mathbb{E}[X]^2 \tag{7}$$

to compute $\mathrm{Var}[\mathbf{c}_t]$. Considering that $\mathbb{E}[\tanh(\tilde{\mathbf{g}}_t)] \approx 0$ and assuming that the cell has converged, i.e. $\mathrm{Var}[\mathbf{c}_t] = \mathrm{Var}[\mathbf{c}_{t-1}]$, we have

$$\mathrm{Var}[\mathbf{c}_t] = \frac{\mathrm{Var}[\tanh(\tilde{\mathbf{g}}_t)]\left(\mathrm{Var}[\sigma(\tilde{\mathbf{i}}_t)] + \mathbb{E}[\sigma(\tilde{\mathbf{i}}_t)]^2\right)}{1 - \left(\mathrm{Var}[\sigma(\tilde{\mathbf{f}}_t)] + \mathbb{E}[\sigma(\tilde{\mathbf{f}}_t)]^2\right)} \tag{8}$$

We can therefore analytically or numerically compute the mean and variance of each of those elements, assuming that both the input $\mathbf{x}_t$ and the hidden state $\mathbf{h}_{t-1}$ are independently drawn from $\mathcal{N}(0, 1)$:

$$\mathbb{E}[\mathbf{i}_t] = \mathbb{E}[\sigma(\gamma_x z_x + \gamma_h z_h)] \tag{9}$$
$$\mathrm{Var}[\mathbf{i}_t] = \mathrm{Var}[\sigma(\gamma_x z_x + \gamma_h z_h)] \tag{10}$$
$$\mathbb{E}[\mathbf{g}_t] = \mathbb{E}[\tanh(\gamma_x z_x + \gamma_h z_h)] \tag{11}$$
$$\mathrm{Var}[\mathbf{g}_t] = \mathrm{Var}[\tanh(\gamma_x z_x + \gamma_h z_h)] \tag{12}$$

where $z_x, z_h \sim \mathcal{N}(0, 1)$. The statistics of the gates $\mathbf{o}_t$ and $\mathbf{f}_t$ can be computed in a similar way. We can then compute the value to which $\mathrm{Var}[\mathbf{c}_t]$ converges. Using this variance estimate, we compensate $\mathbf{c}_t$ in order to compute the next hidden state $\mathbf{h}_t$:

$$\mathbf{h}_t = \sigma(\tilde{\mathbf{o}}_t) \odot \tanh\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right) \tag{13}$$

Since we assumed that $\mathrm{Var}[\mathbf{h}_{t-1}] = 1$, to ensure that this also holds for $\mathbf{h}_t$ we need to correct for the variance induced by the product of the tanh with the output gate. Using the variance product rule (equation 7) again, we obtain

$$\mathrm{Var}[\mathbf{h}_t] = \mathrm{Var}\left[\tanh\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right)\right]\left(\mathrm{Var}[\sigma(\tilde{\mathbf{o}}_t)] + \mathbb{E}[\sigma(\tilde{\mathbf{o}}_t)]^2\right) \tag{14}$$

We can estimate this variance through computations similar to equation 12. Scaling $\mathbf{h}_t$ by $1/\sqrt{\mathrm{Var}[\mathbf{h}_t]}$ ensures that its variance is 1, and so the normalization is maintained throughout the recurrence.

3.2 PROPOSED REPARAMETRIZATION

Using equations 5, 6 and 13, we propose the following reparametrization of the LSTM, simply called the Normalized LSTM:

$$(\tilde{\mathbf{i}}_t, \tilde{\mathbf{f}}_t, \tilde{\mathbf{o}}_t, \tilde{\mathbf{g}}_t)^\top = \gamma_x \frac{\mathbf{W}_x}{||\mathbf{W}_{x,i}||_2} \mathbf{x}_t + \gamma_h \frac{\mathbf{W}_h}{||\mathbf{W}_{h,i}||_2} \mathbf{h}_{t-1} + \mathbf{b} \tag{15}$$
$$\mathbf{c}_t = \sigma(\tilde{\mathbf{i}}_t) \odot \tanh(\tilde{\mathbf{g}}_t) + \sigma(\tilde{\mathbf{f}}_t) \odot \mathbf{c}_{t-1} \tag{16}$$
$$\mathbf{h}_t = \frac{1}{\sqrt{\mathrm{Var}[\mathbf{h}_t]}}\left[\sigma(\tilde{\mathbf{o}}_t) \odot \tanh\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right)\right] \tag{17}$$

where $\mathrm{Var}[\mathbf{c}_t]$ and $\mathrm{Var}[\mathbf{h}_t]$ are computed using equations 8 and 14, respectively. These two variances are estimated at the initialization of the network (equations 9 to 12) and are then kept fixed during training, as in Norm Prop. $\gamma_x$, $\gamma_h$ and $\gamma_c$ are parameters learned via gradient descent.
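Since equations 8-14 only require the means and variances of σ(·) and tanh(·) under a Gaussian pre-activation, they are easy to estimate numerically. The following sketch does so by Monte Carlo sampling; the sample size, function names, and the treatment of the rescaled cell as N(0,1) in `var_h` are my illustrative choices, not taken from the paper.

```python
import numpy as np

def gate_stats(gamma_x, gamma_h, n=1_000_000, seed=0):
    """Monte Carlo estimates of E/Var of sigmoid and tanh of
    z = gamma_x*z_x + gamma_h*z_h with z_x, z_h ~ N(0,1) (eqs. 9-12)."""
    rng = np.random.default_rng(seed)
    z = gamma_x * rng.standard_normal(n) + gamma_h * rng.standard_normal(n)
    sig = 1.0 / (1.0 + np.exp(-z))
    th = np.tanh(z)
    return (sig.mean(), sig.var()), (th.mean(), th.var())

def fixed_point_var_c(gamma_x, gamma_h):
    """Variance to which c_t converges (eq. 8), assuming the input and
    forget gates share the same pre-activation statistics."""
    (e_sig, v_sig), (_, v_tanh) = gate_stats(gamma_x, gamma_h)
    gate_second_moment = v_sig + e_sig**2          # E[sigma(z)^2] < 1
    return v_tanh * gate_second_moment / (1.0 - gate_second_moment)

def var_h(gamma_x, gamma_h, gamma_c, n=1_000_000, seed=1):
    """Variance of h_t (eq. 14), approximating c_t/sqrt(Var[c_t]) ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    (e_sig, v_sig), _ = gate_stats(gamma_x, gamma_h)
    th = np.tanh(gamma_c * rng.standard_normal(n))
    return th.var() * (v_sig + e_sig**2)

# Example: the language-modelling setting gamma_x = gamma_h = 2, gamma_c = 1.
print(f"Var[c] -> {fixed_point_var_c(2.0, 2.0):.3f}, "
      f"Var[h] -> {var_h(2.0, 2.0, 1.0):.3f}")
```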
Note that the reparametrization of equation 15 is identical to Weight Normalization (Weight Norm) (Salimans & Kingma, 2016). The main difference comes from equation 17, where we compensate for the variance of $\mathbf{c}_t$, the tanh and $\sigma(\tilde{\mathbf{o}}_t)$, which ensures a normalized propagation. Overall, this reparametrization is equivalent in spirit to the BN-LSTM, but it benefits from the same advantages that Norm Prop has over BN: there is no dependence on the mini-batch size, and the computation is the same for training and inference. Also, the rescaling of the matrices $\mathbf{W}_x$ and $\mathbf{W}_h$ can be done before the recurrence, leading to a computation time closer to that of a vanilla LSTM.

3.3 WEIGHT INITIALIZATION

With such a reparametrization of the weight matrices, one might think that the scale of the weight initialization no longer matters for the learning process. This is indeed true for the forward and backward computations of the layer:

$$y_i = \frac{a \mathbf{W}_i}{||a \mathbf{W}_i||_2} \mathbf{x} = \frac{\mathbf{W}_i}{||\mathbf{W}_i||_2} \mathbf{x} \tag{18}$$
$$\frac{\partial y_i}{\partial \mathbf{x}} = \frac{a \mathbf{W}_i}{||a \mathbf{W}_i||_2} = \frac{\mathbf{W}_i}{||\mathbf{W}_i||_2} \tag{19}$$

and since the variance of both the forward and backward passes is fixed, using an initialization scheme such as Glorot (Glorot & Bengio, 2010) does not make sense with Norm Prop. However, the update of the parameters is affected by their scale:

$$\frac{\partial y_i}{\partial W_{ij}} \frac{\partial \mathcal{L}}{\partial y_i} = \frac{1}{||\mathbf{W}_i||_2}\left[x_j - y_i \frac{W_{ij}}{||\mathbf{W}_i||_2}\right] \frac{\partial \mathcal{L}}{\partial y_i} \tag{20}$$

The scale of the parameters affects the effective learning rate of the layer: the bigger the weights, the smaller the update. This induces a regularization effect in Norm Prop that is also present in BN (Ioffe & Szegedy, 2015). However, it can also be an issue for such a parametrization: different initializations lead to different learning rates, even with adaptive step rules such as Adam (Kingma & Ba, 2014). Moreover, the parameters that are not normalized (such as $\gamma$ and $\mathbf{b}$) are not affected by this effect and are therefore not regularized. This is why forcing the rows of the weight matrices to have unit L2 norm, as proposed in Arpit et al. (2016), helps the training procedure.

To still benefit from the reduction of the learning rate, which is known to ease optimization (Vogl et al., 1988), we propose to simply force the rows of the matrices to have unit L2 norm and to combine this with a global learning-rate decay schedule.
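A minimal sketch of this initialization scheme, assuming the W @ x convention so that each row corresponds to one output unit; orthogonal initialization is what the experiments use, and the helper names are mine.

```python
import numpy as np

def orthogonal(shape, rng):
    """Orthogonal initialization via SVD of a Gaussian matrix."""
    a = rng.standard_normal(shape)
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u if u.shape == shape else vt

def row_normalize(w):
    """Force each row to unit L2 norm, so every row (output unit) starts
    with the same effective learning rate (see equation 20)."""
    return w / np.linalg.norm(w, axis=1, keepdims=True)

# Example: hidden-to-hidden matrix of a 256-unit LSTM (4 gate blocks).
rng = np.random.default_rng(0)
Wh = row_normalize(orthogonal((4 * 256, 256), rng))
assert np.allclose(np.linalg.norm(Wh, axis=1), 1.0)
```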
4 GRADIENT PROPAGATION IN THE NORMALIZED LSTM

In this section we study the gradient flow in the Normalized LSTM. Since this reparametrization is similar to the BN-LSTM, the analysis done here can be transposed to the BN-LSTM case.

4.1 THE EXPLODING AND VANISHING GRADIENTS PROBLEM

Given an input sequence $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_T)$, we consider a recurrent network, parametrized by $\theta$, that defines a sequence of hidden states $\mathbf{h}_t = f_\theta(\mathbf{h}_{t-1}, \mathbf{x}_t)$, and a cost function $\mathcal{L}$ that evaluates the model's performance on a given task. Such a network is usually trained using backpropagation through time, where backpropagation is applied to the time-unrolled model. The chain rule can be applied in order to compute the derivative of the loss $\mathcal{L}$ with respect to the parameters $\theta$:

$$\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{1 \le t \le T} \frac{\partial \mathcal{L}_t}{\partial \theta} = \sum_{1 \le t \le T} \sum_{1 \le k \le t} \frac{\partial \mathcal{L}_t}{\partial \mathbf{h}_t} \frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_k} \frac{\partial \mathbf{h}_k}{\partial \theta} \tag{21}$$

The factors $\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_k} = \prod_{k < l \le t} \frac{\partial \mathbf{h}_l}{\partial \mathbf{h}_{l-1}}$ transport the error "in time" from step $t$ back to step $k$ and are also the cause of vanishing or exploding gradients in RNNs (Pascanu et al., 2012). Indeed, if the Jacobian $\frac{\partial \mathbf{h}_l}{\partial \mathbf{h}_{l-1}}$ has singular values different from 1, the factor $\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_k}$, which is a product of $t - k$ Jacobian matrices, will either explode or vanish.

4.2 GRADIENT OF THE NORMALIZED LSTM

To study the gradient propagation of the Normalized LSTM, we first need to derive it. Using equations 15-17 and writing $\mathbf{h}_t = \sigma(\tilde{\mathbf{o}}_t) \odot \mathbf{a}_t$ with

$$\mathbf{a}_t = \frac{1}{\sqrt{\mathrm{Var}[\mathbf{h}_t]}} \tanh\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right) \tag{22}$$

the gradient of $\mathbf{h}_t$ with respect to $\mathbf{h}_{t-1}$ is

$$\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \mathbf{o}_t}{\partial \mathbf{h}_{t-1}} \mathbf{a}_t + \mathbf{o}_t \frac{\partial \mathbf{a}_t}{\partial \mathbf{h}_{t-1}}, \qquad \frac{\partial \mathbf{c}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \mathbf{i}_t}{\partial \mathbf{h}_{t-1}} \mathbf{g}_t + \mathbf{i}_t \frac{\partial \mathbf{g}_t}{\partial \mathbf{h}_{t-1}} + \frac{\partial \mathbf{f}_t}{\partial \mathbf{h}_{t-1}} \mathbf{c}_{t-1} \tag{23}$$

As we can see in equation 23, with the normalization the gradient depends not only on the derivatives of the cell candidate, the gates and the output tanh, but also on the variances of $\mathbf{h}_t$ and $\mathbf{c}_t$.

If we assume that $\mathbf{h}_{t-1}$ and $\mathbf{x}_t$ are independent, we can compute the variance of $\mathbf{c}_t$. Neglecting the weight matrices and the effect of the gates, we can write from equations 8 and 14

$$\mathrm{Var}[\mathbf{c}_t] \approx \mathrm{Var}[\mathbf{g}_t] = \mathrm{Var}[\tanh(z)], \quad z \sim \mathcal{N}(0, \gamma_x^2 + \gamma_h^2) \tag{24}$$
$$\mathrm{Var}[\mathbf{h}_t] \approx \mathrm{Var}[\tanh(z)], \quad z \sim \mathcal{N}(0, \gamma_c^2(\gamma_x^2 + \gamma_h^2)) \tag{25}$$

In both cases, the variance depends explicitly on the values of the different $\gamma$: the bigger the $\gamma$, the higher the variance. Neglecting the weight matrices again, we can now write the derivatives of the cell candidate $\mathbf{g}_t$ and the gates $\mathbf{i}_t$, $\mathbf{o}_t$ and $\mathbf{f}_t$ with respect to $\mathbf{h}_{t-1}$:

$$\frac{\partial \mathbf{g}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \tanh(\tilde{\mathbf{g}}_t)}{\partial \tilde{\mathbf{g}}_t} \frac{\partial \tilde{\mathbf{g}}_t}{\partial \mathbf{h}_{t-1}} = \left[1 - \tanh^2(\gamma_x \mathbf{x}_t + \gamma_h \mathbf{h}_{t-1})\right] \gamma_h \tag{26}$$
$$\frac{\partial \mathbf{i}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \sigma(\tilde{\mathbf{i}}_t)}{\partial \tilde{\mathbf{i}}_t} \frac{\partial \tilde{\mathbf{i}}_t}{\partial \mathbf{h}_{t-1}} = \sigma(\gamma_x \mathbf{x}_t + \gamma_h \mathbf{h}_{t-1})\left[1 - \sigma(\gamma_x \mathbf{x}_t + \gamma_h \mathbf{h}_{t-1})\right] \gamma_h \tag{27}$$

The gradients of $\mathbf{o}_t$ and $\mathbf{f}_t$ can be computed similarly. The effect of the $\gamma$ here is twofold: they appear in the activation function, where they control the saturation regime, and $\gamma_h$ also appears as a multiplicative term in the gradient. They should therefore be small enough to prevent the activations from saturating too much, but at the same time $\gamma_h$ cannot be too small, because that can also make the gradients vanish. Putting it all together, we have

$$\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_{t-1}} = \gamma_h \frac{\partial \mathbf{o}_t}{\partial \tilde{\mathbf{o}}_t} \mathbf{a}_t + \mathbf{o}_t \frac{\partial \mathbf{a}_t}{\partial \tilde{\mathbf{a}}_t} \frac{\gamma_c \gamma_h}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}} \left[\frac{\partial \mathbf{i}_t}{\partial \tilde{\mathbf{i}}_t} \mathbf{g}_t + \mathbf{i}_t \frac{\partial \mathbf{g}_t}{\partial \tilde{\mathbf{g}}_t} + \frac{\partial \mathbf{f}_t}{\partial \tilde{\mathbf{f}}_t} \mathbf{c}_{t-1}\right] \tag{28}$$

In this equation we can see that the different $\gamma$ directly scale the gradient, and that they also control the saturation of the activation functions. A bad initialization of $\gamma$ could thus lead to saturation or explosion regimes. Figure 1 shows the norm of the gradient with respect to $\gamma_x$ and $\gamma_h$ in a simulated LSTM. As we can see, one important quantity is the ratio between $\gamma_h$ and $\gamma_x$: it controls most of the propagation of the gradients. If $\gamma_x > \gamma_h$, the network will focus more on the input, and so the gradients will tend to vanish more. On the other hand, if $\gamma_h > \gamma_x$, the network will have fewer vanishing gradients, but will focus less on its inputs.

[Figure 1: Norm of the gradients for one time step in an LSTM with respect to $\gamma_x$ and $\gamma_h$ (simulation). Left: $\gamma_c = 0.1$. Right: $\gamma_c = 1.0$.]
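A rough sketch of the kind of simulation behind Figure 1, under the paper's simplifying assumptions (x_t, h_{t-1}, c_{t-1} ~ N(0,1), weight matrices neglected, all gates sharing one pre-activation); the grid values and the averaging of per-unit gradient magnitudes are my choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_norm_one_step(gx, gh, gc, n=100_000, seed=0):
    """Average per-unit |dh_t/dh_{t-1}| from equation 28, neglecting the
    weight matrices and sampling x_t, h_{t-1}, c_{t-1} ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x, h, c_prev = (rng.standard_normal(n) for _ in range(3))
    z = gx * x + gh * h                       # shared pre-activation, eqs. 26-27
    gate, g = sigmoid(z), np.tanh(z)          # i = f = o = gate in this approx.
    var_c = np.tanh(np.sqrt(gx**2 + gh**2) * rng.standard_normal(n)).var()  # eq. 24
    a_pre = gc * c_prev / np.sqrt(var_c)
    var_h = np.tanh(a_pre).var() * (gate.var() + gate.mean() ** 2)          # eq. 14
    a = np.tanh(a_pre) / np.sqrt(var_h)       # eq. 22
    d_gate, d_g = gate * (1.0 - gate), 1.0 - g**2
    d_a = (1.0 - np.tanh(a_pre) ** 2) / np.sqrt(var_h)
    dh = (gh * d_gate * a                     # output-gate path of eq. 28
          + gate * d_a * (gc * gh / np.sqrt(var_c))
          * (d_gate * g + gate * d_g + d_gate * c_prev))
    return float(np.abs(dh).mean())

for gx, gh in [(2.0, 0.5), (1.0, 1.0), (0.5, 2.0)]:
    print(gx, gh, round(grad_norm_one_step(gx, gh, gc=1.0), 3))
```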
5 EXPERIMENTS

5.1 CHARACTER-LEVEL LANGUAGE MODELLING

The first task we explore is character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993). The goal is to predict the next character of the sequence given the previous ones. We use the same splits as Mikolov et al. (2012) and the same training procedure as Cooijmans et al. (2016), i.e. we train on sequences of length 100 with random starting points. The model is a 1000-unit LSTM followed by a softmax classifier. We use orthogonal initialization for the weight matrices. Because Norm Prop requires normalized inputs, we multiply the one-hot input vectors by an untrained but fixed orthogonal matrix. This trick helps the optimization not only of Norm Prop but of all the other variants as well.

To compare the convergence properties of Norm Prop against LN and BN, we first ran experiments using Adam (Kingma & Ba, 2014) with learning rate 2e-3, exponential decay of 1e-3 and gradient clipping at 1.0. As explained in section 3.3, we rescale the matrices so that their rows have unit norm. For Norm Prop we use $\gamma_x = \gamma_h = 2$ and $\gamma_c = 1$, for LN all $\gamma = 1.0$, and for BN all $\gamma = 0.1$. The results are presented in Table 1 and Figure 2.

Table 1: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank validation set, and training time (seconds) per epoch.

Model        Validation  Time
Baseline     1.455       386
Weight Norm  1.438       402
Batch Norm   1.433       545
Layer Norm   1.439       530
Norm Prop    1.422       413

To show the potential of Norm Prop against other state-of-the-art systems, we followed Ha et al. (2016) and applied dropout on both the input and output layers (p = 0.1) and recurrent dropout inside the LSTM (p = 0.1). We also used the Batch Data Normalization scheme presented by Arpit et al. (2016), so we standardize each input example using the mini-batch statistics and use population statistics at inference time. Finally, we also reduce the learning-rate decay to 1e-4, to compensate for the fact that a network with dropout needs more time to train. The results are presented in Table 2.

As we can see in Figure 2 and Table 1, Norm Prop compares very well against the other reparametrizations. Norm Prop is also roughly 30% faster computationally than BN and LN (measured on an NVIDIA GTX 750 GPU). LN shows better optimization performance, but also overfits more. We also see that both optimization and generalization are better than with Weight Norm, which shows the importance of compensating for the variances of $\mathbf{c}_t$ and $\mathbf{h}_t$.

[Figure 2: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank corpus. The dashed lines are the training curves, and the solid ones are the validation curves.]

Table 2: Perplexity (bits-per-character) on the full Penn Treebank test sequence.

Model                                             Test
Recurrent Dropout LSTM (Semeniuta et al., 2016)   1.301
Zoneout LSTM (Krueger et al., 2016)               1.27
Layer Norm LSTM (Ha et al., 2016)                 1.267
HyperLSTM (Ha et al., 2016)                       1.265
Norm Prop LSTM (ours)                             1.262
Layer Norm HyperLSTM (Ha et al., 2016)            1.250

Moreover, although Norm Prop does not combine well with dropout in feed-forward networks (Arpit et al., 2016), it works well with recurrent dropout, as we can see in Table 2. We believe this is because recurrent dropout affects the output distribution less than dropout does in feed-forward networks: the variable is copied from the previous time step instead of being set to 0. With such regularization, Norm Prop compares well with other state-of-the-art approaches.
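As an aside on why recurrent dropout composes well with this parametrization: in the variant of Semeniuta et al. (2016) that the text alludes to, only the candidate contribution is dropped, so a dropped unit carries its previous cell value forward rather than being zeroed. A minimal sketch of that cell update, under the assumption that this is the variant used:

```python
import numpy as np

def cell_update_with_recurrent_dropout(i, f, g, c_prev, p, rng):
    """c_t = f*c_prev + i*drop(tanh(g)): where the mask zeroes the candidate,
    the cell simply keeps carrying c_prev, so its distribution changes far
    less than under standard dropout. (i and f are post-sigmoid gates.)"""
    mask = (rng.random(g.shape) >= p) / (1.0 - p)  # inverted dropout scaling
    return f * c_prev + i * (np.tanh(g) * mask)

rng = np.random.default_rng(0)
i = f = np.full(5, 0.5)
g, c_prev = rng.standard_normal(5), rng.standard_normal(5)
print(cell_update_with_recurrent_dropout(i, f, g, c_prev, p=0.1, rng=rng))
```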
5.2 DRAW

The second task we explore is a generative modelling task on binarized MNIST (Larochelle & Murray, 2011) using the Deep Recurrent Attentive Writer (DRAW) architecture (Gregor et al., 2015). DRAW is a variational auto-encoder in which both the encoder and the decoder are LSTMs, with two attention mechanisms to select where to read and where to write.

We use Jörg Bornschein's implementation (https://github.com/jbornschein/draw) with the same hyper-parameters as Gregor et al. (2015): the read and write sizes are 2x2 and 5x5 respectively, the number of glimpses is 64, the LSTMs have 256 units, and the dimension of z is 100. We use Adam with a learning rate of 1e-2, exponential decay of 1e-3 and a mini-batch size of 128. We use orthogonal initialization and force the rows of the matrices to have unit norm. For Norm Prop, we use $\gamma_x = \gamma_h = \gamma_c = 0.5$. The test variational bound for the first 100 epochs is presented in Figure 3.

As we can see in Figure 3, both Weight Norm and Norm Prop outperform the baseline network by a significant margin. Also, as expected, Norm Prop performs better than Weight Norm, showing once again the importance of compensating for the variances of $\mathbf{c}_t$ and $\mathbf{h}_t$. Table 3 shows the test variational bound after 200 epochs of training. Norm Prop also compares favorably against LN.

[Figure 3: Test negative log-likelihood on binarized MNIST.]

Table 3: Test variational log-likelihood (nats) after 200 epochs of training.

Model                         DRAW
Baseline (ours)               84.30
Layer Norm (Ba et al., 2016)  82.09
Weight Norm (ours)            81.98
Norm Prop (ours)              81.17

6 CONCLUSION

Building on the BN-LSTM, we have shown how to construct a Normalized LSTM that preserves the variance of its output at each time step by compensating for the variances of the cell and the hidden state. Such an LSTM can be seen as the Norm Prop version of the BN-LSTM, and thus benefits from the same advantages that Norm Prop has over BN, while being much faster to compute. We also propose a scheme to initialize the weight matrices that takes the reparametrization into account. Moreover, we have derived the gradients of this LSTM and pointed out the importance of the initialization of the rescaling parameters. We have validated the performance of the Normalized LSTM on two different tasks, showing performance similar to BN-LSTM and LN-LSTM while being significantly faster in computation time. Also, unlike in the feed-forward case, this architecture works well with recurrent dropout, leading to close to state-of-the-art performance on the character-level language modelling task.

Future work includes trying this architecture on more challenging tasks and studying the impact of not keeping the variance estimates of the cell and hidden states fixed during the learning process.

ACKNOWLEDGMENTS

Part of this work was funded by Samsung. We used Theano (Theano Development Team, 2016), Blocks and Fuel (van Merriënboer et al., 2015) for our experiments. We also want to thank Caglar Gulcehre and Tim Cooijmans for the discussions, and Jörg Bornschein for his DRAW implementation.
SyeMVA-4x
6: Marginally above acceptance threshold
The paper proposes an extension of weight normalization / normalization propagation to recurrent neural networks. Simple experiments suggest it works well. The contribution is potentially useful to a lot of people, as LSTMs are one of the basic building blocks in our field. The contribution is not extremely novel: the change with respect to weight normalization is minor. The experiments are also not very convincing: Layer Normalization is reported to have higher test error, as it overfits on their example, but in terms of optimization it seems to work better. Also, the authors don't seem to use the data-dependent parameter initialization for weight normalization as proposed in that paper.
3: The reviewer is fairly confident that the evaluation is correct
r1GKzP5xx
ICLR.cc/2017/conference
2017
Recurrent Normalization Propagation
["C\u00e9sar Laurent", "Nicolas Ballas", "Pascal Vincent"]
We propose a LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step, therefore, requiring fewer computations overall. We also investigate the parametrization impact on the gradient flows and present a way of initializing the weights accordingly. We evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly or better than other recurrent normalization approaches, while being faster to execute.
["Deep learning", "Optimization"]
ABSTRACTWe propose an LSTM parametrization that preserves the means and variances ofthe hidden states and memory cells across time. While having training benefitssimilar to Recurrent Batch Normalization and Layer Normalization, it does notneed to estimate statistics at each time step, therefore, requiring fewer computa-tions overall. We also investigate the parametrization impact on the gradient flowsand present a way of initializing the weights accordingly.We evaluate our proposal on language modelling and image generative modellingtasks. We empirically show that it performs similarly or better than other recurrentnormalization approaches, while being faster to execute.1 I NTRODUCTIONRecurrent neural network have shown remarkably good performances for sequential modelling tasksincluding machine translation (Bahdanau et al., 2015), visual captioning (Xu et al., 2015; Yao et al.,2015) or question answering (Hermann et al., 2015). However, such models remain notoriouslyhard to train with gradient backpropagation. As the number of time steps in the input sequenceincreases, the contractive or expanding effects associated with the state-to-state transformation ateach time step can shrink or grow exponentially, leading respectively to vanishing or explodinggradients (Hochreiter, 1991; Bengio et al., 1994; Pascanu et al., 2012). In particular, with gradi-ent vanishing, states at a given time are not influenced by changes happening much earlier in thesequence, preventing the model from learning long-term dependencies.While the long-term dependencies problem is unsolvable in absolute (Hochreiter, 1991; Bengioet al., 1994), different RNN parameterizations, such as LSTM or GRU (Hochreiter & Schmidhuber,1997; Cho et al., 2014) can help mitigate it. Furthermore, the LSTM parametrization has beenrecently extended to include layer-wise normalization (Cooijmans et al., 2016; Ba et al., 2016),building upon Batch Normalization (BN) (Ioffe & Szegedy, 2015). By normalizing the hidden statedistributions to a fix scale and shift through the different time steps, normalized LSTMs have beenshown to ease training, resulting in a parametrization that converges faster than a standard LSTM.However, normalized LSTM introduces extra-computations as it involves standardizing the hiddenstates, enforcing their means and variances at each time step. By contrast, we propose an LSTMreparametrization that allows by construction to cheaply preserve the normalization of the hiddenstates through time. Our approach can be seen as the recurrent counterpart to the recent normal-ization propagation applied in feed-forward network (Arpit et al., 2016). It results in faster trainingconvergence similar to Layer Normalization (LN) and Recurrent Batch Normalization while requir-ing fewer operations per time step and generalizing naturally to variable length sequences.In addition, we investigate the impact of our parametrization, and more generally of normalizedLSTM, on the vanishing and exploding gradient problems. 
We observe that layer-wise normalizationprovides a direct way to orient LSTM behaviour toward either gradient explosion or vanishing, andtherefore biases the LSTM either towards reliably storing bits of information throughout time orallowing it to be more sensitive to new input changes.Associate Fellow, Canadian Institute For Advanced Research (CIFAR)1Under review as a conference paper at ICLR 2017We empirically validate our proposal on character-level language modelling on the Penn Treebankcorpus (Marcus et al., 1993) and on image generative modelling, applying our normalisation to theDRAW architecture (Gregor et al., 2015).The paper is structured as follows: section 2 provides a brief overview of the Batch-NormalizedLSTM, in section 3 we derive our Normalized LSTM, section 4 investigates the impact of suchnormalization on the gradient flow, section 5 presents some experimental results, and we concludein section 5.2 P RE-REQUISITES2.1 BN-LSTMBatch-Normalized Long Short-Term Memory (BN-LSTM) (Cooijmans et al., 2016) is areparametrization of LSTM that takes advantage of Batch Normalization (BN) to address the Co-variate Shift (Shimodaira, 2000) occurring between time steps. Changes in the LSTM output at onetime-step are likely to cause correlated changes in the summed inputs of the sequence next time-steps. This Temporal Covariate Shift can slow down the training process as the parameters of themodel must not only be updated to minimize the cost of the task at hand but also adapt to the chang-ing distribution of the inputs. In other words, the latter time steps in an LSTM need to account forthe shifting distribution of the previous hidden states.BN-LSTM proposes to reduce this temporal covariate shift by fixing the mean and the variance ateach time step, relying on the BN transformBN(x;;) =xbE[x]qdVar[x] ++ (1)wherebE[x];dVar[x]are the activation mean and variance estimated from the mini-batch samples.Given an input sequence X= (x1;x2;:::; xT), the BN-LSTM defines a sequence of hidden stateshtand memory cell states ctaccording to0BB@~it~ft~ot~gt1CCA= BN( Wxxt;x;x) + BN( Whht1;h;h) +b (2)ct=(~it)tanh( ~gt) +(~ft)ct1 (3)ht=(~ot)tanh(BN( ct;c;c)); (4)where Wh2Rdh4dh;Wx2Rdx4dh;b2R4dhand the initial states h02Rdh;c02Rdharemodel parameters. is the logistic sigmoid function, and denotes the Hadamard product. Ba et al.(2016) latter extended this parametrization by estimating the normalizing statistics (bE[x];dVar[x])using the different feature channels rather than mini-batch samples in order to naturally generalizeto variable length sequences.2.2 N ORMALIZATION PROPAGATIONWhile increasing the training convergence speed relatively to a standard LSTM (Cooijmans et al.,2016), BN-LSTM needs to perform more computations per sample as it requires to compute 3x theBN transform at each time step.On the other hand, Normalization Propagation (Norm Prop) (Arpit et al., 2016) aims at preserve thenormalization of the input throughout the network. Unlike BN, the normalization doesn’t rely onthe statistics of the mini-batch. Instead, it is the structure of the network itself that maintains thenormalization. We therefore propose an LSTM reparametrization that preserves the normalizationthrough the different time steps in order to avoid those extra computation.2Under review as a conference paper at ICLR 20173 N ORMALIZED LSTMWhile Norm Prop properties are appealing for recurrent models, its application to LSTM is notstraightforward due to the memory cell structure. 
In this section we show how to derive a LSTMreparametrization that preserves normalization of the state htthrough time.3.1 C ONSTRUCTION OF THE NORMALIZED LSTMFollowing (Arpit et al., 2016; Salimans & Kingma, 2016), we will attempt to ensure, through ananalytical reparametrization, that several intermediate quantities in the computation remain approx-imately standardized. We first compensate for the distribution changes induced by the weight matri-ces in the gates and cell candidate gtcomputations0BB@~it~ft~ot~gt1CCA=xWxjjWx;ijj2xt+hWhjjWh;ijj2ht1+b: (5)wherejjW;ijj2is the vector of L2-norm of each line of the matrix and xandhare the trainablerescaling factors that restore the representation power lost in the rescaling of the weight matrices.To preserve the constant error carousel mechanism of the LSTM, we use the usual cell update,ct=(~it)tanh( ~gt) +(~ft)ct1 (6)Let us now construct an approximate analytical estimate of Var(ct). The evolution of ctthroughtime can bee seen as a geometric series, with (~ft)as constant ratio. Since ()is upper-bounded by(and in practice smaller than) 1, ctwill converge in expectation to a fixed value. This is the reasonwhy in BN-LSTM the mini-batch statistics converge to a fixed value after a few time steps (Cooij-mans et al., 2016). Moreover, if we consider that ~it;~ft;~gtandct1are (as a rough approximation)independent1, we can use the variance product rule of two independent random variables XandYVar[XY] = Var[X] Var[Y] + Var[X]E[Y]2+ Var[Y]E[X]2(7)to compute Var[ct]. Considering that E[tanh( ~gt)]0and assuming that the cell has converged i.e.Var[ct] = Var[ ct1], we haveVar[ct] = Var[tanh( ~gt)]Var[(~it)] +E[(~it)]21Var[(~ft)]E[(~ft)]2(8)We can therefore analytically or numerically compute the mean and variance of each of those ele-ments, assuming that both input xtand hidden state ht1are independent drawn from N(0;1)E[it] =E[(xzx+hzh)] (9)Var[it] = Var[(xzx+hzh)] (10)E[gt] =E[tanh(xzx+hzh)] (11)Var[gt] = Var[tanh( xzx+hzh)] (12)wherezx;zhN(0;1). The statistics of the gates otandftcan be computed in a similar way. Wecan then compute the value to which Var[ct]converges. Using this variance estimate, we compen-satectin order to compute the next hidden state htht=(~ot)tanh cctpVar[ct]!(13)Since we assumed that Var[ht1] = 1 , to ensure that we need to correct for the variance induced bythe product of the tanh with the output gate. Using again the variance product rule (equation 7) weobtainVar[ht] = Var"tanh cctpVar[ct]!#(Var[(~ot)] +E[(~ot)]2) (14)We can estimate this variance through similar computation than equation 12. Scaling htwith1=pVar[ht]ensure that its variance is 1 and so the propagation is maintained throughout the re-currence.1This assumption is strong, but we don’t have any easy way to model the covariance between those termswithout estimating it from the data.3Under review as a conference paper at ICLR 20173.2 P ROPOSED REPARAMETRIZATIONUsing equations 5, 6 and 13, we propose the following reparametrization of the LSTM, simply calledtheNormalized LSTM0BB@~it~ft~ot~gt1CCA=xWxjjWx;ijj2xt+hWhjjWh;ijj2ht1+b (15)ct=(~it)tanh( ~gt) +(~ft)ct1 (16)ht=1pVar[ht]"(~ot)tanh cctpVar[ct]!#(17)where Var[ct]andVar[ht]are computed using equations 8 and 14, respectively. Those two vari-ances are estimated at the initialization of the network (eq. 10 to eq. 12), and are then kept fixedduring the training as in Norp Prop. 
x,handcare parameters learned via gradient descent.Note that the reparametrization of equation 15 is identical to Weight Normalization (Weight Norm)(Salimans & Kingma, 2016). The main difference comes from equation 17, where we compensatefor the variance of ct, the tanh and(~ot), which ensures a normalized propagation. Overall, thisreparametrization is equivalent in spirit to the BN-LSTM, but it benefits from the same advantagesthat Norm Prop has over BN: There is no dependence on the mini-batch size and the computation isthe same for training and inference. Also, the rescaling of the matrices WxandWhcan be donebefore the recurrence, leading to computation time closer to a vanilla LSTM.3.3 W EIGHTS INITIALIZATIONWith such reparametrization of the weight matrices, one can think that the scale of the initializationof the weights doesn’t matter in the learning process anymore. It is actually true for the forward andbackward computation of the layeryi=aWijjaWijj2x=WijjWijj2x (18)@yi@x=aWijjaWijj2=WijjWijj2(19)and since the variance of both forward and backward passes is fixed, using an initialization schemesuch as Glorot (Glorot & Bengio, 2010) doesn’t make sense with Norm Prop. However, the updateof the parameters is affected by their scale:@yi@Wij@L@yi=1jjWijj2xjyiWijjjWijj2@L@yi(20)The scale of the parameters affect the learning rate of the layer: the bigger the weights, the smallerthe update. This induces a regularization effect in Norm Prop that is also present in BN (Ioffe& Szegedy, 2015). However, this could possibly be an issue for such parametrization: differentinitializations lead to different learning rates, and it is true even with adaptive step rules, such asAdam (Kingma & Ba, 2014). Moreover, the parameters that are not normalized (such as andb)aren’t affected by this effect, and so they are not regularized. This is the reason why forcing theweight matrices to have a unit L2 norm of the lines, as proposed in Arpit et al. (2016), helps thetraining procedure.To still benefit from the reduction of the learning rate, which is know to ease the optimization (V oglet al., 1988), we propose to simply force the unit L2 norm of the lines of the matrices and combineit with a global learning rate decay schedule.4 G RADIENT PROPAGATION IN NORMALIZED LSTMIn this section we study the gradient flow in the Normalized LSTM. Since this reparametrization issimilar to the BN-LSTM, the analysis we do here can be transposed to the BN-LSTM case.4Under review as a conference paper at ICLR 20174.1 T HEEXPLODING AND VANISHING GRADIENTS PROBLEMGiven an input sequence X= (x1;x2;:::; xT), we consider a recurrent network, parametrized by, that defines a sequence of hidden states ht=f(ht1;xt)and cost functionLwhich evaluatesthe model performance on a given task. Such network is usually trained using backpropagationthrough time, where the backpropagation is applied on the time-unrolled model. The chain rule canbe applied in order to compute the derivative of the loss Lwith respect to parameters .@L@=X1tT@Lt@=X1tTX1kt@Lt@hk@hk@ht@ht@: (21)The factors@hk@ht=Qklt@hl@hl1transports the error “in time” from step tback to step kand arealso the cause of vanishing or exploding gradient in RNN (Pascanu et al., 2012). Indeed, if theJacobian@hl@hl1has singular value different from 1, the factor@hk@ht, which is a product of tkJacobian matrices will either explode or vanish.4.2 G RADIENT OF THE NORMALIZED LSTMTo study the gradient propagation of the Normalized LSTM, we first need to derive it. 
Using equa-tion 15-17, we can write the gradient of htwith respect to ht1at=1pVar[ht]tanh cctpVar[ct]!(22)@ht@ht1=@ot@ht1at+ot@at@ht1@it@ht1gt+it@gt@ht1+@ft@ht1ct1(23)As we can see in equation 23 with the normalization, the gradient depends not only on the derivativeof the cell candidate, the gates and the output tanh, but also on on the variance of htandct.If we assume that ht1andxtare independent, we can compute the variance of ct. Neglecting theweight matrices and the effect of the gates, we can write from equations 8 and 14Var[ct]Var[gt] = Var[tanh( z)]; zN(0;2x+2h) (24)Var[ht] = Var[tanh( z)]; zN(0;2c(2x+2h)) (25)In both cases, the variance depends explicitly on the value of the different : The bigger the , thehigher the variance. Neglecting again the weight matrices, we can now write the equations of thecell candidates gtand the gates it;otandftwith respect to ht1@gt@ht1=@tanh( ~gt)@~gt@~gt@ht1=1tanh(xxt+hht1)2h (26)@it@ht1=@(~it)@~it@~it@ht1=(xxt+hht1)(1(xxt+hht1))h (27)The gradients of otandftcan be computed similarly. The effect of the here is double: They appearboth in the activation function, where they control the saturation regime, and halso appears as amultiplicative term in the gradient. They should therefore be small enough to prevent the activationfrom saturating too much, but at the same time hcan’t be too small, because it can also make thegradients vanish. Putting it all together, we have@ht@ht1=@ot@~othat+ot@at@~atcpVar[ct]h@it@~itgt+it@gt@~gt+@ft@~ftct1(28)In this equations we can see that the different directly scale the gradient, and they also controlthe saturation of the activation functions. Bad initialization of could thus lead to saturation orexplosion regimes. Figure 1 shows the norm of the gradient with respect to xandhin a simulatedLSTM. As we can see, one important parameter is the ratio between handx: They control mostof the propagation of the gradients. If x>h, the network will focus more on the input and so thegradients will tend to vanish more. On the other hand, if h> x, the network will tend have lessvanishing gradients, but will focus less on its inputs.5Under review as a conference paper at ICLR 20170.1 0.3 0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1gamma h0.10.30.50.70.91.11.31.51.71.92.1gamma x||dht/dht-1|| gamma c=0.10.1 0.3 0.5 0.7 0.9 1.1 1.3 1.5 1.7 1.9 2.1gamma h0.10.30.50.70.91.11.31.51.71.92.1gamma x||dht/dht-1|| gamma c=1.00.61.21.82.43.03.64.24.85.4Figure 1: Norm of the gradients for one time step in an LSTM with respect to xandh(simulation).Left:c= 0:1. Right:c= 1:0.5 E XPERIMENTS5.1 C HARACTER -LEVEL LANGUAGE MODELLINGThe first task we explore is character-level language modelling on the Penn Treebank corpus (Marcuset al., 1993). The goal is to predict the next character of the sequence given the previous ones. Weuse the same splits as Mikolov et al. (2012) and the same training procedure as Cooijmans et al.(2016), i.e. we train on sequences of length 100, with random starting point. The model is a1000 units LSTM followed by a Softmax classifier. We use orthogonal initialization for the weightmatrices. Because Norm Prop requires normalized inputs, we multiply the one-hot inputs vectorwith an untrained but fixed orthogonal matrix. This tricks does not only help the optimization ofNorm Prop, but also all other variants.To compare the convergence properties of Norm Prop against LN and BN, we first ran experimentsusing Adam (Kingma & Ba, 2014) with learning rate 2e-3, exponential decay of 1e-3 and gradientclipping at 1.0. 
As explained in section 3.3, we rescale the matrices such that they have a unit normon the lines. For Norm Prop, we use x=h= 2 andc= 1, for LN all the = 1:0and for BNall the= 0:1. The results are presented in Table 1 and in Figure 2.Model Validation TimeBaseline 1.455 386Weight Norm 1.438 402Batch Norm 1.433 545Layer Norm 1.439 530Norm Prop 1.422 413Table 1: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank valida-tion set, and training time (seconds) per epoch.To show the potential of Norm Prop against other state-of-the-art system, we followed Ha et al.(2016) and apply dropout on both the input and output layer ( p= 0:1) and recurrent dropout insidethe LSTM (p= 0:1). We also used the Batch Data Normalization scheme presented by Arpit et al.(2016), so we standardize each input example using the mini-batch statistics and use populationstatistics at inference time. Finally, we also reduce the learning rate decay to 1e-4, to compensatefor the fact that a network with dropout needs more time to train. The results are presented in Table 2.As we can see in Figure 2 and in Table 1, Norm Prop compares really well against the otherreparametrization. Also Norm Prop is roughly 30 % computationally faster2than BN and LN. LNshows better optimization performances, but also overfits more. We also see that both optimizationand generalization are better than the ones from Weight Norm, which shows the importance of com-pensating for the variance of ctandht. Moreover, although Norm Prop doesn’t combine well with2The GPU used is a NVIDIA GTX 750.6Under review as a conference paper at ICLR 20170 5 10 15 20 25Epochs1.21.31.41.51.61.7PerplexityCharacter-Level Language ModellingBaselineWeight NormBatch NormLayer NormNorm PropFigure 2: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank corpus.The dashed lines are the training curves, and the solid ones are the validation curves.Model TestRecurrent Dropout LSTM (Semeniuta et al., 2016) 1.301Zoneout LSTM (Krueger et al., 2016) 1.27Layer Norm LSTM (Ha et al., 2016) 1.267HyperLSTM (Ha et al., 2016) 1.265Norm Prop LSTM (ours) 1.262Layer Norm HyperLSTM (Ha et al., 2016) 1.250Table 2: Perplexity (bits-per-character) of the full Penn Treebank test sequence.dropout in feed-forward networks (Arpit et al., 2016), it works will with recurrent dropout, as we cansee in Table 2. We believe it is because recurrent dropout is less affecting its output distribution thandropout in feed forward networks, because we copy the variable at the previous time step insteadof setting it to 0. With such regularization, Norm Prop compares well with other state-of-the-artapproaches.5.2 DRAWThe second task we explore is a generative modelling task on binarized MNIST (Larochelle &Murray, 2011) using the Deep Recurrent Attentive Writer (DRAW) (Gregor et al., 2015) architecture.DRAW is a variational auto-encoder, where both encoder and decoder are LSTMs, and has twoattention mechanisms to select where to read and where to write.We use J ̈org Bornschein’s implementation3, with the same hyper-parameters as Gregor et al. (2015),ie the read and write size are 2x2 and 5x5 respectively, the number of glimpses is 64, the LSTMshave 256 units and the dimension of zis 100. We use Adam with learning rate of 1e-2, exponentialdecay of 1e-3 and mini-batch size of 128. We use orthogonal initialization and force the norm ofthe lines of the matrices to be 1. For Norm Prop, we use x=h=c= 0:5. 
The test variationalbound for the first 100 epochs is presented in Figure 3.As we can see in Figure 3, both Weight Norm and Norm Prop outperform the baseline network bya significant margin. Also, as expected, Norm Prop performs better than Weight Norm, showingone again the importance of the compensation of the variance of ctandht. Table 3 shows the testvariational bound after 200 epochs of training. Norm Prop also compares favorably against LN.3https://github.com/jbornschein/draw7Under review as a conference paper at ICLR 20170 20 40 60 80 100Epochs80859095100NLLDRAWBaselineNorm PropWeight NormFigure 3: Test negative log-likelihood on binarized MNIST.Model DRAWBaseline (ours) 84.30Layer Norm (Ba et al., 2016) 82.09Weight Norm (ours) 81.98Norm Prop (ours) 81.17Table 3: Test variational log likelihood (nats) after 200 epochs of training.6 C ONCLUSIONBased on the BN-LSTM, we have shown how to build a Normalized LSTM that is able to preservethe variance of its output at each time step, by compensating for the variance of the cell and thehidden state. Such LSTM can be seen as the Norm Prop version of the BN-LSTM, and thus benefitsfrom the same advantages that Norm Prop has over BN, while being way faster to compute. Also,we propose a scheme to initialize the weight matrices that takes into account the reparametrization.Moreover, we have derived the gradients of this LSTM and pointed out the importance of the initial-ization of the rescaling parameters. We have validated the performances of the Normalized LSTMon two different tasks, showing similar performances than BN-LSTM and LN-LSTM, while beingsignificantly faster in computation time. Also, unlike the feed-forward case, this architecture workswell with recurrent dropout, leading to close to state-of-the-art performances on the character-levellanguage modelling task.Future work includes trying this architecture on more challenging tasks and also studying the impactof not keeping the variance estimates of the cell and the hidden states fixed during the learningprocess.ACKNOWLEDGMENTSPart of this work was funded by Samsung. We used Theano (Theano Development Team, 2016),Blocks and Fuel (van Merri ̈enboer et al., 2015) for our experiments. We also want to thanks CaglarGulcehre and Tim Cooijmans for the talks and J ̈org Bornschein for his DRAW implementation.
BJg8pkV4g
incremental
6: Marginally above acceptance threshold
I think this builds upon previous work, in an attempt to do something similar to batch norm specifically for RNNs. To me the experiments are not yet very convincing: I think it is not clear whether this works better than e.g. Layer Norm, or at least not significantly so. I'm not convinced of how significant the speed-up is either; I can appreciate that it is faster, but it doesn't feel like orders of magnitude faster. The theoretical analysis also doesn't provide any new insights. All in all I think this is good incremental work, but maybe it is not yet significant enough for ICLR.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BJh6Ztuxl
ICLR.cc/2017/conference
2017
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
["Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg"]
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.
["Natural language processing", "Deep learning"]
ABSTRACT

There is a lot of research interest in encoding variable-length sentences into fixed-length vectors in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture.

We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low-level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.

1 INTRODUCTION

While sentence embeddings or sentence representations play a central role in recent deep learning approaches to NLP, little is known about the information that is captured by different sentence-embedding learning mechanisms. We propose a methodology facilitating fine-grained measurement of some of the information encoded in sentence embeddings, as well as fine-grained comparison of different sentence embedding methods.

In sentence embedding, sentences, which are variable-length sequences of discrete symbols, are encoded into fixed-length continuous vectors that are then used for further prediction tasks. A simple and common approach is to produce word-level vectors using, e.g., word2vec (Mikolov et al., 2013a;b), and to sum or average the vectors of the words participating in the sentence. This continuous-bag-of-words (CBOW) approach disregards the word order in the sentence. (We use the term CBOW to refer to a sentence representation composed of an average of the vectors of the words in the sentence, not to be confused with the training method of the same name used in the word2vec algorithm.)

Another approach is the encoder-decoder architecture, producing models also known as sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014, inter alia). In this architecture, an encoder network (e.g. an LSTM) is used to produce a vector representation of the sentence, which is then fed as input into a decoder network that uses it to perform some prediction task (e.g. recreate the sentence, or produce a translation of it). The encoder and decoder networks are trained jointly in order to perform the final task.

Some systems (for example in machine translation) train the system end-to-end and use the trained system for prediction (Bahdanau et al., 2014). Such systems do not generally care about the encoded vectors, which are used merely as intermediate values. However, another common case is to train an encoder-decoder network and then discard the decoder, using the trained encoder as a general mechanism for obtaining sentence representations.
For example, an encoder-decoder network can be trained as an auto-encoder, where the encoder creates a vector representation and the decoder attempts to recreate the original sentence (Li et al., 2015). Similarly, Kiros et al. (2015) train a network to encode a sentence such that the decoder can recreate its neighboring sentences in the text. Such networks do not require specially labeled data and can be trained on large amounts of unannotated text. As the decoder needs information about the sentence in order to perform well, it is clear that the encoded vectors capture a non-trivial amount of information about the sentence, making the encoder appealing to use as a general-purpose, stand-alone sentence encoding mechanism. The sentence encodings can then be used as input for other prediction tasks for which less training data is available (Dai & Le, 2015). In this work we focus on these "general purpose" sentence encodings.

The resulting sentence representations are opaque, and there is currently no good way of comparing different representations short of using them as input for different high-level semantic tasks (e.g. sentiment classification, entailment recognition, document retrieval, question answering, sentence similarity, etc.) and measuring how well they perform on these tasks. This is the approach taken by Li et al. (2015), Hill et al. (2016) and Kiros et al. (2015). This method of comparing sentence embeddings leaves a lot to be desired: the comparison is at a very coarse-grained level, does not tell us much about the kind of information that is encoded in the representation, and does not help us form generalizable conclusions.

Our Contribution. We take a first step towards opening the black box of vector embeddings for sentences. We propose a methodology that facilitates comparing sentence embeddings on a much finer-grained level, and demonstrate its use by analyzing and comparing different sentence representations. We analyze sentence representation methods that are based on LSTM auto-encoders and the simple CBOW representation produced by averaging word2vec word embeddings. For each of CBOW and the LSTM auto-encoder, we compare different numbers of dimensions, exploring the effect of the dimensionality on the resulting representation. We also provide some comparison to the skip-thought embeddings of Kiros et al. (2015).

In this work, we focus on what are arguably the three most basic characteristics of a sequence: its length, the items within it, and their order. We investigate different sentence representations based on the capacity to which they encode these aspects. Our analysis of these low-level properties leads to interesting, actionable insights, exposing relative strengths and weaknesses of the different representations.

Limitations. Focusing on low-level sentence properties also has limitations: the tasks measure the preservation of surface aspects of the sentence and do not measure syntactic and semantic generalization abilities; and the tasks are not directly related to any specific downstream application (although the properties we test are important factors in many tasks: knowing that a model is good at predicting length and word order is likely advantageous for syntactic parsing, while models that excel at word content are good for text classification tasks).
Dealing with these limitations requires a complementary set of auxiliary tasks, which is outside the scope of this study and is left for future work.

The study also suffers from the general limitations of empirical work: we do not prove general theorems but rather measure behaviors on several data points and attempt to draw conclusions from these measurements. There is always the risk that our conclusions only hold for the datasets on which we measured, and will not generalize. However, we do consider our large sample of sentences from Wikipedia to be representative of the English language, at least in terms of the three basic sentence properties that we study.

Summary of Findings Our analysis reveals the following insights regarding the different sentence embedding methods:

- Sentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contain can also be used to reconstruct a non-trivial amount of the original word order in a probabilistic manner (due to regularities in the natural language data).
- LSTM auto-encoders are very effective at encoding word order and word content.
- Increasing the number of dimensions benefits some tasks more than others.
- Adding more hidden units sometimes degrades the encoders' ability to encode word content. This degradation is not correlated with the BLEU scores of the decoder, suggesting that BLEU over the decoder output is sub-optimal for evaluating the encoders' quality.
- LSTM encoders trained as auto-encoders do not rely on ordering patterns in the training sentences when encoding novel sentences, while the skip-thought encoders do rely on such patterns.

2 RELATED WORK

Word-level distributed representations have been analyzed rather extensively, both empirically and theoretically, for example by Baroni et al. (2014), Levy & Goldberg (2014) and Levy et al. (2015). In contrast, the analysis of sentence-level representations has been much more limited. A commonly used approach is to either compare the performance of the sentence embeddings on downstream tasks (Hill et al., 2016), or to analyze models specifically trained for a predefined task (Schmaltz et al., 2016; Sutskever et al., 2011).

While the resulting analysis reveals differences in performance of different models, it does not adequately explain what kind of linguistic properties of the sentence they capture. Other studies analyze the hidden units learned by neural networks when training a sentence representation model (Elman, 1991; Karpathy et al., 2015; Kádár et al., 2016). This approach often associates certain linguistic aspects with certain hidden units. Kádár et al. (2016) propose a methodology for quantifying the contribution of each input word to a resulting GRU-based encoding. These methods depend on the specific learning model and cannot be applied to arbitrary representations. Moreover, it is still not clear what is captured by the final sentence embeddings.

Our work is orthogonal and complementary to the previous efforts: we analyze the resulting sentence embeddings by devising auxiliary prediction tasks for core sentence properties. The methodology we propose is general and can be applied to any sentence representation model.

3 APPROACH

We aim to inspect and compare encoded sentence vectors in a task-independent manner.
The main idea of our method is to focus on isolated aspects of sentence structure, and design experiments to measure to what extent each aspect is captured in a given representation.

In each experiment, we formulate a prediction task. Given a sentence representation method, we create training data and train a classifier to predict a specific property of sentences (e.g. their length) based on their vector representations. We then measure how well we can train a model to perform the task. The basic premise is that if we cannot train a classifier to predict some property of a sentence based on its vector representation, then this property is not encoded in the representation (or rather, not encoded in a useful way, considering how the representation is likely to be used).

The experiments in this work focus on low-level properties of sentences – the sentence length, the identities of words in a sentence, and the order of the words. We consider these to be the core elements of sentence structure. Generalizing the approach to higher-level semantic and syntactic properties holds great potential, which we hope will be explored in future work, by us or by others.

3.1 THE PREDICTION TASKS

We now turn to describe the specific prediction tasks. We use lower case italics (s, w) to refer to sentences and words, and boldface to refer to their corresponding vector representations (s, w). When more than one element is considered, they are distinguished by indices (w1, w2, w1, w2).

Our underlying corpus for generating the classification instances consists of 200,000 Wikipedia sentences, where 150,000 sentences are used to generate training examples, and 25,000 sentences are used for each of the test and development examples. These sentences are a subset of the training set that was used to train the original sentence encoders. The idea behind this setup is to test the models on what are presumably their best embeddings.

Length Task This task measures to what extent the sentence representation encodes its length. Given a sentence representation s ∈ R^k, the goal of the classifier is to predict the length (number of words) in the original sentence s. The task is formulated as multiclass classification, with eight output classes corresponding to binned lengths (the bins are 5-8, 9-12, 13-16, 17-20, 21-25, 26-29, 30-33 and 34-70 words). The resulting dataset is reasonably balanced, with a majority class (lengths 5-8 words) of 5,182 test instances and a minority class (34-70) of 1,084 test instances. Predicting the majority class results in classification accuracy of 20.1%.

Word-content Task This task measures to what extent the sentence representation encodes the identities of words within it. Given a sentence representation s ∈ R^k and a word representation w ∈ R^d, the goal of the classifier is to determine whether w appears in s, with access to neither w nor s. This is formulated as a binary classification task, where the input is the concatenation of s and w.

To create a dataset for this task, we need to provide positive and negative examples. Obtaining positive examples is straightforward: we simply pick a random word from each sentence. For negative examples, we could pick a random word from the entire corpus. However, we found that such a dataset tends to push models to memorize words as either positive or negative words, instead of finding their relation to the sentence representation. Therefore, for each sentence we pick as a negative example a word that appears as a positive example somewhere in our dataset, but does not appear in the given sentence. This forces the models to learn a relationship between word and sentence representations. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.
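To make the probing recipe concrete, the sketch below builds word-content examples over frozen vectors and fits a simple classifier. It is a minimal illustration, not the authors' implementation: the encoder helpers and the classifier choice are assumptions.

```python
import random
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical helpers: any frozen encoders with these signatures would do.
def encode_sentence(tokens):  # list[str] -> np.ndarray of shape (k,)
    ...

def encode_word(word):        # str -> np.ndarray of shape (d,)
    ...

def word_content_examples(sentences):
    """Build balanced positive/negative examples for the word-content task."""
    positives = [random.choice(sent) for sent in sentences]
    X, y = [], []
    for sent, pos in zip(sentences, positives):
        s = encode_sentence(sent)
        # Positive: a word that occurs in this sentence.
        X.append(np.concatenate([s, encode_word(pos)]))
        y.append(1)
        # Negative: a word used as a positive example elsewhere in the
        # dataset but absent here, so "negative" words cannot be memorized.
        neg = random.choice([w for w in positives if w not in sent])
        X.append(np.concatenate([s, encode_word(neg)]))
        y.append(0)
    return np.array(X), np.array(y)

# Probing: failure to beat the 50% baseline suggests the representation
# does not (usably) encode word content.
# X, y = word_content_examples(train_sentences)
# clf = MLPClassifier(hidden_layer_sizes=(100,)).fit(X, y)
```

The same pattern applies to the other tasks; only the input construction and the label change.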
Word-order Task This task measures to what extent the sentence representation encodes word order. Given a sentence representation s ∈ R^k and the representations of two words that appear in the sentence, w1, w2 ∈ R^d, the goal of the classifier is to predict whether w1 appears before or after w2 in the original sentence s. Again, the model has no access to the original sentence and the two words. This is formulated as a binary classification task, where the input is a concatenation of the three vectors s, w1 and w2.

For each sentence in the corpus, we simply pick two random words from the sentence as a positive example. For negative examples, we flip the order of the words. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.

4 SENTENCE REPRESENTATION MODELS

Given a sentence s = {w1, w2, ..., wN} we aim to find a sentence representation s using an encoder:

ENC : s = {w1, w2, ..., wN} → s ∈ R^k

The encoding process usually assumes a vector representation wi ∈ R^d for each word in the vocabulary. In general, the word and sentence embedding dimensions, d and k, need not be the same. The word vectors can be learned together with other encoder parameters or pre-trained. Below we describe different instantiations of ENC.

Continuous Bag-of-words (CBOW) This simple yet effective text representation consists of performing element-wise averaging of word vectors that are obtained using a word-embedding method such as word2vec. Despite its obliviousness to word order, CBOW has proven useful in different tasks (Hill et al., 2016) and is easy to compute, making it an important model class to consider.

Encoder-Decoder (ED) The encoder-decoder framework has been successfully used in a number of sequence-to-sequence learning tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Dai & Le, 2015; Li et al., 2015). After the encoding phase, a decoder maps the sentence representation back to the sequence of words:

DEC : s ∈ R^k → s = {w1, w2, ..., wN}

[Figure 1: Task accuracy vs. embedding size for different models; ED BLEU scores given for reference. (a) Length test. (b) Content test. (c) Order test.]

Here we investigate the specific case of an auto-encoder, where the entire encoding-decoding process can be trained end-to-end from a corpus of raw texts. The sentence representation is the final output vector of the encoder. We use a long short-term memory (LSTM) recurrent neural network (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) for both encoder and decoder. The LSTM decoder is similar to the LSTM encoder but with different weights.

5 EXPERIMENTAL SETUP

The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words. For both models we control the embedding size k and train word and sentence vectors of sizes k ∈ {100, 300, 500, 750, 1000}. More details about the experimental setup are available in the Appendix.
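For reference, the CBOW encoder described above amounts to a few lines of NumPy. This is a generic sketch where the word-vector lookup table is an assumed input (e.g. pre-trained word2vec vectors), not the paper's exact pipeline:

```python
import numpy as np

def cbow_encode(tokens, word_vecs):
    """Element-wise average of the word vectors of a tokenized sentence.

    tokens:    list of token strings (e.g. from NLTK's word_tokenize)
    word_vecs: dict mapping token -> np.ndarray of shape (d,)
    """
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0)  # shape (d,); word order is discarded
```

Nothing in this computation tracks position, which is what makes the length and order results in Sections 6 and 7 surprising.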
6 RESULTS

In this section we provide a detailed description of our experimental results along with their analysis. For each of the three main tests – length, content and order – we investigate the performance of different sentence representation models across embedding size.

6.1 LENGTH EXPERIMENTS

We begin by investigating how well the different representations encode sentence length. Figure 1a shows the performance of the different models on the length task, as well as the BLEU obtained by the LSTM encoder-decoder (ED).

With enough dimensions, the LSTM embeddings are very good at capturing sentence length, obtaining accuracies between 82% and 87%. Length prediction ability is not perfectly correlated with BLEU scores: from 300 dimensions onward the length prediction accuracies of the LSTM remain relatively stable, while the BLEU score of the encoder-decoder model increases as more dimensions are added.

Somewhat surprisingly, the CBOW model also encodes a fair amount of length information, with length prediction accuracies of 45% to 65%, way above the 20% baseline. This is remarkable, as the CBOW representation consists of averaged word vectors, and we did not expect it to encode length at all. We return to CBOW's exceptional performance in Section 7.

6.2 WORD CONTENT EXPERIMENTS

To what extent do the different sentence representations encode the identities of the words in the sentence? Figure 1b visualizes the performance of our models on the word content test.

All the representations encode some amount of word information, and clearly outperform the random baseline of 50%. Some trends are worth noting. While the capacity of the LSTM encoder to preserve word identities generally increases when adding dimensions, the performance peaks at 750 dimensions and drops afterwards. This stands in contrast to the BLEU score of the respective encoder-decoder models. We hypothesize that this occurs because a sizable part of the auto-encoder performance comes from the decoder, which also improves as we add more dimensions. At 1000 dimensions, the decoder's language model may be strong enough to allow the representation produced by the encoder to be less informative with regard to word content.

CBOW representations with low dimensional vectors (100 and 300 dimensions) perform exceptionally well, outperforming the more complex, sequence-aware models by a wide margin. If your task requires access to word identities, it is worth considering this simple representation. Interestingly, CBOW scores drop at higher dimensions.

6.3 WORD ORDER EXPERIMENTS

Figure 1c shows the performance of the different models on the order test. The LSTM encoders are very capable of encoding word order, with LSTM-1000 allowing the recovery of word order in 91% of the cases. Similar to the length test, LSTM order prediction accuracy is only loosely correlated with BLEU scores. It is worth noting that increasing the representation size helps the LSTM encoder to better encode order information.

Surprisingly, the CBOW encodings manage to reach an accuracy of 70% on the word order task, 20% above the baseline. This is remarkable as, by definition, the CBOW encoder does not attempt to preserve word order information.
One way to explain this is by considering distribution patterns of words in natural language sentences: some words tend to appear before others. In the next section we analyze the effect of natural language on the different models.

7 IMPORTANCE OF "NATURAL LANGUAGENESS"

Natural language imposes many constraints on sentence structure. To what extent do the different encoders rely on specific properties of word distributions in natural language sentences when encoding sentences? To account for this, we perform additional experiments in which we attempt to control for the effect of natural language.

How can CBOW encode sentence length? Is the ability of CBOW embeddings to encode length related to specific words being indicative of longer or shorter sentences? To control for this, we created a synthetic dataset where each word in each sentence is replaced by a random word from the dictionary and re-ran the length test for the CBOW embeddings using this dataset. As Figure 2a shows, this only leads to a slight decrease in accuracy, indicating that the identity of the words is not the main component in CBOW's success at predicting length.

[Figure 2: (a) Length accuracy for different CBOW sizes on natural and synthetic (random words) sentences. (b) Average embedding norm vs. sentence length for CBOW with an embedding size of 300.]

An alternative explanation for CBOW's ability to encode sentence length is given by considering the norms of the sentence embeddings. Indeed, Figure 2b shows that the embedding norm decreases as sentences grow longer. We believe this is one of the main reasons for the strong CBOW results.

While the correlation between the number of averaged vectors and the resulting norm surprised us, in retrospect it is an expected behavior that has sound mathematical foundations. To understand the behavior, consider the different word vectors to be random variables, with the values in each dimension centered roughly around zero. Both the central limit theorem and Hoeffding's inequality tell us that as we add more samples, the expected average of the values will better approximate the true mean, causing the norm of the average vector to decrease. We expect the correlation between the sentence length and its norm to be more pronounced with shorter sentences (above some number of samples we will already be very close to the true mean, and the norm will not decrease further), a behavior which we indeed observe in practice.
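This norm-decay argument is easy to check numerically. The snippet below uses synthetic zero-mean vectors, an assumption standing in for real word embeddings, and shows the norm of the average shrinking roughly like 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 300  # embedding dimension
for n in [5, 10, 20, 40, 80]:  # simulated sentence lengths
    # Average n zero-mean "word vectors", repeat over 1,000 trials,
    # and report the mean norm of the averaged vector.
    norms = [np.linalg.norm(rng.normal(size=(n, d)).mean(axis=0))
             for _ in range(1000)]
    print(n, round(float(np.mean(norms)), 3))  # decays roughly as sqrt(d / n)
```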
How does CBOW encode word order? The surprisingly strong performance of the CBOW model on the order task made us hypothesize that much of the word order information is captured in general natural language word order statistics. To investigate this, we re-run the word order tests, but this time drop the sentence embedding in training and testing time, learning from the word-pairs alone. In other words, we feed the network as input two word embeddings and ask which word comes first in the sentence. This test isolates general word order statistics of language from information that is contained in the sentence embedding (Fig. 3).

[Figure 3: Order accuracy w/ and w/o sentence representation for ED and CBOW models.]

The difference between including and removing the sentence embeddings when using the CBOW model is minor, while the LSTM-ED suffers a significant drop. Clearly, the LSTM-ED model encodes word order, while the prediction ability of CBOW is mostly explained by general language statistics. However, CBOW does benefit from the sentence to some extent: we observe a gain of 3% accuracy points when the CBOW tests are allowed access to the sentence representation. This may be explained by higher order statistics of correlation between word order patterns and the occurrences of specific words.

How important is English word order for encoding sentences? To what extent are the models trained to rely on natural language word order when encoding sentences? To control for this, we create a synthetic dataset, PERMUTED, in which the word order in each sentence is randomly permuted. Then, we repeat the length, content and order experiments using the PERMUTED dataset (we still use the original sentence encoders that are trained on non-permuted sentences). While the permuted sentence representation is the same for CBOW, it is completely different when generated by the encoder-decoder.

Results are presented in Fig. 4. When considering CBOW embeddings, word order accuracy drops to chance level, as expected, while results on the other tests remain the same. Moving to the LSTM encoder-decoder, the results on all three tests are comparable to the ones using non-permuted sentences. These results are somewhat surprising since the models were originally trained on "real", non-permuted sentences. This indicates that the LSTM encoder-decoder is a general-purpose sequence encoder that for the most part does not rely on word ordering properties of natural language when encoding sentences. The small and consistent drop in word order accuracy on the permuted sentences can be attributed to the encoder relying on natural language word order to some extent, but can also be explained by the word order prediction task becoming harder due to the inability to use general word order statistics.

[Figure 4: Results for length, content and order tests on natural and permuted sentences. (a) Length test. (b) Content test. (c) Order test.]

The results suggest that a trained encoder will transfer well across different natural language domains, as long as the vocabularies remain stable. When considering the decoder's BLEU score on the permuted dataset (not shown), we do see a dramatic decrease in accuracy. For example, the LSTM encoder-decoder with 1000 dimensions drops from 32.5 to 8.2 BLEU score. These results suggest that the decoder, which is thrown away, contains most of the language-specific information.
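The PERMUTED control itself is simple to construct; a sketch of the kind of shuffling involved, assuming the sentences are already tokenized:

```python
import random

def permute_dataset(sentences, seed=0):
    """Randomly shuffle the word order within each tokenized sentence."""
    rng = random.Random(seed)
    permuted = []
    for sent in sentences:
        shuffled = list(sent)
        rng.shuffle(shuffled)  # destroys natural-language ordering
        permuted.append(shuffled)
    return permuted

# The encoders stay fixed (trained on non-permuted text); only the
# evaluation sentences are shuffled before re-running the three tests.
```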
8 SKIP-THOUGHT VECTORS

In addition to the experiments on CBOW and LSTM-encoders, we also experiment with the skip-thought vectors model (Kiros et al., 2015). This model extends the idea of the auto-encoder to neighboring sentences.

Given a sentence s_i, it first encodes it using an RNN, similar to the auto-encoder model. However, instead of predicting the original sentence, skip-thought predicts the preceding and following sentences, s_{i-1} and s_{i+1}. The encoder and decoder are implemented with gated recurrent units (Cho et al., 2014).

Here, we deviate from the controlled environment and use the authors' provided model (https://github.com/ryankiros/skip-thoughts) with the recommended embedding size of 4800. This makes the direct comparison of the models "unfair". However, our aim is not to decide which is the "best" model but rather to show how our method can be used to measure the kinds of information captured by different representations.

Table 1 summarizes the performance of the skip-thought embeddings in each of the prediction tasks on both the PERMUTED and original dataset.

             Length   Word content   Word order
Original     82.1%    79.7%          81.1%
Permuted     68.2%    76.4%          76.5%

Table 1: Classification accuracy for the prediction tasks using skip-thought embeddings.

The performance of the skip-thought embeddings is well above the baselines and roughly similar for all tasks. Its performance is similar to the higher-dimensional encoder-decoder models, except in the order task where it lags somewhat behind. However, we note that the results are not directly comparable as skip-thought was trained on a different corpus.

The more interesting finding is its performance on the PERMUTED sentences. In this setting we see a large drop. In contrast to the LSTM encoder-decoder, skip-thought's ability to predict length and word content does degrade significantly on the permuted sentences, suggesting that the encoding process of the skip-thought model is indeed specialized towards natural language texts.

9 CONCLUSION

We presented a methodology for performing fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Our analysis reveals some properties of sentence embedding methods:

- CBOW is surprisingly effective – in addition to being very strong at content, it is also predictive of length, and can be used to reconstruct a non-trivial amount of the original word order. 300 dimensions perform best, with greatly degraded word-content prediction performance on higher dimensions.
- With enough dimensions, LSTM auto-encoders are very effective at encoding word order and word content information. Increasing the dimensionality of the LSTM encoder does not significantly improve its ability to encode length, but does increase its ability to encode content and order information. 500 dimensional embeddings are already quite effective for encoding word order, with little gains beyond that. Word content accuracy peaks at 750 dimensions and drops at 1000, suggesting that larger is not always better.
- The trained LSTM encoder (when trained with an auto-encoder objective) does not rely on ordering patterns in the training sentences when encoding novel sequences.
- In contrast, the skip-thought encoder does rely on such patterns. Its performance on the other tasks is similar to the higher-dimensional LSTM encoder, which is impressive considering it was trained on a different corpus.
- Finally, the encoder-decoder's ability to recreate sentences (BLEU) is not entirely indicative of the quality of the encoder at representing aspects such as word identity and order. This suggests that BLEU is sub-optimal for model selection.
rkqq9Mime
Interesting analytic results on unsupervised sentence encoders
8: Top 50% of accepted papers, clear accept
This paper presents a set of experiments investigating what kinds of information are captured in common unsupervised approaches to sentence representation learning. The results are non-trivial and somewhat surprising. For example, they show that it is possible to reconstruct word order from bag of words representations, and they show that LSTM sentence autoencoders encode interpretable features even for randomly permuted nonsense sentences. Effective unsupervised sentence representation learning is an important and largely unsolved problem in NLP, and this kind of work seems like it should be straightforwardly helpful towards that end. In addition, the experimental paradigm presented here is likely more broadly applicable to a range of representation learning systems. Some of the results seem somewhat strange, but I see no major technical concerns, and think that they are informative. I recommend acceptance.

One minor red flag:
- The massive drop in CBOW performance in Figures 1b and 4b is not explained, and seems implausible enough to warrant serious further investigation. Can you be absolutely certain that those results would appear with a different codebase and different random seed implementing the same model? Fortunately, this point is largely orthogonal to the major results of the paper.

Two writing comments:
- I agree that the results with word order and CBOW are surprising, but I think it's slightly misleading to say that CBOW is predictive of word order. It doesn't represent word order at all, but it's possible to probabilistically reconstruct word order from the information that it does encode.
- Saying that "LSTM auto-encoders are more effective at encoding word order than word content" doesn't really make sense. These two quantities aren't comparable.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BJh6Ztuxl
ICLR.cc/2017/conference
2017
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
["Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg"]
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.
["Natural language processing", "Deep learning"]
HkHqRoIEe
Review
8: Top 50% of accepted papers, clear accept
The authors present a methodology for analyzing sentence embedding techniques by checking how much the embeddings preserve information about sentence length, word content, and word order. They examine several popular embedding methods including autoencoding LSTMs, averaged word vectors, and skip-thought vectors. The experiments are thorough and provide interesting insights into the representational power of common sentence embedding strategies, such as the fact that word ordering is surprisingly low-entropy conditioned on word content. Exploring what sort of information is encoded in representation learning methods for NLP is an important and under-researched area. For example, the tide of word-embeddings research was mostly stemmed after a thread of careful experimental results showing most embeddings to be essentially equivalent, culminating in "Improving Distributional Similarity with Lessons Learned from Word Embeddings" by Levy, Goldberg, and Dagan. As representation learning becomes even more important in NLP this sort of research will be even more important. While this paper makes a valuable contribution in setting out and exploring a methodology for evaluating sentence embeddings, the evaluations themselves are quite simple and do not necessarily correlate with real-world desiderata for sentence embeddings (as the authors note in other comments, performance on these tasks is not a normative measure of embedding quality). For example, as the authors note, the ability of the averaged vector to encode sentence length is trivially to be expected given the central limit theorem (or more accurately, concentration inequalities like Hoeffding's inequality). The word-order experiments were interesting. A relevant citation for this sort of conditional ordering procedure is "Generating Text with Recurrent Neural Networks" by Sutskever, Martens, and Hinton, who refer to the conversion of a bag of words into a sentence as "debagging." Although this is just a first step in better understanding of sentence embeddings, it is an important one and I recommend this paper for publication.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
BJh6Ztuxl
ICLR.cc/2017/conference
2017
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
["Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg"]
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.
["Natural language processing", "Deep learning"]
ABSTRACT

There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture.

We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.

1 INTRODUCTION

While sentence embeddings or sentence representations play a central role in recent deep learning approaches to NLP, little is known about the information that is captured by different sentence embedding learning mechanisms. We propose a methodology facilitating fine-grained measurement of some of the information encoded in sentence embeddings, as well as performing fine-grained comparison of different sentence embedding methods.

In sentence embeddings, sentences, which are variable-length sequences of discrete symbols, are encoded into fixed length continuous vectors that are then used for further prediction tasks. A simple and common approach is producing word-level vectors using, e.g., word2vec (Mikolov et al., 2013a;b), and summing or averaging the vectors of the words participating in the sentence. This continuous-bag-of-words (CBOW) approach disregards the word order in the sentence.[1]

[1] We use the term CBOW to refer to a sentence representation that is composed of an average of the vectors of the words in the sentence, not to be confused with the training method by the same name which is used in the word2vec algorithm.

Another approach is the encoder-decoder architecture, producing models also known as sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014, inter alia). In this architecture, an encoder network (e.g. an LSTM) is used to produce a vector representation of the sentence, which is then fed as input into a decoder network that uses it to perform some prediction task (e.g. recreate the sentence, or produce a translation of it). The encoder and decoder networks are trained jointly in order to perform the final task.

Some systems (for example in machine translation) train the system end-to-end, and use the trained system for prediction (Bahdanau et al., 2014). Such systems do not generally care about the encoded vectors, which are used merely as intermediate values. However, another common case is to train an encoder-decoder network and then throw away the decoder and use the trained encoder as a general mechanism for obtaining sentence representations.
For example, an encoder-decoder network can be trained as an auto-encoder, where the encoder creates a vector representation, and the decoder attempts to recreate the original sentence (Li et al., 2015). Similarly, Kiros et al. (2015) train a network to encode a sentence such that the decoder can recreate its neighboring sentences in the text. Such networks do not require specially labeled data, and can be trained on large amounts of unannotated text. As the decoder needs information about the sentence in order to perform well, it is clear that the encoded vectors capture a non-trivial amount of information about the sentence, making the encoder appealing to use as a general purpose, stand-alone sentence encoding mechanism. The sentence encodings can then be used as input for other prediction tasks for which less training data is available (Dai & Le, 2015). In this work we focus on these "general purpose" sentence encodings.

The resulting sentence representations are opaque, and there is currently no good way of comparing different representations short of using them as input for different high-level semantic tasks (e.g. sentiment classification, entailment recognition, document retrieval, question answering, sentence similarity, etc.) and measuring how well they perform on these tasks. This is the approach taken by Li et al. (2015), Hill et al. (2016) and Kiros et al. (2015). This method of comparing sentence embeddings leaves a lot to be desired: the comparison is at a very coarse-grained level, does not tell us much about the kind of information that is encoded in the representation, and does not help us form generalizable conclusions.

Our Contribution We take a first step towards opening the black box of vector embeddings for sentences. We propose a methodology that facilitates comparing sentence embeddings on a much finer-grained level, and demonstrate its use by analyzing and comparing different sentence representations. We analyze sentence representation methods that are based on LSTM auto-encoders and the simple CBOW representation produced by averaging word2vec word embeddings. For each of CBOW and LSTM auto-encoder, we compare different numbers of dimensions, exploring the effect of the dimensionality on the resulting representation. We also provide some comparison to the skip-thought embeddings of Kiros et al. (2015).

In this work, we focus on what are arguably the three most basic characteristics of a sequence: its length, the items within it, and their order. We investigate different sentence representations based on the capacity to which they encode these aspects. Our analysis of these low-level properties leads to interesting, actionable insights, exposing relative strengths and weaknesses of the different representations.

Limitations Focusing on low-level sentence properties also has limitations: the tasks focus on measuring the preservation of surface aspects of the sentence and do not measure syntactic and semantic generalization abilities; the tasks are not directly related to any specific downstream application (although the properties we test are important factors in many tasks; knowing that a model is good at predicting length and word order is likely advantageous for syntactic parsing, while models that excel at word content are good for text classification tasks).
Dealing with these limitations requires a complementary set of auxiliary tasks, which is outside the scope of this study and is left for future work.

The study also suffers from the general limitations of empirical work: we do not prove general theorems but rather measure behaviors on several data points and attempt to draw conclusions from these measurements. There is always the risk that our conclusions only hold for the datasets on which we measured, and will not generalize. However, we do consider our large sample of sentences from Wikipedia to be representative of the English language, at least in terms of the three basic sentence properties that we study.

Summary of Findings Our analysis reveals the following insights regarding the different sentence embedding methods:

- Sentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contain can also be used to reconstruct a non-trivial amount of the original word order in a probabilistic manner (due to regularities in the natural language data).
- LSTM auto-encoders are very effective at encoding word order and word content.
- Increasing the number of dimensions benefits some tasks more than others.
- Adding more hidden units sometimes degrades the encoders' ability to encode word content. This degradation is not correlated with the BLEU scores of the decoder, suggesting that BLEU over the decoder output is sub-optimal for evaluating the encoders' quality.
- LSTM encoders trained as auto-encoders do not rely on ordering patterns in the training sentences when encoding novel sentences, while the skip-thought encoders do rely on such patterns.

2 RELATED WORK

Word-level distributed representations have been analyzed rather extensively, both empirically and theoretically, for example by Baroni et al. (2014), Levy & Goldberg (2014) and Levy et al. (2015). In contrast, the analysis of sentence-level representations has been much more limited. A commonly used approach is to either compare the performance of the sentence embeddings on downstream tasks (Hill et al., 2016), or to analyze models specifically trained for a predefined task (Schmaltz et al., 2016; Sutskever et al., 2011).

While the resulting analysis reveals differences in performance of different models, it does not adequately explain what kind of linguistic properties of the sentence they capture. Other studies analyze the hidden units learned by neural networks when training a sentence representation model (Elman, 1991; Karpathy et al., 2015; Kádár et al., 2016). This approach often associates certain linguistic aspects with certain hidden units. Kádár et al. (2016) propose a methodology for quantifying the contribution of each input word to a resulting GRU-based encoding. These methods depend on the specific learning model and cannot be applied to arbitrary representations. Moreover, it is still not clear what is captured by the final sentence embeddings.

Our work is orthogonal and complementary to the previous efforts: we analyze the resulting sentence embeddings by devising auxiliary prediction tasks for core sentence properties. The methodology we propose is general and can be applied to any sentence representation model.

3 APPROACH

We aim to inspect and compare encoded sentence vectors in a task-independent manner.
The main idea of our method is to focus on isolated aspects of sentence structure, and design experiments to measure to what extent each aspect is captured in a given representation.

In each experiment, we formulate a prediction task. Given a sentence representation method, we create training data and train a classifier to predict a specific sentence property (e.g. their length) based on their vector representations. We then measure how well we can train a model to perform the task. The basic premise is that if we cannot train a classifier to predict some property of a sentence based on its vector representation, then this property is not encoded in the representation (or rather, not encoded in a useful way, considering how the representation is likely to be used).

The experiments in this work focus on low-level properties of sentences: the sentence length, the identities of words in a sentence, and the order of the words. We consider these to be the core elements of sentence structure. Generalizing the approach to higher-level semantic and syntactic properties holds great potential, which we hope will be explored in future work, by us or by others.

3.1 THE PREDICTION TASKS

We now turn to describe the specific prediction tasks. We use lower case italics ($s$, $w$) to refer to sentences and words, and boldface to refer to their corresponding vector representations ($\mathbf{s}$, $\mathbf{w}$). When more than one element is considered, they are distinguished by indices ($w_1$, $w_2$, $\mathbf{w}_1$, $\mathbf{w}_2$).

Our underlying corpus for generating the classification instances consists of 200,000 Wikipedia sentences, where 150,000 sentences are used to generate training examples, and 25,000 sentences are used for each of the test and development examples. These sentences are a subset of the training set that was used to train the original sentence encoders. The idea behind this setup is to test the models on what are presumably their best embeddings.

Length Task This task measures to what extent the sentence representation encodes its length. Given a sentence representation $\mathbf{s} \in \mathbb{R}^k$, the goal of the classifier is to predict the length (number of words) in the original sentence $s$. The task is formulated as multiclass classification, with eight output classes corresponding to binned lengths.[2] The resulting dataset is reasonably balanced, with a majority class (lengths 5-8 words) of 5,182 test instances and a minority class (34-70) of 1,084 test instances. Predicting the majority class results in classification accuracy of 20.1%.

[2] We use the bins (5-8), (9-12), (13-16), (17-20), (21-25), (26-29), (30-33), (34-70).

Word-content Task This task measures to what extent the sentence representation encodes the identities of words within it. Given a sentence representation $\mathbf{s} \in \mathbb{R}^k$ and a word representation $\mathbf{w} \in \mathbb{R}^d$, the goal of the classifier is to determine whether $w$ appears in $s$, with access to neither $w$ nor $s$. This is formulated as a binary classification task, where the input is the concatenation of $\mathbf{s}$ and $\mathbf{w}$.

To create a dataset for this task, we need to provide positive and negative examples. Obtaining positive examples is straightforward: we simply pick a random word from each sentence. For negative examples, we could pick a random word from the entire corpus. However, we found that such a dataset tends to push models to memorize words as either positive or negative words, instead of finding their relation to the sentence representation. Therefore, for each sentence we pick as a negative example a word that appears as a positive example somewhere in our dataset, but does not appear in the given sentence. This forces the models to learn a relationship between word and sentence representations. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.
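To make the sampling scheme concrete, the following is a minimal Python sketch of the word-content example construction. The `sentences` variable (a list of tokenized sentences) is a hypothetical stand-in; this is an illustration of the procedure described above, not the paper's released code.

import random

def word_content_examples(sentences):
    # One positive word per sentence; the pool of positives doubles as
    # the source of negatives, as described above.
    positives = [random.choice(sent) for sent in sentences]
    pool = list(set(positives))
    examples = []
    for sent, pos in zip(sentences, positives):
        examples.append((sent, pos, 1))  # word occurs in the sentence
        # Negative: a word that is a positive example elsewhere in the
        # dataset but does not occur in this sentence.
        neg = random.choice([w for w in pool if w not in set(sent)])
        examples.append((sent, neg, 0))
    return examples

The classifier input is then the concatenation of the sentence vector and the word vector, as described above.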
Word-order Task This task measures to what extent the sentence representation encodes word order. Given a sentence representation $\mathbf{s} \in \mathbb{R}^k$ and the representations of two words that appear in the sentence, $\mathbf{w}_1, \mathbf{w}_2 \in \mathbb{R}^d$, the goal of the classifier is to predict whether $w_1$ appears before or after $w_2$ in the original sentence $s$. Again, the model has no access to the original sentence and the two words. This is formulated as a binary classification task, where the input is a concatenation of the three vectors $\mathbf{s}$, $\mathbf{w}_1$ and $\mathbf{w}_2$.

For each sentence in the corpus, we simply pick two random words from the sentence as a positive example. For negative examples, we flip the order of the words. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.

4 SENTENCE REPRESENTATION MODELS

Given a sentence $s = \{w_1, w_2, \ldots, w_N\}$ we aim to find a sentence representation $\mathbf{s}$ using an encoder:

ENC: $s = \{w_1, w_2, \ldots, w_N\} \mapsto \mathbf{s} \in \mathbb{R}^k$

The encoding process usually assumes a vector representation $\mathbf{w}_i \in \mathbb{R}^d$ for each word in the vocabulary. In general, the word and sentence embedding dimensions, $d$ and $k$, need not be the same. The word vectors can be learned together with other encoder parameters or pre-trained. Below we describe different instantiations of ENC.

Continuous Bag-of-words (CBOW) This simple yet effective text representation consists of performing element-wise averaging of word vectors that are obtained using a word-embedding method such as word2vec. Despite its obliviousness to word order, CBOW has proven useful in different tasks (Hill et al., 2016) and is easy to compute, making it an important model class to consider.

Encoder-Decoder (ED) The encoder-decoder framework has been successfully used in a number of sequence-to-sequence learning tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Dai & Le, 2015; Li et al., 2015). After the encoding phase, a decoder maps the sentence representation back to the sequence of words:

DEC: $\mathbf{s} \in \mathbb{R}^k \mapsto s = \{w_1, w_2, \ldots, w_N\}$

Here we investigate the specific case of an auto-encoder, where the entire encoding-decoding process can be trained end-to-end from a corpus of raw texts. The sentence representation is the final output vector of the encoder. We use a long short-term memory (LSTM) recurrent neural network (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) for both encoder and decoder. The LSTM decoder is similar to the LSTM encoder but with different weights.
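As a reference point, the CBOW encoder amounts to a few lines of code. The sketch below assumes a pre-trained {token: vector} lookup (e.g. produced by word2vec); it illustrates the element-wise averaging described above and is not the authors' implementation.

import numpy as np

def cbow_encode(tokens, word_vectors, dim=300):
    # Element-wise average of the vectors of the words in the sentence;
    # out-of-vocabulary tokens are simply skipped.
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)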
5 EXPERIMENTAL SETUP

The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with a vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words. For both models we control the embedding size $k$ and train word and sentence vectors of sizes $k \in \{100, 300, 500, 750, 1000\}$. More details about the experimental setup are available in the Appendix.

6 RESULTS

In this section we provide a detailed description of our experimental results along with their analysis. For each of the three main tests (length, content and order) we investigate the performance of different sentence representation models across embedding size.

[Figure 1: Task accuracy vs. embedding size for different models; ED BLEU scores given for reference. (a) Length test. (b) Content test. (c) Order test.]

6.1 LENGTH EXPERIMENTS

We begin by investigating how well the different representations encode sentence length. Figure 1a shows the performance of the different models on the length task, as well as the BLEU obtained by the LSTM encoder-decoder (ED).

With enough dimensions, the LSTM embeddings are very good at capturing sentence length, obtaining accuracies between 82% and 87%. Length prediction ability is not perfectly correlated with BLEU scores: from 300 dimensions onward the length prediction accuracies of the LSTM remain relatively stable, while the BLEU score of the encoder-decoder model increases as more dimensions are added.

Somewhat surprisingly, the CBOW model also encodes a fair amount of length information, with length prediction accuracies of 45% to 65%, way above the 20% baseline. This is remarkable, as the CBOW representation consists of averaged word vectors, and we did not expect it to encode length at all. We return to CBOW's exceptional performance in Section 7.

6.2 WORD CONTENT EXPERIMENTS

To what extent do the different sentence representations encode the identities of the words in the sentence? Figure 1b visualizes the performance of our models on the word content test.

All the representations encode some amount of word information, and clearly outperform the random baseline of 50%. Some trends are worth noting. While the capacity of the LSTM encoder to preserve word identities generally increases when adding dimensions, the performance peaks at 750 dimensions and drops afterwards. This stands in contrast to the BLEU score of the respective encoder-decoder models. We hypothesize that this occurs because a sizable part of the auto-encoder performance comes from the decoder, which also improves as we add more dimensions. At 1000 dimensions, the decoder's language model may be strong enough to allow the representation produced by the encoder to be less informative with regard to word content.

CBOW representations with low dimensional vectors (100 and 300 dimensions) perform exceptionally well, outperforming the more complex, sequence-aware models by a wide margin. If your task requires access to word identities, it is worth considering this simple representation. Interestingly, CBOW scores drop at higher dimensions.

6.3 WORD ORDER EXPERIMENTS

Figure 1c shows the performance of the different models on the order test. The LSTM encoders are very capable of encoding word order, with LSTM-1000 allowing the recovery of word order in 91% of the cases. Similar to the length test, LSTM order prediction accuracy is only loosely correlated with BLEU scores. It is worth noting that increasing the representation size helps the LSTM encoder to better encode order information.

Surprisingly, the CBOW encodings manage to reach an accuracy of 70% on the word order task, 20% above the baseline. This is remarkable as, by definition, the CBOW encoder does not attempt to preserve word order information. One way to explain this is by considering distribution patterns of words in natural language sentences: some words tend to appear before others. In the next section we analyze the effect of natural language on the different models.
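Before moving on, here is what one of the probes behind Figure 1 looks like in code: a sketch of the length probe using the bins from Section 3.1. `encode` stands in for any of the encoders above, and scikit-learn's logistic regression is used purely for concreteness (the paper's classifier details are in its Appendix); this is an illustration, not the authors' setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

BINS = [(5, 8), (9, 12), (13, 16), (17, 20),
        (21, 25), (26, 29), (30, 33), (34, 70)]

def length_bin(n):
    # Map a sentence length to its class index (footnote 2 of Section 3.1);
    # the corpus is constrained to lengths between 5 and 70 words.
    return next(i for i, (lo, hi) in enumerate(BINS) if lo <= n <= hi)

# X = np.stack([encode(sent) for sent in sentences])
# y = np.array([length_bin(len(sent)) for sent in sentences])
# probe = LogisticRegression(max_iter=1000).fit(X, y)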
7 IMPORTANCE OF "NATURAL LANGUAGENESS"

Natural language imposes many constraints on sentence structure. To what extent do the different encoders rely on specific properties of word distributions in natural language sentences when encoding sentences? To account for this, we perform additional experiments in which we attempt to control for the effect of natural language.

How can CBOW encode sentence length? Is the ability of CBOW embeddings to encode length related to specific words being indicative of longer or shorter sentences? To control for this, we created a synthetic dataset where each word in each sentence is replaced by a random word from the dictionary and re-ran the length test for the CBOW embeddings using this dataset. As Figure 2a shows, this only leads to a slight decrease in accuracy, indicating that the identity of the words is not the main component in CBOW's success at predicting length.

[Figure 2: (a) Length accuracy for different CBOW sizes on natural and synthetic (random words) sentences. (b) Average embedding norm vs. sentence length for CBOW with an embedding size of 300.]

An alternative explanation for CBOW's ability to encode sentence length is given by considering the norms of the sentence embeddings. Indeed, Figure 2b shows that the embedding norm decreases as sentences grow longer. We believe this is one of the main reasons for the strong CBOW results. While the correlation between the number of averaged vectors and the resulting norm surprised us, in retrospect it is an expected behavior that has sound mathematical foundations. To understand the behavior, consider the different word vectors to be random variables, with the values in each dimension centered roughly around zero. Both the central limit theorem and Hoeffding's inequality tell us that as we add more samples, the expected average of the values will better approximate the true mean, causing the norm of the average vector to decrease. We expect the correlation between the sentence length and its norm to be more pronounced with shorter sentences (above some number of samples we will already be very close to the true mean, and the norm will not decrease further), a behavior which we indeed observe in practice.
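This argument is easy to verify numerically. The toy check below averages i.i.d. zero-centered random vectors and shows the norm of the average shrinking roughly like $1/\sqrt{n}$; it uses synthetic vectors only and is independent of the paper's data.

import numpy as np

rng = np.random.default_rng(0)
for n in [5, 10, 20, 40]:
    # 1,000 synthetic "sentences", each an average of n random 300-d vectors.
    avg = rng.standard_normal((1000, n, 300)).mean(axis=1)
    print(n, np.linalg.norm(avg, axis=1).mean().round(2))
# The printed mean norm decreases as n grows, mirroring Figure 2b.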
How does CBOW encode word order? The surprisingly strong performance of the CBOW model on the order task made us hypothesize that much of the word order information is captured in general natural language word order statistics.

To investigate this, we re-run the word order tests, but this time drop the sentence embedding in training and testing time, learning from the word pairs alone. In other words, we feed the network as input two word embeddings and ask which word comes first in the sentence. This test isolates general word order statistics of language from information that is contained in the sentence embedding (Fig. 3).

[Figure 3: Order accuracy with and without sentence representation for ED and CBOW models.]

The difference between including and removing the sentence embeddings when using the CBOW model is minor, while the LSTM-ED suffers a significant drop. Clearly, the LSTM-ED model encodes word order, while the prediction ability of CBOW is mostly explained by general language statistics. However, CBOW does benefit from the sentence to some extent: we observe a gain of 3% accuracy points when the CBOW tests are allowed access to the sentence representation. This may be explained by higher order statistics of correlation between word order patterns and the occurrences of specific words.

How important is English word order for encoding sentences? To what extent are the models trained to rely on natural language word order when encoding sentences? To control for this, we create a synthetic dataset, PERMUTED, in which the word order in each sentence is randomly permuted. Then, we repeat the length, content and order experiments using the PERMUTED dataset (we still use the original sentence encoders that are trained on non-permuted sentences). While the permuted sentence representation is the same for CBOW, it is completely different when generated by the encoder-decoder.

Results are presented in Fig. 4. When considering CBOW embeddings, word order accuracy drops to chance level, as expected, while results on the other tests remain the same. Moving to the LSTM encoder-decoder, the results on all three tests are comparable to the ones using non-permuted sentences. These results are somewhat surprising since the models were originally trained on "real", non-permuted sentences. This indicates that the LSTM encoder-decoder is a general-purpose sequence encoder that for the most part does not rely on word ordering properties of natural language when encoding sentences. The small and consistent drop in word order accuracy on the permuted sentences can be attributed to the encoder relying on natural language word order to some extent, but can also be explained by the word order prediction task becoming harder due to the inability to use general word order statistics.

[Figure 4: Results for length, content and order tests on natural and permuted sentences. (a) Length test. (b) Content test. (c) Order test.]

The results suggest that a trained encoder will transfer well across different natural language domains, as long as the vocabularies remain stable. When considering the decoder's BLEU score on the permuted dataset (not shown), we do see a dramatic decrease in accuracy. For example, the LSTM encoder-decoder with 1000 dimensions drops from 32.5 to 8.2 BLEU score. These results suggest that the decoder, which is thrown away, contains most of the language-specific information.
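For reference, the PERMUTED control only touches the inputs at test time; a hypothetical helper along these lines (not the authors' script) suffices:

import random

def permute(tokens, seed=None):
    # Shuffle the word order of a sentence; the trained encoders are untouched.
    out = list(tokens)
    random.Random(seed).shuffle(out)
    return out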
8 SKIP-THOUGHT VECTORS

In addition to the experiments on CBOW and LSTM encoders, we also experiment with the skip-thought vectors model (Kiros et al., 2015). This model extends the idea of the auto-encoder to neighboring sentences.

Given a sentence $s_i$, it first encodes it using an RNN, similar to the auto-encoder model. However, instead of predicting the original sentence, skip-thought predicts the preceding and following sentences, $s_{i-1}$ and $s_{i+1}$. The encoder and decoder are implemented with gated recurrent units (Cho et al., 2014).

Here, we deviate from the controlled environment and use the author's provided model[3] with the recommended embedding size of 4800. This makes the direct comparison of the models "unfair". However, our aim is not to decide which is the "best" model but rather to show how our method can be used to measure the kinds of information captured by different representations.

[3] https://github.com/ryankiros/skip-thoughts

Table 1 summarizes the performance of the skip-thought embeddings in each of the prediction tasks on both the PERMUTED and original dataset.

            Length   Word content   Word order
Original    82.1%    79.7%          81.1%
Permuted    68.2%    76.4%          76.5%

Table 1: Classification accuracy for the prediction tasks using skip-thought embeddings.

The performance of the skip-thought embeddings is well above the baselines and roughly similar for all tasks. Its performance is similar to the higher-dimensional encoder-decoder models, except in the order task, where it lags somewhat behind. However, we note that the results are not directly comparable, as skip-thought was trained on a different corpus.

The more interesting finding is its performance on the PERMUTED sentences. In this setting we see a large drop. In contrast to the LSTM encoder-decoder, skip-thought's ability to predict length and word content does degrade significantly on the permuted sentences, suggesting that the encoding process of the skip-thought model is indeed specialized towards natural language texts.

9 CONCLUSION

We presented a methodology for performing fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Our analysis reveals some properties of sentence embedding methods:

- CBOW is surprisingly effective: in addition to being very strong at content, it is also predictive of length, and can be used to reconstruct a non-trivial amount of the original word order. 300 dimensions perform best, with greatly degraded word-content prediction performance on higher dimensions.
- With enough dimensions, LSTM auto-encoders are very effective at encoding word order and word content. Increasing the dimensionality of the LSTM encoder does not significantly improve its ability to encode length, but does increase its ability to encode content and order information. 500-dimensional embeddings are already quite effective for encoding word order, with little gains beyond that. Word content accuracy peaks at 750 dimensions and drops at 1000, suggesting that larger is not always better.
- The trained LSTM encoder (when trained with an auto-encoder objective) does not rely on ordering patterns in the training sentences when encoding novel sequences.
- In contrast, the skip-thought encoder does rely on such patterns. Its performance on the other tasks is similar to the higher-dimensional LSTM encoder, which is impressive considering it was trained on a different corpus.
- Finally, the encoder-decoder's ability to recreate sentences (BLEU) is not entirely indicative of the quality of the encoder at representing aspects such as word identity and order. This suggests that BLEU is sub-optimal for model selection.
H1rEX6WNl
Experimental analysis of unsupervised sentence embeddings
8: Top 50% of accepted papers, clear accept
This paper analyzes various unsupervised sentence embedding approaches by means of a set of auxiliary prediction tasks. By examining how well classifiers can predict word order, word content, and sentence length, the authors aim to assess how much and what type of information is captured by the different embedding models. The main focus is on a comparison between an encoder-decoder model (ED) and a permutation-invariant model, CBOW. (There is also an analysis of skip-thought vectors, but since it was trained on a different corpus it is hard to compare). There are several interesting and perhaps counter-intuitive results that emerge from this analysis and the authors do a nice job of examining those results and, for the most part, explaining them. However, I found the discussion of the word-order experiment rather unsatisfying. It seems to me that the appropriate question should have been something like, 'How well does model X do compared to the theoretical upper bound which can be deduced from natural language statistics?' This is investigated from one angle in Section 7, but I would have preferred to see the effect of natural language statistics discussed up front rather than presented as the explanation to a 'surprising' observation. I had a similar reaction to the word-order experiments. Most of the interesting results, in my opinion, are about the ED model. It is fascinating that the LSTM encoder does not seem to rely on natural-language ordering statistics -- it seems like doing so should be a big win in terms of per-parameter expressivity. I also think that it's strange that word content accuracy begins to drop for high-dimensional embeddings. I suppose this could be investigated by handicapping the decoder. Overall, this is a very nice paper investigating some aspects of the information content stored in various types of sentence embeddings. I recommend acceptance.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HkYhZDqxg
ICLR.cc/2017/conference
2017
Tree-structured decoding with doubly-recurrent neural networks
["David Alvarez-Melis", "Tommi S. Jaakkola"]
We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences across the tree, and combines them inside each cell to generate an output. The topology of the tree is explicitly modeled, allowing the network to predict both content and topology of the tree when decoding. That is, given only an encoded vector representation, the network is able to simultaneously generate a tree from it and predict labels for the nodes. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.
["Natural language processing", "Supervised Learning", "Structured prediction"]
ABSTRACT

We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly recurrent neural network model comprised of separate width and depth recurrences that are combined inside each cell (node) to generate an output. The topology of the tree is modeled explicitly together with the content. That is, in response to an encoded vector representation, co-evolving recurrences are used to realize the associated tree and the labels for the nodes in the tree. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.

1 INTRODUCTION

Recurrent neural networks have become extremely popular for modeling structured data. Key to their success is their ability to learn long-range temporal dependencies, their flexibility, and ease of customization. These architectures are naturally suited for modeling sequences since the underlying state evolution resulting from successive operations follows an inherently linear order (Williams & Zipser, 1995; Hochreiter & Schmidhuber, 1997). Indeed, they have been successfully adapted to language modeling (Zaremba et al., 2015), machine translation (Sutskever et al., 2014) and conversational agents (Vinyals & Le, 2015), among other applications.

Although sequences arise frequently in practice, other structures such as trees or graphs do not naturally conform to a linear ordering. For example, natural language sentences or associated parse trees, programs, hierarchical structures in biology, or molecules are not inherently linear structures. While sentences in natural language can be modeled as if they were linear sequences, the underlying process is compositional (Frege, 1892). Models that construct sentences compositionally should derive an advantage from adopting a more appropriate inductive bias.

The flexibility and success of recurrent neural networks in modeling and generating sequential data has prompted efforts to adapt them to non-sequential data too. Recent work has focused on the application of neural architectures to hierarchical structures, albeit in limited ways. Much of this work has assumed that either the full tree structure is given (Socher et al., 2012; Tai et al., 2015) or at least the nodes are (Socher & Lin, 2011; Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In the former scenario, the network aggregates the node information in a manner that is coherent with a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e., sequentially deciding which pairs of nodes to join with an edge until a tree is formed.

The full problem of decoding with structure, i.e., generating a tree-structured object with node labels from a given vector representation, has remained largely unexplored until recently. Recent efforts to adapt RNNs to this context have so far remained relatively close to their sequential counterparts. For example, in order to capture depth and branching in the tree, one can introduce special tokens (Dong & Lapata, 2016) or use alternating RNNs coupled with external classifiers to predict branching (Zhang et al., 2016).

In this work, we propose a novel architecture tailored specifically to tree-structured decoding.
At the heart of our approach is a doubly-recurrent (breadth- and depth-wise recurrent) neural network which separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are updated upon observing node labels. Every node in the tree receives two hidden states, which are then combined and used to predict a label for that node. Besides maintaining separate but simultaneous fraternal and paternal recurrences, the proposed architecture departs from previous methods in that it explicitly models tree topology. Each node in the network has modules that predict, based on the cell state, whether the node is terminal, both in terms of depth and width. Decoupling these decisions from the label prediction allows for a more concise formulation, which does not require artificial tokens to be added to the tree to simulate branching.

We test this novel architecture in various encoder-decoder frameworks, coupling it with sequential encoders to predict tree structure from encoded vector representations of sequences. The experimental results show the effectiveness of this approach at recovering latent structure in flattened string representations of trees (Section 4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section 4.2). In addition, we show that even for sequence-to-sequence tasks such as machine translation, the proposed architecture exhibits desirable properties, such as invariance to structural changes and coarse-to-fine generation (Section 4.3).

To summarize, the main contributions of this paper are as follows:

- We propose a novel neural network architecture specifically tailored to tree-structured decoding, which maintains separate depth and width recurrent states and combines them to obtain hidden states for every node in the tree.
- We equip this novel architecture with a mechanism to predict tree topology explicitly (as opposed to implicitly by adding nodes with special tokens).
- We show experimentally that the proposed method is capable of recovering trees from encoded representations and that it outperforms state-of-the-art methods in a task consisting of mapping sentences to simple functional programs.

2 RELATED WORK

Recursive Neural Networks. Recursive neural networks (Socher & Lin, 2011; Socher et al., 2012) were proposed to model data with hierarchical structures, such as parsed scenes and natural language sentences. Though they have been most successfully applied to encoding objects when their tree-structured representation is given (Socher et al., 2013), the original formulation by Socher & Lin (2011) also considered using them to predict the structure (edges), albeit for the case where nodes are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive neural networks are not useful for fully generating trees from scratch.

Tree-structured encoders. The Tree-LSTM of Tai et al. (2015) is a generalization of long short-term memory networks (Hochreiter & Schmidhuber, 1997) to tree-structured inputs. Their model constructs a sentence representation bottom-up, obtaining at every step the representation of a node in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long-range dependency modeling provided by LSTMs.
They propose two methods for aggregating the states of the children, depending on the type of underlying tree: N-ary trees or trees with unknown and potentially unbounded branching factor. Tree-LSTMs have shown promising results for compositional encoding of structured data, though by construction they cannot be used for decoding, since they operate on a given tree structure.

Tree-structured decoders. Proposed only very recently, most tree-structured decoders rely on stacked or intertwined RNNs, and use heuristic methods for topological decisions during generation. Closest to our method is the Top-down Tree LSTM of Zhang et al. (2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs, which act in alternation (as opposed to simultaneously in our approach), yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node. In addition, their method provides children with asymmetric parent input: "younger" children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by introducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.

A similar approach is proposed by Dong & Lapata (2016). They propose SEQ2TREE, an encoder-decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical use of an LSTM, similar to Tai et al. (2015), but in the opposite direction: working top-down from the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding triggers the branching out into a lower level in the tree. Similar to our method, they feed nodes with hidden representations of their parent and sibling, but they do so by concatenating both states and running them through a single recurrent unit, as opposed to our method, where these two sources of information are handled separately. A further difference is that our approach does not require artificial nodes with special tokens to be added to the tree, resulting in smaller trees.

Hierarchical Neural Networks for Parsing. Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In this problem, the task is to predict a parse tree over a given sentence. For this, Kiperwasser & Goldberg (2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder. Starting from the leaves (words) they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions.
Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so, similar to recursive neural networks, it cannot be used for a fully generative decoding.

3 DOUBLY RECURRENT NEURAL NETWORKS

Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg 2016 do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.

The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.

An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.

With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states, one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling,[1] updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.

Formally, let $\mathcal{T} = \{V, E, X\}$ be a connected labeled tree, where $V$ is the set of nodes, $E$ the set of edges and $X$ are node labels.[2] Let $g^a$ and $g^f$ be functions which apply one step of the two separate RNNs. For a node $i \in V$ with parent $p(i)$ and previous sibling $s(i)$, the ancestral and fraternal hidden states are updated via

$h^a_i = g^a(h^a_{p(i)}, x_{p(i)})$   (1)
$h^f_i = g^f(h^f_{s(i)}, x_{s(i)})$   (2)

where $x_{s(i)}, x_{p(i)}$ are the vectors representing the previous sibling's and parent's values, respectively. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:

$h^{(pred)}_i = \tanh\left(U^f h^f_i + U^a h^a_i\right)$   (3)

where $U^f \in \mathbb{R}^{n \times D_f}$ and $U^a \in \mathbb{R}^{n \times D_a}$ are learnable parameters.
This state contains combined information of the node's neighborhood in the tree, and is used to predict a label for it. In its simplest form, the network could compute the output of node $i$ by sampling from the distribution

$o_i = \mathrm{softmax}(W h^{(pred)}_i)$   (4)

In the next section, we propose a slight modification to (4) whereby topological information is included in the computation of cell outputs. After the node's output symbol $x_i$ has been obtained by sampling from $o_i$, the cell passes $h^a_i$ to all its children and $h^f_i$ to the next sibling (if any), enabling them to apply Eqs (1) and (2) to realize their states. This procedure continues recursively, until termination conditions (explained in the next section) cause it to halt.

3.1 TOPOLOGICAL PREDICTION

As mentioned before, the central issue with free-form tree construction is to predict the topology of the tree. When constructing the tree top-down, for each node we need to decide: (i) whether it is a leaf node (and thus it should not produce offspring) and (ii) whether there should be additional siblings produced after it. Answering these two questions for every node allows us to construct a tree from scratch and eventually stop growing it.

Sequence decoders typically rely on special tokens to terminate generation (Sutskever et al., 2014). The token is added to the vocabulary and treated as a regular word. During training, the examples are padded with this token at the end of the sequence, and during testing, generation of this token signals termination. These ideas have been adopted by most tree decoders (Dong & Lapata, 2016). There are two important downsides of using a padding strategy for topology prediction in trees. First, the size of the tree can grow considerably. While in the sequence framework only one stopping token is needed, a tree with $n$ nodes might need up to $O(n)$ padding nodes to be added. This can have important effects on training speed. The second reason is that a single stopping token selected competitively with other tokens requires one to continually update the associated parameters in response to any changes in the distribution over ordinary tokens so as to maintain topological control.

Based on these observations, we propose an alternative approach to stopping, in which topological decisions are made explicitly (as opposed to implicitly, with stopping tokens). For this, we use the predictive hidden state of the node $h^{(pred)}$ with a projection and sigmoid activation:

$p^a_i = \sigma(u^a \cdot h^{(pred)}_i)$   (5)

The value $p^a_i \in [0,1]$ is interpreted as the probability that node $i$ has children. Analogously, we can obtain a probability of stopping fraternal branch growth after the current node as follows:

$p^f_i = \sigma(u^f \cdot h^{(pred)}_i)$   (6)

[1] Unlike the "ancestral" line, the order within sibling nodes is ambiguous. While in abstract trees it is assumed that there is no such ordering, we assume that for the structures we are interested in learning there is always one: either chronological (the temporal order in which the nodes were generated) or latent (e.g. the grammatical order of the words in a parse tree with respect to their sentence representation).

[2] We assume throughout that these values are given as class indicators $x_i \in \{1, \ldots, N\}$.

[Figure 1: Left: A cell of the doubly-recurrent neural network corresponding to node $i$ with parent $p$ and sibling $s$. Right: Structure-unrolled DRNN network in an encoder-decoder setting. The nodes are labeled in the order in which they are generated. Solid (dashed) lines indicate ancestral (fraternal) connections. Crossed arrows indicate production halted by the topology modules.]
Note that these stopping strategies depart from the usual padding methods in a fundamental property: the decision to stop is made before instead of in conjunction with the label prediction. The rationale behind this is that the label of a node will likely be influenced not only by its context, but also by the type of node (terminal or non-terminal) where it is to be assigned. This is the case in language, for example, where syntactic constraints restrict the type of words that can be found in terminal nodes. For this purpose, we include the topological information as inputs to the label prediction layer. Thus, (4) takes the form

$o_i = \mathrm{softmax}(W h^{(pred)}_i + \alpha_i v^a + \varphi_i v^f)$   (7)

where $\alpha_i, \varphi_i \in \{0,1\}$ are binary variables indicating the topological decisions and $v^a, v^f$ are learnable offset parameters. During training, we use gold-truth values in (7), i.e. $\alpha_i = 1$ if node $i$ has children and $\varphi_i = 1$ if it has a succeeding sibling. During testing, these values are obtained from $p^a, p^f$ by sampling or beam search. A schematic representation of the internal structure of a DRNN cell and the flow of information in a tree are shown in Figure 1.

3.2 TRAINING DRNNS

We train DRNNs with (reverse) back-propagation through structure (BPTS) (Goller & Kuechler, 1996). In the forward pass, node outputs are computed in a top-down fashion on the structure-unrolled version of the network, following the natural[3] dependencies of the tree. We obtain the error signal at the node level from the two types of prediction: label and topology. For the former, we compute the cross-entropy loss of $o_i$ with respect to the true label of the node $x_i$. For the topological values $p^a_i$ and $p^f_i$ we compute binary cross-entropy loss with respect to gold topological indicators $\alpha_i, \varphi_i \in \{0,1\}$. In the backward pass, we proceed in the reverse (bottom-up) direction, feeding into every node the gradients received from child and sibling nodes and computing internally gradients with respect to both topology and label prediction. Further details on the backpropagation flow are provided in the Appendix.

Note that the way BPTS is computed implies an underlying decoupled loss function

$\mathcal{L}(\hat{x}) = \sum_{i \in V} \mathcal{L}_{\mathrm{label}}(x_i, \hat{x}_i) + \mathcal{L}_{\mathrm{topo}}(p_i, \hat{p}_i)$   (8)

The decoupled nature of this loss allows us to weigh these two objectives differently, to emphasize either topology or label prediction accuracy. Investigating the effect of this is left for future work.

[3] The traversal is always breadth-first starting from the root, but the order in which sibling nodes are visited might depend on the specific problem. If the nodes of the tree have an underlying order (such as in dependency parse trees), it is usually desirable to preserve this order.

[Figure 2: Trees generated by the DRNN decoder trained on subsets of size N of the synthetic dataset, for a test example with description "ROOT B W F J V".]

As is common with sequence generation, during training we perform teacher forcing: after predicting the label of a node and its corresponding loss, we replace it with its gold value, so that children and siblings receive the correct label for that node. Analogously, we obtain the probabilities $p^a$ and $p^f$, compute their loss, and replace them with the ground truth variables $\alpha_i, \varphi_i$ for all downstream computations. Addressing this exposure bias by mixing ground truth labels with model predictions during training (Venkatraman et al., 2015) or by incremental hybrid losses (Ranzato et al., 2016) is left as an avenue for future work.
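To summarize Eqs. (1)-(7) operationally, here is a minimal single-step sketch of a DRNN cell in PyTorch. It substitutes GRU modules for the LSTMs used in the paper and picks arbitrary sizes; it is an illustration of the equations, not the authors' implementation.

import torch
import torch.nn as nn

class DRNNCell(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.g_a = nn.GRUCell(emb_dim, hid_dim)  # ancestral recurrence, Eq. (1)
        self.g_f = nn.GRUCell(emb_dim, hid_dim)  # fraternal recurrence, Eq. (2)
        self.U_a = nn.Linear(hid_dim, hid_dim, bias=False)
        self.U_f = nn.Linear(hid_dim, hid_dim, bias=False)
        self.u_a = nn.Linear(hid_dim, 1)  # "has children" gate, Eq. (5)
        self.u_f = nn.Linear(hid_dim, 1)  # "has next sibling" gate, Eq. (6)
        self.W = nn.Linear(hid_dim, vocab_size)
        self.v_a = nn.Parameter(torch.zeros(vocab_size))  # offsets of Eq. (7)
        self.v_f = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, h_a, x_parent, h_f, x_sibling, alpha, phi):
        # alpha, phi: (batch, 1) topological indicators (gold at train time,
        # obtained from p_a, p_f at test time).
        h_a = self.g_a(self.embed(x_parent), h_a)            # Eq. (1)
        h_f = self.g_f(self.embed(x_sibling), h_f)           # Eq. (2)
        h_pred = torch.tanh(self.U_f(h_f) + self.U_a(h_a))   # Eq. (3)
        p_a = torch.sigmoid(self.u_a(h_pred))                # Eq. (5)
        p_f = torch.sigmoid(self.u_f(h_pred))                # Eq. (6)
        logits = self.W(h_pred) + alpha * self.v_a + phi * self.v_f  # Eq. (7)
        return h_a, h_f, p_a, p_f, logits

During decoding, h_a is passed to the node's children and h_f to its next sibling, with p_a and p_f deciding whether either exists at all.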
4 EXPERIMENTS

4.1 SYNTHETIC TREE RECOVERY

In our first set of experiments we evaluate the effectiveness of the proposed architecture at recovering trees from flattened string representations. For this, we first generate a toy dataset consisting of simple labeled trees. To isolate the effect of label content from topological prediction, we take a small vocabulary consisting of the 26 letters of the English alphabet. We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent and the last sibling generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet with a Dirichlet prior. To generate the topology of the tree, we model the probability of a node having children and a next sibling as depending only on its label and the depth of the tree. For each tree we generate a string representation by traversing it in breadth-first preorder, starting from the root. The labels of the nodes are concatenated into a string in the order in which they were visited, resulting in a string of $|T|$ symbols. We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with an 80%/10%/10% split). Further details on the construction of this dataset are provided in the Appendix.

The task consists of learning a mapping from strings to trees, and using this learned mapping to recover the tree structure of the test set examples, given only their flattened representation. To do so, we use an encoder-decoder framework, where the strings are mapped to a fixed-size vector representation using a recurrent neural network. For the decoder, we use a DRNN with LSTM modules, which given the encoded representation generates a tree. We choose hyper-parameters with cross-validation. Full training details are provided in the Appendix.

Measuring performance only in terms of exact recovery would likely yield near-zero accuracies for most trees. Instead, we opt for a finer-grained metric of tree similarity that gives partial credit for correctly predicted subtrees. Treating tree generation as a retrieval problem, we evaluate the quality of the predicted tree in terms of the precision and recall of recovering the nodes and edges present in the gold tree. Thus, we penalize both missing and superfluous components. As baseline, we induce a probabilistic context-free grammar (PCFG) on the full training data and use it to parse the test sentences. Note that unlike the DRNN, this parser has direct access to the sentence representation and thus its task is only to infer the tree structure on top of it, so this is indeed a strong baseline.
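The paper does not spell out the exact matching rule behind the node and edge precision/recall, so the sketch below is one plausible instantiation: multiset overlap of node labels, and of parent-child label pairs, between the predicted and gold trees, using the same nested-dict tree encoding as the decoding sketch above.

```python
from collections import Counter

# Hypothetical retrieval-style scorer for trees {"label": ..., "children": [...]}.

def node_labels(tree):
    return [tree["label"]] + [l for c in tree["children"] for l in node_labels(c)]

def edge_labels(tree):
    out = []
    for child in tree["children"]:
        out.append((tree["label"], child["label"]))
        out.extend(edge_labels(child))
    return out

def precision_recall_f1(pred_items, gold_items):
    overlap = sum((Counter(pred_items) & Counter(gold_items)).values())
    p = overlap / max(len(pred_items), 1)   # penalizes superfluous components
    r = overlap / max(len(gold_items), 1)   # penalizes missing components
    f1 = 2 * p * r / max(p + r, 1e-12)
    return p, r, f1
```

For instance, `precision_recall_f1(edge_labels(pred), edge_labels(gold))` yields the edge-level scores, and `node_labels` the node-level ones.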
Figure 3 shows the results on the test set. Training on the full data yields node and edge retrieval F1-scores of 75% and 71%, respectively, the latter considerably above the baseline.⁴ This 4% gap can be explained by correct nodes being generated in the wrong part of the tree, as in the example in Figure 2. The second plot in Figure 3 shows that although small trees are recovered more accurately, precision decays slowly with tree size, with depth accounting for the largest effect (Figure 4).

Footnote 4: Since the PCFG parser has access to the nodes by construction, node accuracy for the baseline method is irrelevant and thus omitted from the analysis.

[Figure 3: Left: macro F1-score for models trained on randomly sampled subsets of varying size, averaged over 5 repetitions, with the baseline edge score shown for reference. Right: node and edge precision as a function of tree size (number of nodes).]

[Figure 4: Node and edge precision as a function of tree depth (left) and width (right).]

4.2 MAPPING SENTENCES TO FUNCTIONAL PROGRAMS

Tree structures arise naturally in the context of programs. A typical compiler takes human-readable source code (expressed as sequences of characters) and transforms it into an executable abstract syntax tree (AST). Source code, however, is already semi-structured. Mapping natural language sentences directly into executable programs is an open problem, which has received considerable interest in the natural language processing community (Kate et al., 2005; Branavan et al., 2009).

The IFTTT dataset (Quirk et al., 2015) is a simple testbed for language-to-program mapping. It consists of if-this-then-that programs (called recipes) crawled from the IFTTT website⁵, paired with natural language descriptions of their purpose. The recipes consist of a trigger and an action, each defined in terms of a channel (e.g. "Facebook"), a function (e.g. "Post a status update") and potentially arguments and parameters. An example of a recipe and its description are shown in Figure 5. The data is user-generated and extremely noisy, which makes the task significantly challenging.

Footnote 5: www.ifttt.com

[Figure 5: Example recipe from the IFTTT dataset. The description (above) is a user-generated natural language explanation of the if-this-then-that program (below): "Save photos you're tagged in on Facebook to Dropbox". The recipe tree is Root → IF (TRIGGER): Facebook / "You are tagged in a photo"; THEN (ACTION): Dropbox / "Add file from URL", with arguments Filename = "{{CreatedAt}}-{{From}}-{{Caption}}", File URL = {{ImageSource}}, Dropbox Folder Path = {{Facebook}}. Levels: (a) channels, (b) functions, (c) arguments, (d) parameters.]
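For concreteness, the Figure 5 recipe can be written down as exactly the kind of labeled tree the DRNN decoder is trained to emit. The nested-dict encoding below is our own illustration (the dataset stores ASTs, not this format); the labels follow the channel → function → argument → parameter hierarchy and are taken from the figure.

```python
# The Figure 5 recipe as a labeled tree (encoding is ours, content from Fig. 5).
recipe = {
    "label": "Root",
    "children": [
        {"label": "IF (TRIGGER)", "children": [
            {"label": "Facebook", "children": [            # (a) channel
                {"label": "You are tagged in a photo",     # (b) function
                 "children": []},
            ]},
        ]},
        {"label": "THEN (ACTION)", "children": [
            {"label": "Dropbox", "children": [             # (a) channel
                {"label": "Add file from URL", "children": [   # (b) function
                    {"label": "Filename", "children": [        # (c) argument
                        {"label": "{{CreatedAt}}-{{From}}-{{Caption}}",
                         "children": []},                      # (d) parameter
                    ]},
                    {"label": "File URL", "children": [
                        {"label": "{{ImageSource}}", "children": []},
                    ]},
                    {"label": "Dropbox Folder Path", "children": [
                        {"label": "{{Facebook}}", "children": []},
                    ]},
                ]},
            ]},
        ]},
    ],
}
```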
Table 1: Results on the IFTTT task. Left: non-English and unintelligible examples removed (2,262 recipes). Right: examples for which at least 3+ humans agree with gold (758 recipes).

Left (intelligible):
Method       Channel   +Func    F1
retrieval      36.8     25.4   49.0
phrasal        27.8     16.4   39.9
sync           26.7     15.4   37.6
classifier     64.8     47.2   56.5
posclass       67.2     50.4   57.7
SEQ2SEQ        68.8     50.5   60.3
SEQ2TREE       69.6     51.4   60.4
GRU-DRNN       70.1     51.2   62.7
LSTM-DRNN      74.9     54.3   65.2

Right (3+ humans agree with gold):
Method       Channel   +Func    F1
retrieval      43.3     32.3   56.2
phrasal        37.2     23.5   45.5
sync           36.5     23.5   45.5
classifier     79.3     66.2   65.0
posclass       81.4     71.0   66.5
SEQ2SEQ        87.8     75.2   73.7
SEQ2TREE       89.7     78.4   74.2
GRU-DRNN       89.9     77.6   74.1
LSTM-DRNN      90.1     78.2   77.4

We approach this task using an encoder-decoder framework. We use a standard RNN encoder, either an LSTM or a GRU (Cho et al., 2014), to map the sentence to a vector representation, and we use a DRNN decoder to generate the AST representation of the recipe. We use the original data split, which consists of 77,495 training, 5,171 development and 4,294 test examples. For evaluation, we use the same metrics as Quirk et al. (2015), who note that computing exact accuracy on such a noisy dataset is problematic, and instead propose to evaluate the generated AST in terms of F1-score on the set of recovered productions. In addition, they compute accuracy at the channel level (i.e. when both channels are predicted correctly) and at the function level (both channels and both functions predicted correctly).

We compare our methods against the various extraction and phrase-based machine translation baselines of Quirk et al. (2015) and the methods of Dong & Lapata (2016): SEQ2SEQ, a sequence-to-sequence model trained on flattened representations of the AST, and SEQ2TREE, a token-driven hierarchical RNN. Following these two works, we report results on two noise-filtered subsets of the data: one with all non-English and unintelligible recipes removed, and the other with recipes for which at least three humans agreed with the gold AST. The results are shown in Table 1. In both subsets, DRNNs perform on par with or above previous approaches, with LSTM-DRNN achieving significantly better results. The improvement is particularly evident in terms of F1-score, which is the only metric used by previous approaches that measures global tree reconstruction accuracy. To better understand the quality of the predicted trees beyond the function level (i.e. (b) in Figure 5), we computed node accuracy at the argument level. Our best performing model, LSTM-DRNN, achieves a macro F1 score of 51% (0.71 precision, 0.40 recall) over argument nodes, which shows that the model is reasonably successful at predicting structure even beyond depth three. The best performing alternative model, SEQ2TREE, achieves a corresponding F1 score of 46%.

4.3 MACHINE TRANSLATION

In our last set of experiments, we offer a qualitative evaluation of DRNNs in the context of machine translation. Obtaining state-of-the-art results in machine translation requires highly-optimized architectures and large parallel corpora. This is not our goal. Instead, we investigate whether decoding with structure can bring benefits to a task traditionally approached as a sequence-to-sequence problem. For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English↔French sentence pairs (see the Appendix for details), along with dependency parses of the target (English) side.

We train a sequence-to-tree model using an LSTM encoder and a DRNN decoder as in the previous experiments.
A slight modification here is that we distinguish left and right children in the tree, using two symmetric width modules $g^{f_L}, g^{f_R}$ that produce children from the parent outwards. With this, children are lexically ordered, and therefore trees can be easily and unambiguously projected back into sentences. We compare our model against a sequence-to-sequence architecture of similar complexity (in terms of number of parameters) trained on the same data using the optimized OpenNMT library (Klein et al., 2017). For decoding, we use a simple best-of-k sampling scheme for our model, and beam search for the SEQ2SEQ models.

[Figure 6: Relative log-likelihood change (%) under target structural perturbation, for Seq2Seq (Small/Large) and DRNN (Small/Large).]

Table 2: Translations at different resolutions (size constraints imposed during decoding) for two example sentences.

Source: "produit différentes réponses qui changent avec le temps selon nos expériences et nos relations" | "je ne sais jamais quoi dire dans ces cas là"

SEQ2SEQ:
l = 1: "a" | "I"
l = 4: "with the different actions" | "I do"
l = 8: "with the different actions who change with" | "I do not know what to say"

DRNN:
d = 1: "answers" | "know"
d = 2: "different answers change" | "but i do not know"
d = 3: "product the different answers change ." | "but i do not know to say"

First, we analyze the quality of translations as a function of the maximum allowed target sentence "size". The notion of size for a sequence decoder is simply the length, while for the DRNN we use depth instead, so as to tap into the inherent granularity at which sentences can be generated from this architecture. Two such examples are shown in Table 2. Since the DRNN topology has been trained to mimic dependency parses top-down, the decoder tends to first generate the fundamental aspects of the sentence (verb, nouns), leaving less important refinements for deeper structures down in the tree. The sequence decoder, in contrast, is trained for left-to-right sequential generation, and thus produces less informative translations under max-length restrictions.

In our second experiment we investigate the decoders' ability to entertain natural paraphrases of sentences. If we keep the semantic content of a sentence fixed and only change its grammatical structure, it is desirable that the decoder assign nearly the same likelihood to the new sentence. One way to assess this invariance is to compare the relative likelihood that the model assigns to the gold sentence in comparison to its paraphrase. To test this, we take 50 examples from the WMT test split and manually generate paraphrases with various types of structural alterations (see details in the Appendix). For each type of decoder, we measure the relative change (in absolute value) of the log-likelihood resulting from the perturbation. All the models we compare have similar standard deviation (40 ± 20) of log-likelihood scores over these examples, so the relative changes in the log-likelihood remain directly comparable. For each architecture we train two versions of different sizes, where the sizes are balanced in terms of the number of parameters across the architectures.
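The invariance measurement just described admits a simple reading, sketched below: score the gold sentence and its paraphrase under the decoder and report the absolute relative change of the log-likelihood. This is our interpretation of the paper's wording, not a published implementation; `score` stands in for whichever model's log-likelihood function is being probed.

```python
# Hypothetical sketch of the structural-invariance probe of Section 4.3.

def relative_loglik_change(score, gold, paraphrase):
    """Percent change (absolute value) in log-likelihood under a perturbation."""
    lp_gold = score(gold)          # log p(gold sentence) under the decoder
    lp_para = score(paraphrase)    # log p(structural paraphrase)
    return 100.0 * abs(lp_para - lp_gold) / abs(lp_gold)

def mean_change(score, pairs):
    """Average over e.g. the 50 manually paraphrased WMT test examples."""
    return sum(relative_loglik_change(score, g, p) for g, p in pairs) / len(pairs)
```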
The results in Figure 6 show that DRNNs exhibit a significantly lower log-likelihood change, suggesting that, as language models, they are more robust to natural structural variation than their SEQ2SEQ counterparts.

5 DISCUSSION AND FUTURE WORK

We have presented doubly recurrent neural networks, a natural extension of (sequential) recurrent architectures to tree-structured objects. This architecture models the information flow in a tree with two separate recurrent modules: one carrying ancestral information (received from the parent and passed on to offspring) and the other carrying fraternal information (passed from sibling to sibling). The topology of the tree is modeled explicitly and separately from the label prediction, with modules that, given the state of a node, predict whether it has children and siblings.

The experimental results show that the proposed method is able to predict reasonable tree structures from encoded vector representations. Despite the simple structure of the IFTTT trees, the results on that task suggest a promising direction of using DRNNs for generating programs or executable queries from natural language. On the other hand, the results on the toy machine translation task show that even when used to generate sequences, DRNNs exhibit desirable properties, such as invariance over structural modifications and the ability to perform coarse-to-fine decoding. In order to truly use this architecture for machine translation, the approach must be scaled by resorting to batch processing on GPU. This is possible since forward and backward propagation are computed sequentially along tree traversal paths, so that inputs and hidden states of parents and siblings can be grouped into tensors and operated on in batch. We leave this as an avenue for future work.

ACKNOWLEDGEMENTS

DA-M acknowledges support from a CONACYT fellowship. The authors would like to thank the anonymous reviewers for their constructive comments.
rySdVpB4e
6: Marginally above acceptance threshold
This paper proposes a variant of a recurrent neural network that has two orthogonal temporal dimensions and that can be used as a decoder to generate tree structures (including their topology) in an encoder-decoder setting. The architecture is well motivated and I can see several applications (in addition to what's presented in the paper) that need to generate tree structures from unstructured data. One weakness of the paper is the limited scope of the experiments. The IFTTT dataset seems to be an interesting and appropriate application, and there is also a synthetic dataset; however, it would be more interesting to see more natural language applications with syntactic tree structures. Still, I consider the experiments sufficient as a first step to showcase a novel architecture. A strength is that the authors experiment with different design decisions when building the topology-predictor components of the architecture, about when and how to decide to terminate, as opposed to making a single arbitrary choice. I see future applications of this architecture and it seems to open interesting directions for future work, so I suggest its acceptance as a conference contribution.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HkYhZDqxg
ICLR.cc/2017/conference
2017
Tree-structured decoding with doubly-recurrent neural networks
["David Alvarez-Melis", "Tommi S. Jaakkola"]
We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences across the tree, and combines them inside each cell to generate an output. The topology of the tree is explicitly modeled, allowing the network to predict both content and topology of the tree when decoding. That is, given only an encoded vector representation, the network is able to simultaneously generate a tree from it and predict labels for the nodes. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.
["Natural language processing", "Supervised Learning", "Structured prediction"]
ABSTRACT

We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly recurrent neural network model comprised of separate width and depth recurrences that are combined inside each cell (node) to generate an output. The topology of the tree is modeled explicitly together with the content. That is, in response to an encoded vector representation, co-evolving recurrences are used to realize the associated tree and the labels for the nodes in the tree. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.

1 INTRODUCTION

Recurrent neural networks have become extremely popular for modeling structured data. Key to their success is their ability to learn long-range temporal dependencies, their flexibility, and ease of customization. These architectures are naturally suited for modeling sequences since the underlying state evolution resulting from successive operations follows an inherently linear order (Williams & Zipser, 1995; Hochreiter & Schmidhuber, 1997). Indeed, they have been successfully adapted to language modeling (Zaremba et al., 2015), machine translation (Sutskever et al., 2014) and conversational agents (Vinyals & Le, 2015), among other applications.

Although sequences arise frequently in practice, other structures such as trees or graphs do not naturally conform to a linear ordering. For example, natural language sentences or associated parse trees, programs, hierarchical structures in biology, or molecules are not inherently linear structures. While sentences in natural language can be modeled as if they were linear sequences, the underlying process is compositional (Frege, 1892). Models that construct sentences compositionally should derive an advantage from adopting a more appropriate inductive bias.

The flexibility and success of recurrent neural networks in modeling and generating sequential data has prompted efforts to adapt them to non-sequential data too. Recent work has focused on the application of neural architectures to hierarchical structures, albeit in limited ways. Much of this work has assumed that either the full tree structure is given (Socher et al., 2012; Tai et al., 2015) or at least the nodes are (Socher & Lin, 2011; Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In the former scenario, the network aggregates the node information in a manner that is coherent with a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e., sequentially deciding which pairs of nodes to join with an edge until a tree is formed.

The full problem of decoding with structure, i.e., generating a tree-structured object with node labels from a given vector representation, has remained largely unexplored until recently. Recent efforts to adapt RNNs to this context have so far remained relatively close to their sequential counterparts. For example, in order to capture depth and branching in the tree, one can introduce special tokens (Dong & Lapata, 2016) or use alternating RNNs coupled with external classifiers to predict branching (Zhang et al., 2016).

In this work, we propose a novel architecture tailored specifically to tree-structured decoding.
At the heart of our approach is a doubly-recurrent (breadth- and depth-wise recurrent) neural network which separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are updated upon observing node labels. Every node in the tree receives two hidden states, which are then combined and used to predict a label for that node. Besides maintaining separate but simultaneous fraternal and paternal recurrences, the proposed architecture departs from previous methods in that it explicitly models tree topology. Each node in the network has modules that predict, based on the cell state, whether the node is terminal, both in terms of depth and width. Decoupling these decisions from the label prediction allows for a more concise formulation, which does not require artificial tokens to be added to the tree to simulate branching.

We test this novel architecture in various encoder-decoder frameworks, coupling it with sequential encoders to predict tree structure from encoded vector representations of sequences. The experimental results show the effectiveness of this approach at recovering latent structure in flattened string representations of trees (Section 4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section 4.2). In addition, we show that even for sequence-to-sequence tasks such as machine translation, the proposed architecture exhibits desirable properties, such as invariance to structural changes and coarse-to-fine generation (Section 4.3).

To summarize, the main contributions of this paper are as follows:

- We propose a novel neural network architecture specifically tailored to tree-structured decoding, which maintains separate depth and width recurrent states and combines them to obtain hidden states for every node in the tree.
- We equip this novel architecture with a mechanism to predict tree topology explicitly (as opposed to implicitly, by adding nodes with special tokens).
- We show experimentally that the proposed method is capable of recovering trees from encoded representations and that it outperforms state-of-the-art methods in a task consisting of mapping sentences to simple functional programs.

2 RELATED WORK

Recursive Neural Networks. Recursive neural networks (Socher & Lin, 2011; Socher et al., 2012) were proposed to model data with hierarchical structures, such as parsed scenes and natural language sentences. Though they have been most successfully applied to encoding objects when their tree-structured representation is given (Socher et al., 2013), the original formulation by Socher & Lin (2011) also considered using them to predict the structure (edges), albeit for the case where nodes are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive neural networks are not useful for fully generating trees from scratch.

Tree-structured encoders. The Tree-LSTM of Tai et al. (2015) is a generalization of long short-term memory networks (Hochreiter & Schmidhuber, 1997) to tree-structured inputs. Their model constructs a sentence representation bottom-up, obtaining at every step the representation of a node in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long-range dependency modeling provided by LSTMs.
They propose two methods for aggregating the states of the children, depending on the type of underlying tree: N-ary trees or trees with unknown and potentially unbounded branching factor. Tree-LSTMs have shown promising results for compositional encoding of structured data, though by construction they cannot be used for decoding, since they operate on a given tree structure.

Tree-structured decoders. Proposed only very recently, most tree-structured decoders rely on stacked or intertwined RNNs, and use heuristic methods for topological decisions during generation. Closest to our method is the Top-down Tree LSTM of Zhang et al. (2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs which act in alternation (as opposed to simultaneously, in our approach), yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node. In addition, their method provides children with asymmetric parent input: "younger" children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by introducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.

A similar approach is proposed by Dong & Lapata (2016). They propose SEQ2TREE, an encoder-decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical use of an LSTM, similar to Tai et al. (2015), but in the opposite direction: working top-down from the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding triggers the branching out into a lower level in the tree. Similar to our method, they feed nodes with hidden representations of their parent and sibling, but they do so by concatenating both states and running them through a single recurrent unit, as opposed to our method, where these two sources of information are handled separately. A further difference is that our approach does not require artificial nodes with special tokens to be added to the tree, resulting in smaller trees.

Hierarchical Neural Networks for Parsing. Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In this problem, the task is to predict a parse tree over a given sentence. For this, Kiperwasser & Goldberg (2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder. Starting from the leaves (words) they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions.
Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so, similar to recursive neural networks, it cannot be used for fully generative decoding.

3 DOUBLY RECURRENT NEURAL NETWORKS

Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg 2016 do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.

The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.

An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.

With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states, one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling,¹ updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.

Formally, let $T = \{V, E, X\}$ be a connected labeled tree, where $V$ is the set of nodes, $E$ the set of edges and $X$ are the node labels.² Let $g^a$ and $g^f$ be functions which apply one step of the two separate RNNs. For a node $i \in V$ with parent $p(i)$ and previous sibling $s(i)$, the ancestral and fraternal hidden states are updated via

$$h_i^a = g^a(h_{p(i)}^a, x_{p(i)}) \quad (1)$$
$$h_i^f = g^f(h_{s(i)}^f, x_{s(i)}) \quad (2)$$

where $x_{s(i)}, x_{p(i)}$ are the vectors representing the previous sibling's and parent's values, respectively. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:

$$h_i^{(pred)} = \tanh\left(U^f h_i^f + U^a h_i^a\right) \quad (3)$$

where $U^f \in \mathbb{R}^{n \times D_f}$ and $U^a \in \mathbb{R}^{n \times D_a}$ are learnable parameters.
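A compact PyTorch sketch of this cell, under our own assumptions, follows. It pairs Eqs. (1)-(3) with the output and topology heads of Eqs. (4)-(6); we use GRU cells for the two recurrences for brevity, whereas the paper's experiments use LSTM modules, and all module and variable names are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn as nn

class DRNNCell(nn.Module):
    """Sketch of a DRNN cell: Eqs. (1)-(3) plus the heads of Eqs. (4)-(6).

    Initial states (e.g. the encoder output feeding the root's ancestral
    state, or zeros for a first sibling) are omitted for brevity.
    """
    def __init__(self, num_labels, d_a, d_f, n):
        super().__init__()
        self.embed = nn.Embedding(num_labels, n)
        self.g_a = nn.GRUCell(n, d_a)          # ancestral recurrence, Eq. (1)
        self.g_f = nn.GRUCell(n, d_f)          # fraternal recurrence, Eq. (2)
        self.U_a = nn.Linear(d_a, n, bias=False)
        self.U_f = nn.Linear(d_f, n, bias=False)
        self.W = nn.Linear(n, num_labels)      # label logits, Eq. (4)
        self.u_a = nn.Linear(n, 1)             # "has children?" head, Eq. (5)
        self.u_f = nn.Linear(n, 1)             # "next sibling?" head, Eq. (6)

    def forward(self, h_a_parent, x_parent, h_f_prev, x_prev):
        h_a = self.g_a(self.embed(x_parent), h_a_parent)     # Eq. (1)
        h_f = self.g_f(self.embed(x_prev), h_f_prev)         # Eq. (2)
        h_pred = torch.tanh(self.U_f(h_f) + self.U_a(h_a))   # Eq. (3)
        logits = self.W(h_pred)                               # pre-softmax o_i
        p_a = torch.sigmoid(self.u_a(h_pred)).squeeze(-1)     # Eq. (5)
        p_f = torch.sigmoid(self.u_f(h_pred)).squeeze(-1)     # Eq. (6)
        return h_a, h_f, logits, p_a, p_f
```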
rJeK5Tz4x
review
6: Marginally above acceptance threshold
The paper proposes the DRNN as a neural decoder for tree structures. I like the model architecture since it has two clear improvements over traditional approaches: (1) the information flows in two directions, both from the parent and from siblings, which is desirable in tree structures, and (2) the model uses a probability distribution to model the tree boundary (i.e. the last sibling or the leaf). This avoids the use of special ending symbols, which enlarge the tree and put more burden on the parameters (shared with other symbols). The authors test the DRNN on the tasks of recovering synthetic trees and recovering functional programs. The model did better than traditional methods like seq2seq models. I find the synthetic tree recovery task not very satisfying, for two reasons: (1) the surface form itself already contains some of the topological information, which makes the task easier than it should be, and (2) as we can see from Figure 3, when the number of nodes grows (even to a number that is not very large), the performance of the model drops dramatically; I am not sure whether a simple baseline that only captures the topological information in the surface string would be much worse than this. And the DRNN in this case can't show its full potential, since the length of the information flow in the model won't be very long. I think the experiments are interesting, but there are other tasks that are more difficult and where the tree structure information is more important. For example, we have the seq2seq parsing model (Vinyals et al., 2014); is it possible to use the DRNN proposed here on the decoder side? I think tasks like this could show more of the DRNN's potential and would be very convincing evidence that model architectures like this are better than traditional alternatives.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HkYhZDqxg
ICLR.cc/2017/conference
2017
Tree-structured decoding with doubly-recurrent neural networks
["David Alvarez-Melis", "Tommi S. Jaakkola"]
We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences across the tree, and combines them inside each cell to generate an output. The topology of the tree is explicitly modeled, allowing the network to predict both content and topology of the tree when decoding. That is, given only an encoded vector representation, the network is able to simultaneously generate a tree from it and predict labels for the nodes. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.
["Natural language processing", "Supervised Learning", "Structured prediction"]
ABSTRACTWe propose a neural network architecture for generating tree-structured objectsfrom encoded representations. The core of the method is a doubly recurrent neu-ral network model comprised of separate width and depth recurrences that arecombined inside each cell (node) to generate an output. The topology of the treeis modeled explicitly together with the content. That is, in response to an encodedvector representation, co-evolving recurrences are used to realize the associatedtree and the labels for the nodes in the tree. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector,and then generate a tree structure from it. The experimental results show the ef-fectiveness of this architecture at recovering latent tree structure in sequences andat mapping sentences to simple functional programs.1 I NTRODUCTIONRecurrent neural networks have become extremely popular for modeling structured data. Key totheir success is their ability to learn long-range temporal dependencies, their flexibility, and ease ofcustomization. These architectures are naturally suited for modeling sequences since the underlyingstate evolution resulting from successive operations follows an inherently linear order (Williams &Zipser, 1995; Hochreiter & Schmidhuber, 1997). Indeed, they have been successfully adapted tolanguage modeling (Zaremba et al., 2015), machine translation (Sutskever et al., 2014) and conver-sational agents (Vinyals & Le, 2015), among other applications.Although sequences arise frequently in practice, other structures such as trees or graphs do notnaturally conform to a linear ordering. For example, natural language sentences or associated parsetrees, programs, hierarchical structures in biology, or molecules are not inherently linear structures.While sentences in natural language can be modeled as if they were linear sequences, the underlyingprocess is compositional (Frege, 1892). Models that construct sentences compositionally shouldderive an advantage from adopting a more appropriate inductive bias.The flexibility and success of recurrent neural networks in modeling and generating sequential datahas prompted efforts to adapt them to non-sequential data too. Recent work has focused on theapplication of neural architectures to hierarchical structures, albeit in limited ways. Much of thiswork has assumed that either the full tree structure is given (Socher et al., 2012; Tai et al., 2015) or atleast the nodes are (Socher & Lin, 2011; Chen & Manning, 2014; Kiperwasser & Goldberg, 2016).In the former scenario, the network aggregates the node information in a manner that is coherentwith a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e.,sequentially deciding which pairs of nodes to join with an edge until a tree is formed.The full problem of decoding with structure , i.e., generating a tree-structured object with node labelsfrom a given vector representation, has remained largely unexplored until recently. Recent efforts toadapt RNNs to this context have so far remained relatively close to their sequential counterparts. Forexample, in order to capture depth and branching in the tree, one can introduce special tokens (Dong& Lapata, 2016) or use alternating RNNs coupled with external classifiers to predict branching(Zhang et al., 2016).1Published as a conference paper at ICLR 2017In this work, we propose a novel architecture tailored specifically to tree-structured decoding. 
At the heart of our approach is a doubly-recurrent (breadth and depth-wise recurrent) neural network which separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are updated upon observing node labels. Every node in the tree receives two hidden states, which are then combined and used to predict a label for that node. Besides maintaining separate but simultaneous fraternal and paternal recurrences, the proposed architecture departs from previous methods in that it explicitly models tree topology. Each node in the network has modules that predict, based on the cell state, whether the node is terminal, both in terms of depth and width. Decoupling these decisions from the label prediction allows for a more concise formulation, which does not require artificial tokens to be added to the tree to simulate branching.
We test this novel architecture in various encoder-decoder frameworks, coupling it with sequential encoders to predict tree structure from encoded vector representations of sequences. The experimental results show the effectiveness of this approach at recovering latent structure in flattened string representations of trees (Section 4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section 4.2). In addition, we show that even for sequence-to-sequence tasks such as machine translation, the proposed architecture exhibits desirable properties, such as invariance to structural changes and coarse-to-fine generation (Section 4.3).
To summarize, the main contributions of this paper are as follows:
- We propose a novel neural network architecture specifically tailored to tree-structured decoding, which maintains separate depth and width recurrent states and combines them to obtain hidden states for every node in the tree.
- We equip this novel architecture with a mechanism to predict tree topology explicitly (as opposed to implicitly by adding nodes with special tokens).
- We show experimentally that the proposed method is capable of recovering trees from encoded representations and that it outperforms state-of-the-art methods in a task consisting of mapping sentences to simple functional programs.

2 RELATED WORK
Recursive Neural Networks. Recursive neural networks (Socher & Lin, 2011; Socher et al., 2012) were proposed to model data with hierarchical structures, such as parsed scenes and natural language sentences. Though they have been most successfully applied to encoding objects when their tree-structured representation is given (Socher et al., 2013), the original formulation by Socher & Lin (2011) also considered using them to predict the structure (edges), albeit for the case where nodes are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive neural networks are not useful for fully generating trees from scratch.
Tree-structured encoders. The Tree-LSTM of Tai et al. (2015) is a generalization of long short-term memory networks (Hochreiter & Schmidhuber, 1997) to tree-structured inputs. Their model constructs a sentence representation bottom-up, obtaining at every step the representation of a node in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long-range dependency modeling provided by LSTMs.
They propose two methods for aggregating the states of the children, depending on the type of underlying tree: N-ary trees or trees with unknown and potentially unbounded branching factor. Tree-LSTMs have shown promising results for compositional encoding of structured data, though by construction they cannot be used for decoding, since they operate on a given tree structure.
Tree-structured decoders. Proposed only very recently, most tree-structured decoders rely on stacked or intertwined RNNs, and use heuristic methods for topological decisions during generation. Closest to our method is the Top-down Tree LSTM of Zhang et al. (2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs, which act in alternation (as opposed to simultaneously in our approach), yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node. In addition, their method provides children with asymmetric parent input: "younger" children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by introducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.
A similar approach is proposed by Dong & Lapata (2016). They propose SEQ2TREE, an encoder-decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical use of an LSTM, similar to Tai et al. (2015), but in the opposite direction: working top-down from the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding trigger the branching out into a lower level in the tree. Similar to our method, they feed nodes with hidden representations of their parent and sibling, but they do so by concatenating both states and running them through a single recurrent unit, as opposed to our method, where these two sources of information are handled separately. A further difference is that our approach does not require artificial nodes with special tokens to be added to the tree, resulting in smaller trees.
Hierarchical Neural Networks for Parsing. Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In this problem, the task is to predict a parse tree over a given sentence. For this, Kiperwasser & Goldberg (2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder. Starting from the leaves (words) they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions.
Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so, similar to recursive neural networks, it cannot be used for a fully generative decoding.

3 DOUBLY RECURRENT NEURAL NETWORKS
Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg 2016 do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.
The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.
An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.
With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states, one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling [1], updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.
Formally, let T = {V, E, X} be a connected labeled tree, where V is the set of nodes, E the set of edges and X are node labels [2]. Let g^a and g^f be functions which apply one step of the two separate RNNs. For a node i ∈ V with parent p(i) and previous sibling s(i), the ancestral and fraternal hidden states are updated via

    h^a_i = g^a(h^a_{p(i)}, x_{p(i)})    (1)
    h^f_i = g^f(h^f_{s(i)}, x_{s(i)})    (2)

where x_{s(i)}, x_{p(i)} are the vectors representing the previous sibling's and parent's values, respectively. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:

    h^{(pred)}_i = tanh(U^f h^f_i + U^a h^a_i)    (3)

where U^f ∈ R^{n×D_f} and U^a ∈ R^{n×D_a} are learnable parameters. This state contains combined information of the node's neighborhood in the tree, and is used to predict a label for it. In its simplest form, the network could compute the output of node i by sampling from the distribution

    o_i = softmax(W h^{(pred)}_i)    (4)

In the next section, we propose a slight modification to (4) whereby topological information is included in the computation of cell outputs.

[1] Unlike the "ancestral" line, the order within sibling nodes is ambiguous. While in abstract trees it is assumed that there is no such ordering, we assume that for the structures we are interested in learning there is always one: either chronological (the temporal order in which the nodes were generated) or latent (e.g. the grammatical order of the words in a parse tree with respect to their sentence representation).
[2] We assume throughout that these values are given as class indicators x_i ∈ {1, ..., N}.
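For concreteness, here is a minimal numpy sketch of one DRNN cell implementing Eqs. (1)-(4). It is not the paper's implementation: the experiments use LSTM modules for g^a and g^f, whereas this sketch substitutes single vanilla-RNN steps, and all parameters are randomly initialized stand-ins for learned weights.

```python
# Minimal sketch of a DRNN cell (Eqs. 1-4); vanilla-RNN modules and random
# parameters stand in for the learned LSTM modules of the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d_a, d_f, vocab = 8, 8, 8, 26     # illustrative sizes; labels are one-hot

Wa, Va = rng.normal(size=(d_a, d_a)), rng.normal(size=(d_a, vocab))
Wf, Vf = rng.normal(size=(d_f, d_f)), rng.normal(size=(d_f, vocab))
Ua, Uf = rng.normal(size=(n, d_a)), rng.normal(size=(n, d_f))
W = rng.normal(size=(vocab, n))

def g_a(h_parent, x_parent):
    # Eq. (1): ancestral recurrence, one vanilla-RNN step.
    return np.tanh(Wa @ h_parent + Va @ x_parent)

def g_f(h_sibling, x_sibling):
    # Eq. (2): fraternal recurrence, one vanilla-RNN step.
    return np.tanh(Wf @ h_sibling + Vf @ x_sibling)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def drnn_cell(h_a_parent, x_parent, h_f_sibling, x_sibling):
    h_a = g_a(h_a_parent, x_parent)        # passed on to this node's children
    h_f = g_f(h_f_sibling, x_sibling)      # passed on to the next sibling
    h_pred = np.tanh(Uf @ h_f + Ua @ h_a)  # Eq. (3): predictive hidden state
    o = softmax(W @ h_pred)                # Eq. (4): label distribution
    return h_a, h_f, h_pred, o

x0 = np.zeros(vocab); x0[0] = 1.0          # one-hot label of the parent node
h_a, h_f, h_pred, o = drnn_cell(np.zeros(d_a), x0, np.zeros(d_f), x0)
```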
After the node's output symbol x_i has been obtained by sampling from o_i, the cell passes h^a_i to all its children and h^f_i to the next sibling (if any), enabling them to apply Eqs. (1) and (2) to realize their states. This procedure continues recursively, until termination conditions (explained in the next section) cause it to halt.

3.1 TOPOLOGICAL PREDICTION
As mentioned before, the central issue with free-form tree construction is to predict the topology of the tree. When constructing the tree top-down, for each node we need to decide: (i) whether it is a leaf node (and thus it should not produce offspring) and (ii) whether there should be additional siblings produced after it. Answering these two questions for every node allows us to construct a tree from scratch and eventually stop growing it.
Sequence decoders typically rely on special tokens to terminate generation (Sutskever et al., 2014). The token is added to the vocabulary and treated as a regular word. During training, the examples are padded with this token at the end of the sequence, and during testing, generation of this token signals termination. These ideas have been adopted by most tree decoders (Dong & Lapata, 2016). There are two important downsides of using a padding strategy for topology prediction in trees. First, the size of the tree can grow considerably. While in the sequence framework only one stopping token is needed, a tree with n nodes might need up to O(n) padding nodes to be added. This can have important effects on training speed. The second reason is that a single stopping token selected competitively with other tokens requires one to continually update the associated parameters in response to any changes in the distribution over ordinary tokens so as to maintain topological control.
Based on these observations, we propose an alternative approach to stopping, in which topological decisions are made explicitly (as opposed to implicitly, with stopping tokens). For this, we use the predictive hidden state of the node h^{(pred)} with a projection and sigmoid activation:

    p^a_i = σ(u^a · h^{(pred)}_i)    (5)

The value p^a_i ∈ [0, 1] is interpreted as the probability that node i has children. Analogously, we can obtain a probability of stopping fraternal branch growth after the current node as follows:

    p^f_i = σ(u^f · h^{(pred)}_i)    (6)

Figure 1: Left: A cell of the doubly-recurrent neural network corresponding to node i with parent p and sibling s. Right: Structure-unrolled DRNN network in an encoder-decoder setting. The nodes are labeled in the order in which they are generated. Solid (dashed) lines indicate ancestral (fraternal) connections. Crossed arrows indicate production halted by the topology modules.
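For concreteness, a minimal sketch of the topology modules in Eqs. (5)-(6); u^a, u^f and h_pred are illustrative random stand-ins for the learned projection vectors and the predictive state computed above, not values from the paper.

```python
# Minimal sketch of the topology modules (Eqs. 5-6).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n = 8
u_a, u_f = rng.normal(size=n), rng.normal(size=n)
h_pred = rng.normal(size=n)

p_a = sigmoid(u_a @ h_pred)   # Eq. (5): probability that the node has children
p_f = sigmoid(u_f @ h_pred)   # Eq. (6): fraternal stopping probability

# At test time the binary topology decisions can be sampled:
has_children = rng.random() < p_a
has_next_sibling = rng.random() < p_f
```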
Note that these stopping strategies depart from the usual padding methods in a fundamental property: the decision to stop is made before instead of in conjunction with the label prediction. The rationale behind this is that the label of a node will likely be influenced not only by its context, but also by the type of node (terminal or non-terminal) where it is to be assigned. This is the case in language, for example, where syntactic constraints restrict the type of words that can be found in terminal nodes. For this purpose, we include the topological information as inputs to the label prediction layer. Thus, (4) takes the form

    o_i = softmax(W h^{(pred)}_i + α_i v^a + φ_i v^f)    (7)

where α_i, φ_i ∈ {0, 1} are binary variables indicating the topological decisions and v^a, v^f are learnable offset parameters. During training, we use gold-truth values in (7), i.e. α_i = 1 if node i has children and φ_i = 1 if it has a succeeding sibling. During testing, these values are obtained from p^a, p^f by sampling or beam-search. A schematic representation of the internal structure of a DRNN cell and the flow of information in a tree are shown in Figure 1.

3.2 TRAINING DRNNS
We train DRNNs with (reverse) back-propagation through structure (BPTS) (Goller & Kuechler, 1996). In the forward pass, node outputs are computed in a top-down fashion on the structure-unrolled version of the network, following the natural [3] dependencies of the tree. We obtain the error signal at the node level from the two types of prediction: label and topology. For the former, we compute the cross-entropy loss of o_i with respect to the true label of the node x_i. For the topological values p^a_i and p^f_i we compute a binary cross-entropy loss with respect to the gold topological indicators α_i, φ_i ∈ {0, 1}. In the backward pass, we proceed in the reverse (bottom-up) direction, feeding into every node the gradients received from child and sibling nodes and computing internally the gradients with respect to both topology and label prediction. Further details on the backpropagation flow are provided in the Appendix.
Note that the way BPTS is computed implies an underlying decoupled loss function

    L(x̂) = Σ_{i ∈ V} [ L_label(x_i, x̂_i) + L_topo(p_i, p̂_i) ]    (8)

The decoupled nature of this loss allows us to weigh these two objectives differently, to emphasize either topology or label prediction accuracy. Investigating the effect of this is left for future work.

[3] The traversal is always breadth-first starting from the root, but the order in which sibling nodes are visited might depend on the specific problem. If the nodes of the tree have an underlying order (such as in dependency parse trees), it is usually desirable to preserve this order.

Figure 2: Trees generated by the DRNN decoder trained on subsets of size N of the synthetic dataset, for a test example with description "ROOT B W F J V".

As is common with sequence generation, during training we perform teacher forcing: after predicting the label of a node and its corresponding loss, we replace it with its gold value, so that children and siblings receive the correct label for that node. Analogously, we obtain the probabilities p^a and p^f, compute their loss, and replace them with the ground truth variables α_i, φ_i for all downstream computations.
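For concreteness, a minimal sketch of the topology-conditioned output of Eq. (7) and one per-node term of the decoupled loss in Eq. (8); all tensors and probabilities are illustrative stand-ins, not learned quantities from the paper.

```python
# Minimal sketch of Eq. (7) and one node's contribution to the loss (Eq. 8),
# under teacher forcing (gold topological indicators alpha, phi).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
n, vocab = 8, 26
W = rng.normal(size=(vocab, n))
v_a, v_f = rng.normal(size=vocab), rng.normal(size=vocab)
h_pred = rng.normal(size=n)

alpha, phi = 1, 0                                     # gold topology indicators
o = softmax(W @ h_pred + alpha * v_a + phi * v_f)     # Eq. (7)

x_true = 3                                            # gold label index
p_a, p_f = 0.9, 0.2                                   # outputs of Eqs. (5)-(6)

label_loss = -np.log(o[x_true])                       # cross-entropy on labels
topo_loss = -(alpha * np.log(p_a) + (1 - alpha) * np.log(1 - p_a)) \
            - (phi * np.log(p_f) + (1 - phi) * np.log(1 - p_f))
node_loss = label_loss + topo_loss                    # one term of Eq. (8)
```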
Addressing this exposure bias by mixing ground truth labels with model predictions during training (Venkatraman et al., 2015) or by incremental hybrid losses (Ranzato et al., 2016) is left as an avenue for future work.

4 EXPERIMENTS
4.1 SYNTHETIC TREE RECOVERY
In our first set of experiments we evaluate the effectiveness of the proposed architecture at recovering trees from flattened string representations. For this, we first generate a toy dataset consisting of simple labeled trees. To isolate the effect of label content from topological prediction, we take a small vocabulary consisting of the 26 letters of the English alphabet. We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent and the last sibling generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet with a Dirichlet prior. To generate the topology of the tree, we model the probability of a node having children and a next sibling as depending only on its label and the depth of the tree. For each tree we generate a string representation by traversing it in breadth-first preorder, starting from the root. The labels of the nodes are concatenated into a string in the order in which they were visited, resulting in a string of |T| symbols. We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with an 80%, 10%, 10% split). Further details on the construction of this dataset are provided in the Appendix.
The task consists of learning a mapping from strings to trees, and using this learned mapping to recover the tree structure of the test set examples, given only their flattened representation. To do so, we use an encoder-decoder framework, where the strings are mapped to a fixed-size vector representation using a recurrent neural network. For the decoder, we use a DRNN with LSTM modules, which given the encoded representation generates a tree. We choose hyper-parameters with cross-validation. Full training details are provided in the Appendix.
Measuring performance only in terms of exact recovery would likely yield near-zero accuracies for most trees. Instead, we opt for a finer-grained metric of tree similarity that gives partial credit for correctly predicted subtrees. Treating tree generation as a retrieval problem, we evaluate the quality of the predicted tree in terms of the precision and recall of recovering nodes and edges present in the gold tree. Thus, we penalize both missing and superfluous components. As a baseline, we induce a probabilistic context-free grammar (PCFG) on the full training data and use it to parse the test sentences. Note that unlike the DRNN, this parser has direct access to the sentence representation and thus its task is only to infer the tree structure on top of it, so this is indeed a strong baseline.
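For concreteness, a minimal sketch of this retrieval-style evaluation. It is not taken from the paper's code; the multiset matching and the (label, children) tree encoding are one natural reading of the metric described above.

```python
# Sketch of node/edge precision, recall and F1 between a predicted and a gold
# tree, treating generation as retrieval of labels and labeled edges.
from collections import Counter

def prf(pred, gold):
    overlap = sum((Counter(pred) & Counter(gold)).values())
    p = overlap / max(len(pred), 1)
    r = overlap / max(len(gold), 1)
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# Trees as (label, [children]) tuples -- an illustrative representation.
def nodes(t):
    label, children = t
    return [label] + [n for c in children for n in nodes(c)]

def edges(t):
    label, children = t
    out = [(label, c[0]) for c in children]
    return out + [e for c in children for e in edges(c)]

gold = ("R", [("B", [("W", [])]), ("F", [])])
pred = ("R", [("B", []), ("F", [("W", [])])])
print(prf(nodes(pred), nodes(gold)))   # node-level scores
print(prf(edges(pred), edges(gold)))   # edge-level scores
```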
Figure 3 shows the results on the test set. Training on the full data yields node and edge retrieval F1-scores of 75% and 71%, respectively, the latter considerably above the baseline [4]. This 4% gap can be explained by correct nodes being generated in the wrong part of the tree, as in the example in Figure 2. The second plot in Figure 3 shows that although small trees are recovered more accurately, precision decays slowly with tree size, with depth accounting for the largest effect (Figure 4).

[4] Since the PCFG parser has access to the nodes by construction, node accuracy for the baseline method is irrelevant and thus omitted from the analysis.

Figure 3: Left: F1-score for models trained on randomly sampled subsets of varying size, averaged over 5 repetitions. Right: node (first column) and edge (second) precision as a function of tree size.

Figure 4: Node and edge precision as a function of tree depth (left figure) and width (right).

4.2 MAPPING SENTENCES TO FUNCTIONAL PROGRAMS
Tree structures arise naturally in the context of programs. A typical compiler takes human-readable source code (expressed as sequences of characters) and transforms it into an executable abstract syntax tree (AST). Source code, however, is already semi-structured. Mapping natural language sentences directly into executable programs is an open problem, which has received considerable interest in the natural language processing community (Kate et al., 2005; Branavan et al., 2009).
The IFTTT dataset (Quirk et al., 2015) is a simple testbed for language-to-program mapping. It consists of if-this-then-that programs (called recipes) crawled from the IFTTT website (www.ifttt.com), paired with natural language descriptions of their purpose. The recipes consist of a trigger and an action, each defined in terms of a channel (e.g. "Facebook"), a function (e.g. "Post a status update") and potentially arguments and parameters. An example of a recipe and its description are shown in Figure 5. The data is user-generated and extremely noisy, which makes the task significantly challenging.

Figure 5: Example recipe from the IFTTT dataset, "Save photos you're tagged in on Facebook to Dropbox". The description (above) is a user-generated natural language explanation of the if-this-then-that program (below), with levels (a) channels, (b) functions, (c) arguments and (d) parameters.
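For concreteness, the recipe in Figure 5 could be rendered as a labeled tree in the same (label, children) convention used in the sketch above. This rendering is purely illustrative and is not the dataset's actual serialization format.

```python
# Hypothetical rendering of the Figure 5 recipe as a labeled tree.
recipe = ("Root", [
    ("IF (Trigger)", [
        ("Facebook", [
            ("You are tagged in a photo", []),
        ]),
    ]),
    ("THEN (Action)", [
        ("Dropbox", [
            ("Add file from URL", [
                ("Filename", [('"{{CreatedAt}}-{{From}}-{{Caption}}"', [])]),
                ("File URL", [("{{ImageSource}}", [])]),
                ("Dropbox Folder Path", [("{{Facebook}}", [])]),
            ]),
        ]),
    ]),
])
```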
Table 1: Results on the IFTTT task. Left: non-English and unintelligible examples removed (2,262 recipes). Right: examples for which at least three humans agree with gold (758 recipes).

    Method       Channel   +Func    F1    |   Method       Channel   +Func    F1
    retrieval      36.8     25.4   49.0   |   retrieval      43.3     32.3   56.2
    phrasal        27.8     16.4   39.9   |   phrasal        37.2     23.5   45.5
    sync           26.7     15.4   37.6   |   sync           36.5     23.5   45.5
    classifier     64.8     47.2   56.5   |   classifier     79.3     66.2   65.0
    posclass       67.2     50.4   57.7   |   posclass       81.4     71.0   66.5
    SEQ2SEQ        68.8     50.5   60.3   |   SEQ2SEQ        87.8     75.2   73.7
    SEQ2TREE       69.6     51.4   60.4   |   SEQ2TREE       89.7     78.4   74.2
    GRU-DRNN       70.1     51.2   62.7   |   GRU-DRNN       89.9     77.6   74.1
    LSTM-DRNN      74.9     54.3   65.2   |   LSTM-DRNN      90.1     78.2   77.4

We approach this task using an encoder-decoder framework. We use a standard RNN encoder, either an LSTM or a GRU (Cho et al., 2014), to map the sentence to a vector representation, and we use a DRNN decoder to generate the AST representation of the recipe. We use the original data split, which consists of 77,495 training, 5,171 development and 4,294 test examples. For evaluation, we use the same metrics as Quirk et al. (2015), who note that computing exact accuracy on such a noisy dataset is problematic, and instead propose to evaluate the generated AST in terms of F1-score on the set of recovered productions. In addition, they compute accuracy at the channel level (i.e. when both channels are predicted correctly) and at the function level (both channels and both functions predicted correctly).
We compare our method against the various extraction and phrase-based machine translation baselines of Quirk et al. (2015) and the methods of Dong & Lapata (2016): SEQ2SEQ, a sequence-to-sequence model trained on flattened representations of the AST, and SEQ2TREE, a token-driven hierarchical RNN. Following these two works, we report results on two noise-filtered subsets of the data: one with all non-English and unintelligible recipes removed and the other one with recipes for which at least three humans agreed with the gold AST. The results are shown in Table 1. In both subsets, DRNNs perform on par with or above previous approaches, with LSTM-DRNN achieving significantly better results. The improvement is particularly evident in terms of F1-score, which is the only metric used by previous approaches that measures global tree reconstruction accuracy. To better understand the quality of the predicted trees beyond the function level (i.e. (b) in Figure 5), we computed node accuracy at the argument level. Our best performing model, LSTM-DRNN, achieves a macro F1 score of 51% (0.71 precision, 0.40 recall) over argument nodes, which shows that the model is reasonably successful at predicting structure even beyond depth three. The best performing alternative model, SEQ2TREE, achieves a corresponding F1 score of 46%.

4.3 MACHINE TRANSLATION
In our last set of experiments, we offer a qualitative evaluation of DRNNs in the context of machine translation. Obtaining state-of-the-art results in machine translation requires highly-optimized architectures and large parallel corpora. This is not our goal. Instead, we investigate whether decoding with structure can bring benefits to a task traditionally approached as a sequence-to-sequence problem. For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English-French sentence pairs (see the Appendix for details) along with dependency parses of the target (English) side.
We train a sequence-to-tree model using an LSTM encoder and a DRNN decoder as in the previous experiments.
A slight modification here is that we distinguish left and right children in the tree, using two symmetric width-modules g^{fL}, g^{fR} that produce children from the parent outwards. With this, children are lexically ordered, and therefore trees can be easily and unambiguously projected back into sentences. We compare our model against a sequence-to-sequence architecture of similar complexity (in terms of number of parameters) trained on the same data using the optimized OpenNMT library (Klein et al., 2017). For decoding, we use a simple best-of-k sampling scheme for our model, and beam search for the SEQ2SEQ models.

Figure 6: Likelihood change under target structural perturbation, for small and large SEQ2SEQ and DRNN models.

Table 2: Translations at different resolutions (size constraints imposed during decoding) for two example sentences.

    Source         "produit différentes réponses qui changent      "je ne sais jamais quoi dire
                    avec le temps selon nos expériences             dans ces cas là"
                    et nos relations"
    SEQ2SEQ
      l = 1         a                                               I
      l = 4         with the different actions                      I do
      l = 8         with the different actions who change with      I do not know what to say
    DRNN
      d = 1         answers                                         know
      d = 2         different answers change                        but i do not know
      d = 3         product the different answers change .          but i do not know to say

First, we analyze the quality of translations as a function of the maximum allowed target sentence "size". The notion of size for a sequence decoder is simply the length, while for the DRNN we use depth instead, so as to tap into the inherent granularity at which sentences can be generated from this architecture. Two such examples are shown in Table 2. Since the DRNN topology has been trained to mimic dependency parses top-down, the decoder tends to first generate the fundamental aspects of the sentence (verb, nouns), leaving less important refinements for deeper structures down in the tree. The sequence decoder, in contrast, is trained for left-to-right sequential generation, and thus produces less informative translations under max-length restrictions.
In our second experiment we investigate the decoders' ability to entertain natural paraphrases of sentences. If we keep the semantic content of a sentence fixed and only change its grammatical structure, it is desirable that the decoder would assign nearly the same likelihood to the new sentence. One way to assess this invariance is to compare the relative likelihood that the model assigns to the gold sentence in comparison to its paraphrase. To test this, we take 50 examples from the WMT test split and manually generate paraphrases with various types of structural alterations (see details in the Appendix). For each type of decoder, we measure the relative change (in absolute value) of the log-likelihood resulting from the perturbation. All the models we compare have a similar standard deviation (40 ± 20) of log-likelihood scores over these examples, so the relative changes in the log-likelihood remain directly comparable. For each architecture we train two versions of different sizes, where the sizes are balanced in terms of the number of parameters across the architectures.
The results in Figure 6 show that DRNNs exhibit significantly lower log-likelihood change, suggesting that, as language models, they are more robust to natural structural variation than their SEQ2SEQ counterparts.

5 DISCUSSION AND FUTURE WORK
We have presented doubly recurrent neural networks, a natural extension of (sequential) recurrent architectures to tree-structured objects. This architecture models the information flow in a tree with two separate recurrent modules: one carrying ancestral information (received from the parent and passed on to offspring) and the other carrying fraternal information (passed from sibling to sibling). The topology of the tree is modeled explicitly and separately from the label prediction, with modules that, given the state of a node, predict whether it has children and siblings.
The experimental results show that the proposed method is able to predict reasonable tree structures from encoded vector representations. Despite the simple structure of the IFTTT trees, the results on that task suggest a promising direction of using DRNNs for generating programs or executable queries from natural language. On the other hand, the results on the toy machine translation task show that even when used to generate sequences, DRNNs exhibit desirable properties, such as invariance over structural modifications and the ability to perform coarse-to-fine decoding. In order to truly use this architecture for machine translation, the approach must be scaled by resorting to batch processing on GPUs. This is possible since forward and backward propagation are computed sequentially along tree traversal paths, so that inputs and hidden states of parents and siblings can be grouped into tensors and operated on in batch. We leave this as an avenue for future work.

ACKNOWLEDGEMENTS
DA-M acknowledges support from a CONACYT fellowship. The authors would like to thank the anonymous reviewers for their constructive comments.
rJpv5lzEx
Accept
7: Good paper, accept
The authors' response answered my questions well. Thanks. Evaluation not changed. ### This paper proposes a neural model for generating tree-structured output from scratch. The model 1) separates the recurrence between depths and siblings; 2) separates topology and label generation, and outperforms previous methods on a benchmark IFTTT dataset. Compared to previous tree-decoding methods, the model avoids manually annotating subtrees with special tokens, and thus is a very good alternative for such problems. The paper does solid experiments on one synthetic dataset, and outperforms alternative methods on one real-world IFTTT dataset. There are a couple of interesting results in the paper that I believe are worth further investigation. Firstly, on the synthetic dataset, the precision drops rapidly with the number of nodes. Is it because the vector representation of the sequential encoder fails to provide sufficient information about long sequences, such that the tree decoder cannot do a good job? Or is it because such a tree decoder is not tolerant of long sequence inputs, i.e., large tree structures? I believe it is important to understand this before a better model can be developed. For example, if it is the fault of the encoder, maybe an attention layer can be added, as in a seq-to-seq model, to preserve more information about the input sequence. Moreover, besides only showing how the precision changes with the number of nodes in the tree, it might be interesting to investigate how it changes with 1) tree depth; 2) tree width; 3) symmetry; etc. Moreover, as greedy search is used in decoding, it might be interesting to see how much it helps, if it does, to use beam search in tree decoding. On the IFTTT dataset, listing more statistics about the dataset might be helpful for better understanding the difficulty of this task. How deep are the trees? How large are the vocabularies on both the language and program sides? The paper is well written, except for minor typos as mentioned in my pre-review questions. In general, I believe this is a solid paper, and more can be explored in this direction. So I tend to accept it.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJ8Je4clg
ICLR.cc/2017/conference
2017
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
["Frank S.He", "Yang Liu", "Alexander G. Schwing", "Jian Peng"]
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
["Reinforcement Learning", "Optimization", "Games"]
ABSTRACT
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.

1 INTRODUCTION
The recent advances of supervised deep learning techniques (LeCun et al., 2015) in computer vision, speech recognition and natural language processing have tremendously improved the performance on challenging tasks, including image processing (Krizhevsky et al., 2012), speech-based translation (Sutskever et al., 2014) and language modeling (Hinton et al., 2012). The core idea of deep learning is to use artificial neural networks to model complex hierarchical or compositional data abstractions and representations from raw input data (Bengio et al., 2013). However, we are still far from building intelligent solutions for many real-world challenges, such as autonomous driving, human-computer interaction and automated decision making, in which software agents need to consider interactions with a dynamic environment and take actions towards goals. Reinforcement learning (Bertsekas & Tsitsiklis, 1996; Powell, 2011; Sutton & Barto, 1998; Kaelbling et al., 1996) studies these problems and algorithms which learn policies to make decisions so as to maximize a reward signal from the environment. One of the promising algorithms is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Deep reinforcement learning with neural function approximation (Tsitsiklis & Roy, 1997; Riedmiller, 2005; Mnih et al., 2013; 2015), possibly a first attempt to combine deep learning and reinforcement learning, has been proved to be effective on a few problems which classical AI approaches were unable to solve. Notable examples of deep reinforcement learning include human-level game playing (Mnih et al., 2015) and AlphaGo (Silver et al., 2016).
Despite these successes, its high demand for computational resources makes deep reinforcement learning not yet applicable to many real-world problems. For example, even for an Atari game, the deep Q-learning algorithm (also called deep Q-networks, abbreviated as DQN) needs to play up to hundreds of millions of game frames to achieve a reasonable performance (van Hasselt et al., 2015). AlphaGo trained its model using a database of game records of advanced players and, in addition, about 30 million self-played game moves (Silver et al., 2016). The sheer amount of required computational resources of current deep reinforcement learning algorithms is a major bottleneck for their applicability to real-world tasks. Moreover, in many tasks, the reward signal is sparse and delayed, thus making the convergence of learning even slower.
Here we propose optimality tightening, a new technique to accelerate deep Q-learning by fast reward propagation. While current deep Q-learning algorithms rely on a set of experience replays, they only consider a single forward step for the Bellman optimality error minimization, which becomes highly inefficient when the reward signal is sparse and delayed.
To better exploit long-term high-reward strategies from past experience, we design a new algorithm to capture rewards from both forward and backward steps of the replays via a constrained optimization approach. This encourages faster reward propagation which reduces the training time of deep Q-learning.
We evaluate our proposed approach using the Arcade Learning Environment (Bellemare et al., 2013) and show that our new strategy outperforms competing techniques in both accuracy and training time on 30 out of 49 games despite being trained with significantly fewer data frames.

2 RELATED WORK
There have been a number of approaches improving the stability, convergence and runtime of deep reinforcement learning since deep Q-learning, also known as deep Q-network (DQN), was first proposed (Mnih et al., 2013; 2015). DQN combined techniques such as deep learning, reinforcement learning and experience replays (Lin, 1992; Wawrzynski, 2009).
Nonetheless, the original DQN algorithm required millions of training steps to achieve human-level performance on Atari games. To improve the stability, recently, double Q-learning was combined with deep neural networks, with the goal to alleviate the overestimation issue observed in Q-learning (Thrun & Schwartz, 1993; van Hasselt, 2010; van Hasselt et al., 2015). The key idea is to use two Q-networks for the action selection and Q-function value calculation, respectively. The greedy action of the target is first chosen using the current Q-network parameters, then the target value is computed using a set of parameters from a previous iteration. Another notable advance is "prioritized experience replay" (Schaul et al., 2016) or "prioritized sweeping" for deep Q-learning. The idea is to increase the replay probability of experience tuples that have a high expected learning progress measured by temporal difference errors.
In addition to the aforementioned variants of Q-learning, other network architectures have been proposed. The dueling network architecture applies an extra network structure to learn the importance of states and uses advantage functions (Wang et al., 2015). A distributed version of the deep actor-critic algorithm without experience replay was introduced very recently (Mnih et al., 2016). It deploys multiple threads learning directly from current transitions. The approach is applicable to both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as in continuous domains. The model-free episodic control approach evaluates state-action pairs based on episodic memory using k-nearest neighbors with hashing functions (Blundell et al., 2016). Bootstrapped deep Q-learning carries out temporally-extended (or deep) exploration, thus leading to much faster learning (Osband et al., 2016).
Our fast reward propagation differs from all of the aforementioned approaches. The key idea of our method is to propagate delayed and sparse rewards during Q-network training, and thus greatly improve the efficiency and performance. We formulate this propagation step via a constrained program. Note that our program is also different from earlier work on off-policy Q(λ) algorithms with eligibility traces and n-step Q-learning (Munos et al., 2016; Watkins, 1989; Mnih et al., 2016), which have been recently shown to perform poorly when used for training deep Q-networks on Atari games.

3 BACKGROUND
Reinforcement learning considers agents which are able to take a sequence of actions in an environment.
By taking actions and experiencing at most one scalar reward per action, their task is to learn a policy which allows them to act such that a high cumulative reward is obtained over time.
More precisely, consider an agent operating over time t ∈ {1, ..., T}. At time t the agent is in an environment state s_t and reacts upon it by choosing action a_t ∈ A. The agent will then observe a new state s_{t+1} and receive a numerical reward r_t ∈ R. Throughout, we assume the set of possible actions, i.e., the set A, to be discrete.
A well established technique to address the aforementioned reinforcement learning task is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Generally, Q-learning algorithms maintain an action-value function, often also referred to as Q-function, Q(s, a). Given a state s, the action-value function provides a 'value' for each action a ∈ A which estimates the expected future reward if action a ∈ A is taken. The estimated future reward is computed based on the current state s or a series of past states s_t if available.
The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimal future reward function Q* via a state-action-value function

    Q*(s_t, a) = E[ r_t + γ max_{a'} Q*(s_{t+1}, a') ].    (1)

Hereby the expectation is taken w.r.t. the distribution of state s_{t+1} and reward r_t obtained after taking action a, and γ is a discount factor. Intuitively, the reward for taking action a plus the best future reward should equal the best total return from the current state.
The choice of Q-function is crucial for the success of Q-learning algorithms. While classical methods use linear Q-functions based on a set of hand-crafted features of the state, more recent approaches use nonlinear deep neural networks to automatically mine intermediate features from the state (Riedmiller, 2005; Lange & Riedmiller, 2010; Mnih et al., 2013; 2015). This change has been shown to be very effective for many applications of reinforcement learning. However, automatic mining of intermediate representations comes at a price: larger quantities of data and more computational resources are required. Even though it is sometimes straightforward to extract large amounts of data, e.g., when training on video games, for successful optimization it is crucial that the algorithms operate on un-correlated samples from a dataset D for stability. A technique called "experience replay" (Lin, 1992; Wawrzynski, 2009) encourages this property and quickly emerged as a standard step in the well-known deep Q-learning framework (Mnih et al., 2013; 2015). Experience replays are stored as a dataset D = {(s_j, a_j, r_j, s_{j+1})} which contains state-action-reward-future state tuples (s_j, a_j, r_j, s_{j+1}), including past observations from previous plays.
The characterization of optimality given in Eq. (1) combined with an "experience replay" dataset D results in the following iterative algorithmic procedure (Mnih et al., 2013; 2015): start an episode in the initial state s_0; sample a mini-batch of tuples B = {(s_j, a_j, r_j, s_{j+1})} ⊆ D; compute and fix the targets y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a) for each tuple using a recent estimate Q_{θ⁻} (the maximization is only considered if s_j is not a terminal state); update the Q-function by optimizing the following program w.r.t. the parameters θ, typically via stochastic gradient descent:

    min_θ Σ_{(s_j, a_j, r_j, s_{j+1}) ∈ B} ( Q_θ(s_j, a_j) − y_j )².    (2)

After having updated the parameters of the Q-function we perform an action simulation, either choosing an action at random with a small probability ε, or by following the strategy arg max_a Q_θ(s_t, a) which is currently estimated. This strategy is also called the ε-greedy policy.
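For concreteness, a minimal sketch of the fixed-target construction and the cost of Eq. (2); q_theta and q_target are illustrative stand-ins for the current and recent Q-networks, and the states are opaque because the stand-ins ignore them.

```python
# Minimal sketch of the classical DQN target and squared Bellman error (Eq. 2).
import numpy as np

gamma = 0.99
rng = np.random.default_rng(0)
num_actions = 4

def q_theta(s):       # stand-in for Q_theta(s, .)
    return rng.normal(size=num_actions)

def q_target(s):      # stand-in for the recent network Q_theta_minus(s, .)
    return rng.normal(size=num_actions)

def dqn_loss(batch):
    """batch: list of (s, a, r, s_next, terminal) replay tuples."""
    loss = 0.0
    for s, a, r, s_next, terminal in batch:
        # Fixed target: max over actions only for non-terminal states.
        y = r if terminal else r + gamma * q_target(s_next).max()
        loss += (q_theta(s)[a] - y) ** 2
    return loss

batch = [(None, 2, 1.0, None, False), (None, 0, 0.0, None, True)]
print(dqn_loss(batch))
```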
We then obtain the actual reward r_t. Subsequently we augment the replay memory with the new tuple (s_t, a_t, r_t, s_{t+1}) and continue the simulation until this episode terminates or reaches an upper limit of steps, and we restart a new episode. When optimizing w.r.t. the parameter θ, a recent Q-network Q_{θ⁻} is used to compute the target y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). This technique is referred to as 'semi-gradient descent,' i.e., the dependence of the target on the parameter θ is ignored.

4 FAST REWARD PROPAGATION VIA OPTIMALITY TIGHTENING
Investigating the cost function given in Eq. (2) more carefully, we observe that it operates on a set of short one-step sequences, each characterized by the tuple (s_j, a_j, r_j, s_{j+1}). Intuitively, each step encourages an update of the parameters θ such that the action-value function for the chosen action a_j, i.e., Q_θ(s_j, a_j), is closer to the obtained reward plus the best achievable future value, i.e., y_j = r_j + γ max_a Q(s_{j+1}, a). As we expect from the Bellman optimality equation, it is instructive to interpret this algorithm as propagating reward information from time j+1 backwards to time j.
To understand the shortcomings of this procedure, consider a situation where the agent only receives a sparse and delayed reward once reaching a target in a maze. Further let |P| characterize the shortest path from the agent's initial position to the target. For a long time, no real reward is available and the aforementioned algorithm propagates randomly initialized future rewards. Once the target is reached, real reward information is available. Due to the cost function and its property of propagating reward time-step by time-step, it is immediately apparent that it takes at least an additional O(|P|) iterations until the observed reward impacts the initial state.
In the following we propose a technique which increases the speed of propagation and achieves improved convergence for deep Q-learning. We achieve this improvement by taking advantage of longer state-action-reward sequences which are readily available in the "experience replay memory." Not only do we propagate information from time instances in the future to our current state, but we also pass information from states several steps in the past. Even though we expect to see substantial improvements on sequences where rewards are sparse or only available at terminal states, we also demonstrate significant speedups for situations where rewards are obtained frequently. This is intuitive as the Q-function represents an estimate for any reward encountered in the future. Faster propagation of future and past rewards to a particular state is therefore desirable.
Subsequently we discuss our technique for fast reward propagation, a new deep Q-learning algorithm that exploits longer state-transitions in experience replays by tightening the optimization via constraints. For notational simplicity, we assume that the environmental dynamics is deterministic, i.e., the new state and the reward are solely determined by the current state and action. It is possible to show that mathematically our proposed approach also approximately works in stochastic environments. Please see details in the appendix.
From the Bellman optimality equation we know that the following series of equalities holds for the optimal Q-function Q*:

    Q*(s_j, a_j) = r_j + γ max_a Q*(s_{j+1}, a) = r_j + γ max_a [ r_{j+1} + γ max_{a'} [ r_{j+2} + γ max_{ã} Q*(s_{j+3}, ã) ] ].

Evaluating such a sequence exactly is not possible in a reinforcement learning setting, since the enumeration of intermediate states s_{j+i} requires exponential time complexity O(|A|^i). It is however possible to take advantage of the episodes available in the replay memory D by noting that the following sequence of inequalities holds for the optimal action-value function Q* (with the greedy policy), irrespective of whether the policy generating the sequence of actions a_j, a_{j+1}, etc., which results in rewards r_j, r_{j+1}, etc., is optimal or not:

    Q*(s_j, a_j) = r_j + γ max_a Q*(s_{j+1}, a) ≥ ... ≥ Σ_{i=0}^{k} γ^i r_{j+i} + γ^{k+1} max_a Q*(s_{j+k+1}, a) = L_{j,k}.

Note the definition of the lower bounds L_{j,k} for sample j and time horizon k in the aforementioned series of inequalities.
We can also use this series of inequalities to define upper bounds. To see this, note that

    Q*(s_{j−k−1}, a_{j−k−1}) − Σ_{i=0}^{k} γ^i r_{j−k−1+i} − γ^{k+1} Q*(s_j, a_j) ≥ 0,

which follows from the definition of the lower bound by dropping the maximization over the actions, and a change of indices from j → j−k−1. Reformulating the inequality yields an upper bound U_{j,k} for sample j and time horizon k, by fixing state s_j and action a_j, as follows:

    U_{j,k} = γ^{−k−1} Q*(s_{j−k−1}, a_{j−k−1}) − Σ_{i=0}^{k} γ^{i−k−1} r_{j−k−1+i} ≥ Q*(s_j, a_j).

In contrast to classical techniques which optimize the Bellman criterion given in Eq. (2), we propose to optimize the Bellman equation subject to constraints Q_θ(s_j, a_j) ≥ L^max_j = max_{k ∈ {1,...,K}} L_{j,k}, which defines the largest lower bound, and Q_θ(s_j, a_j) ≤ U^min_j = min_{k ∈ {1,...,K}} U_{j,k}, which specifies the smallest upper bound. Hereby, L_{j,k} and U_{j,k} are computed using the Q-function Q_{θ⁻} with a recent estimated parameter θ⁻ rather than the unknown optimal Q-function Q*, and the integer K specifies the number of future and past time steps which are considered. Also note that the target used in the Bellman equation is obtained from y_j = L_{j,0} = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). In this way, we ignore the dependence of the bounds and the target on the parameter θ to stabilize the training. Taking all the aforementioned definitions into account, we propose the following program for reinforcement learning tasks:

    min_θ Σ_{(s_j, a_j, r_j, s_{j+1}) ∈ B} ( Q_θ(s_j, a_j) − y_j )²
    s.t.  Q_θ(s_j, a_j) ≥ L^max_j  ∀ (s_j, a_j) ∈ B
          Q_θ(s_j, a_j) ≤ U^min_j  ∀ (s_j, a_j) ∈ B.    (3)

This program differs from the classical approach given in Eq. (2) via the constraints, which is crucial. Intuitively, the constraints encourage faster reward propagation, as we show next, and result in tremendously better results, as we will demonstrate empirically in Sec. 5.

    Output: parameters θ of a Q-function
    Initialize: θ randomly, set θ⁻ = θ
    for episode ← 1 to M do
        initialize s_1
        for t ← 1 to T do
            choose action a_t according to the ε-greedy strategy
            observe reward r_t and next state s_{t+1}
            store the tuple (s_t, a_t, r_t, ·, s_{t+1}) in replay memory D
            sample a minibatch of tuples B = {(s_j, a_j, r_j, R_j, s_{j+1})} from replay memory D
            update θ with one gradient step of the cost function given in Eq. (4)
            reset θ⁻ = θ every C steps
        end
        for t ← T to 1 do
            compute R_t = r_t + γ R_{t+1}
            insert R_t into the corresponding tuple in replay memory D
        end
    end
    Algorithm 1: Our algorithm for fast reward propagation in reinforcement learning tasks.
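For concreteness, a hedged sketch of the bound computation: given an episode segment around index j from the replay memory, it evaluates L_{j,k} and U_{j,k} with a stand-in for the recent network Q_{θ⁻}. Out-of-episode horizons are simply assumed not to occur here; an implementation would skip them.

```python
# Sketch of L_{j,k} and U_{j,k} for k = 1..K (definitions above).
import numpy as np

gamma, K = 0.99, 4
rng = np.random.default_rng(0)

def q(s):  # stand-in for Q_theta_minus(s, .)
    return rng.normal(size=4)

def lower_bounds(states, rewards, j):
    """L_{j,k} = sum_{i=0..k} gamma^i r_{j+i} + gamma^(k+1) max_a Q(s_{j+k+1}, a)."""
    return [sum(gamma**i * rewards[j + i] for i in range(k + 1))
            + gamma**(k + 1) * q(states[j + k + 1]).max()
            for k in range(1, K + 1)]

def upper_bounds(states, actions, rewards, j):
    """U_{j,k} = gamma^(-k-1) Q(s_{j-k-1}, a_{j-k-1})
                 - sum_{i=0..k} gamma^(i-k-1) r_{j-k-1+i}."""
    return [gamma**(-k - 1) * q(states[j - k - 1])[actions[j - k - 1]]
            - sum(gamma**(i - k - 1) * rewards[j - k - 1 + i] for i in range(k + 1))
            for k in range(1, K + 1)]

# Toy episode: a single terminal reward, as in the maze example above.
T = 12
states = list(range(T + 1)); actions = [0] * T; rewards = [0.0] * (T - 1) + [1.0]
j = 6
L_max = max(lower_bounds(states, rewards, j))          # largest lower bound
U_min = min(upper_bounds(states, actions, rewards, j)) # smallest upper bound
```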
The cost function is generally non-convex in the parameters θ, and so are the constraints. We therefore make use of a quadratic penalty method to reformulate the program into

    min_θ Σ_{(s_j, a_j, r_j, s_{j+1}) ∈ B} [ (Q_θ(s_j, a_j) − y_j)² + λ (L^max_j − Q_θ(s_j, a_j))²_+ + λ (Q_θ(s_j, a_j) − U^min_j)²_+ ],    (4)

where λ is a penalty coefficient and (x)_+ = max(0, x) is the rectifier function. Augmenting the cost function with λ(L^max_j − Q_θ(s_j, a_j))²_+ and/or λ(Q_θ(s_j, a_j) − U^min_j)²_+ results in a penalty whenever any optimality bounding constraint gets violated. The quadratic penalty function is chosen for simplicity. The penalty coefficient λ can be set as a large positive value or adjusted in an annealing scheme during training. In this work, we fix its value, due to time constraints. We optimize this cost function with stochastic (sub-)gradient descent using an experience replay memory from which we randomly draw samples, as well as their successors and predecessors. We emphasize that the derivatives correcting the prediction of Q_θ(s_j, a_j) not only depend on the Q-function from the immediately successive time step Q(s_{j+1}, a) stored in the experience replay memory, but also on more distant time instances if constraints are violated. Our proposed formulation and the resulting optimization technique hence encourage faster reward propagation, and the number of time steps depends on the constant K and the quality of the current Q-function. We summarize the proposed method in Algorithm 1.
The computational complexity of the proposed approach increases with the number of considered time steps K, since additional forward passes are required to compute the bounds L^max_j and U^min_j. However, we can increase the memory size on the GPU to compute both the bounds and targets in a single forward pass if K is not too large. If at all a problem, we can further alleviate this increase by randomly sampling a subset of the constraints rather than exhaustively using all of them. More informed strategies regarding the choice of constraints are possible as well, since we may expect lower bounds in the more distant future to have a larger impact early in the training. In contrast, once the algorithm is almost converged, we may expect lower bounds close to the considered time-step to have a bigger impact.
To efficiently compute the discounted reward over multiple time steps, we add a new element to the experience replay structure. Specifically, in addition to state, action, reward and next state for time-step j, we also store the real discounted return R_j, which is the discounted cumulative return achieved by the agent in its game episode. R_j is computed via R_j = Σ_{τ=j}^{T} γ^{τ−j} r_τ, where T is the end of the episode and γ is the discount factor. R_j is then inserted in the replay memory after the termination of the current episode or after reaching the limit of steps. All in all, the structure of our experience replay memory consists of tuples of the form (s_j, a_j, r_j, R_j, s_{j+1}). In practice, we also found that incorporating R_j in the lower bound calculation can further improve the stability of the training.
We leave the questions regarding a good choice of penalty function and a good choice of the penalty coefficients to future work. At the moment we use a quadratic penalty function and a constant penalty coefficient λ identical for both bounds.
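For concreteness, a minimal sketch of the penalized objective in Eq. (4) for a single sample; q_value, y, L_max and U_min are illustrative scalars that would come from the network and the bound computation sketched above.

```python
# Minimal sketch of the per-sample penalized cost of Eq. (4).
def penalized_loss(q_value, y, L_max, U_min, lam=4.0):
    relu = lambda x: max(0.0, x)                  # (x)_+ = max(0, x)
    return ((q_value - y) ** 2
            + lam * relu(L_max - q_value) ** 2    # lower-bound violation
            + lam * relu(q_value - U_min) ** 2)   # upper-bound violation

# Example: a Q-value below its largest lower bound incurs an extra penalty.
print(penalized_loss(q_value=1.0, y=1.2, L_max=1.5, U_min=2.0))
```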
More complex penalty functions and sophisticated optimization approaches may yield even better results than the ones we report in the following.

Figure 1: Improvements of our method trained on 10M frames compared to results of 200M frame DQN training presented by Mnih et al. (2015), using the metric given in Eq. (5).

5 EXPERIMENTS
We evaluate the proposed algorithm on a set of 49 games from the Arcade Learning Environment (Bellemare et al., 2013) as suggested by Mnih et al. (2015). This environment is considered to be one of the most challenging reinforcement learning tasks because of its high dimensional output. Moreover, the intrinsic mechanics vary tremendously from game to game, making it extremely demanding to find a single, general and robust algorithm and a corresponding single hyperparameter setting which works well across all 49 games.
Following existing work (Mnih et al., 2015), our agent predicts an action based only on raw image pixels and reward information received from the environment. A deep neural network is used as the function approximator for the Q-function. The game image is resized to an 84×84 grayscale image s_t. The first layer is a convolutional layer with 32 filters of size 8×8 and a stride of 4; the second layer is a convolutional layer with 64 filters of size 4×4 and a stride of 2; the third layer is a convolutional layer with 64 filters of size 3×3 and a stride of 1; the next fully connected layer transforms the input to 512 units, which are then transformed by another fully connected layer to an output size equal to the number of actions in each game. The rectified linear unit (ReLU) is used as the activation function for each layer. We used the hyperparameters provided by Mnih et al. (2015) for annealing ε-greedy exploration and also applied RMSProp for gradient descent. As in previous work we combine four frames into a single step for processing. We chose the hyperparameter K = 4, for GPU memory efficiency when dealing with mini-batches. In addition, we also include the discounted return R_j = L_{j,∞} in the lower bound calculation to further stabilize the training. We use the penalty coefficient λ = 4, which was obtained by coarsely tuning performance on the games 'Alien,' 'Amidar,' 'Assault,' and 'Asterix.' Gradients are also rescaled so that their magnitudes are comparable with or without penalty. All experiments are performed on an NVIDIA GTX Titan-X 12GB graphics card.
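For concreteness, a sketch of this Q-network written in PyTorch; the paper does not name a framework, so the framework choice is an assumption, while the layer sizes follow the description above (with 4 stacked frames, the final convolutional feature map is 7×7×64 = 3136 units).

```python
# Sketch of the Q-network architecture described above (PyTorch assumed).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x):
        return self.head(self.features(x))

q = QNetwork(num_actions=18)
print(q(torch.zeros(1, 4, 84, 84)).shape)  # torch.Size([1, 18])
```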
Figure 2: Improvements of our method trained on 10M frames compared to results of 10M-frame DQN training, using the metric given in Eq. (5).

5.1 EVALUATION

In previous work (Mnih et al., 2015; van Hasselt et al., 2015; Schaul et al., 2016; Wang et al., 2015), the Q-function is trained on each game using 200 million (200M) frames, or 50M training steps. We compare to those baseline results obtained after 200M frames using our proposed algorithm, which ran for only 10M frames, or 2.5M steps, i.e., 20 times less data, due to time constraints. Instead of training for more than 10 days, we manage to finish training in less than one day. Furthermore, for a fair comparison, we replicate the DQN results and compare the performance of the proposed algorithm after 10M frames to those obtained when training DQN on only 10M frames.

We strictly follow the evaluation procedure of Mnih et al. (2015), which is often referred to as ‘30 no-op evaluation.’ During both training and testing, at the start of each episode, the agent always performs a random number of at most 30 no-op actions. During evaluation, our agent plays each game 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An ε-greedy policy with ε = 0.05 is used. Specifically, for each run, the game episode starts with at most 30 no-op steps, and ends with ‘death’ or after a maximum of 5 minutes of game-play, which corresponds to 18,000 frames.

Our training consists of M = 40 epochs, each containing 250,000 frames, thus 10M frames in total. For each game, we evaluate our agent at the end of every epoch and, following common practice (van Hasselt et al., 2015; Mnih et al., 2015), we select the best agent evaluation as the result for that game. Almost all hyperparameters are thus selected identically to Mnih et al. (2015) and Nair et al. (2015).

To compare the performance of our algorithm to the DQN baseline, we follow the approach of Wang et al. (2015) and measure the improvement in percent using

$$\frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Baseline}}}{\max\{\text{Score}_{\text{Human}},\ \text{Score}_{\text{Baseline}}\} - \text{Score}_{\text{Random}}}. \quad (5)$$

We select this approach because the denominator's choice of either the human or the baseline score prevents insignificant changes or negative scores from being interpreted as large improvements.

Fig. 1 shows the improvement of our algorithm over the DQN baseline proposed by Mnih et al. (2015) and trained for 200M frames, i.e., 50M steps. Even though our agent is trained for only 10M frames, we observe that our technique outperforms the baseline significantly. In 30 out of 49 games, our algorithm exceeds the baseline using only 5% of the baseline's training frames, sometimes drastically, e.g., in games such as ‘Atlantis,’ ‘Double Dunk,’ and ‘Krull.’ The remaining 19 games often require a longer training time. Nonetheless, our algorithm still reaches a satisfactory level of performance.

Table 1: Mean and median human-normalized scores. DQN baseline and D-DQN results are from Mnih et al. (2015); van Hasselt et al. (2015) and trained with 200M frames, while our method is trained with 10M frames. Note that our approach can be combined with the D-DQN method.

               Training Time               Mean      Median
Ours (10M)     less than 1 day (1 GPU)     345.70%   105.74%
DQN (200M)     more than 10 days (1 GPU)   241.06%   93.52%
D-DQN (200M)   more than 10 days (1 GPU)   330.3%    114.7%

Figure 3: Game scores for our algorithm (blue), DQN (black), DQN+return (red) and DQN(λ) (yellow) using 10M training frames. 30 no-op evaluation is used and a moving average over 4 points is applied.

In order to further illustrate the effectiveness of our method, we compare our results with our implementation of DQN trained on 10M frames. The results are illustrated in Fig. 2. We observe a better performance on 46 out of 49 games, demonstrating in a fair way the potential of our technique.

As suggested by van Hasselt et al. (2015), we use the score

$$\text{Score}_{\text{Normalized}} = \frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Random}}}{\left|\text{Score}_{\text{Human}} - \text{Score}_{\text{Random}}\right|} \quad (6)$$

to summarize the performance of our algorithm in a single number. We normalize the scores of our algorithm, the baseline reported by Mnih et al. (2015), and double DQN (D-DQN) (van Hasselt et al., 2015), and report the training time, mean and median in Table 1. We observe that our technique, with 10M frames, achieves scores comparable to the D-DQN method trained on 200M frames (van Hasselt et al., 2015), while it outperforms the DQN method (Mnih et al., 2015) by a large margin. We believe that our method can be readily combined with other techniques developed for DQN, such as D-DQN (van Hasselt et al., 2015), prioritized experience replay (Schaul et al., 2016), dueling networks (Wang et al., 2015), and asynchronous methods (Mnih et al., 2016), to further improve accuracy and training speed.
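For reference, Eqs. (5) and (6) reduce to the following one-liners. This is a sketch with our own function names, and the example numbers are purely illustrative, not scores from the paper.

```python
def improvement_pct(agent, baseline, human, random):
    """Eq. (5): improvement of the agent over a baseline, in percent.

    The denominator takes the larger of the human and baseline scores,
    so small absolute gains on easy games do not read as huge improvements.
    """
    return 100.0 * (agent - baseline) / (max(human, baseline) - random)

def human_normalized_pct(agent, human, random):
    """Eq. (6): human-normalized score, in percent (100% = human level)."""
    return 100.0 * (agent - random) / abs(human - random)

# Illustrative numbers only:
print(improvement_pct(agent=4000.0, baseline=3000.0, human=7000.0, random=200.0))
print(human_normalized_pct(agent=4000.0, human=7000.0, random=200.0))
```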
In Fig. 3 we illustrate the evolution of the score for our algorithm and the DQN approach. In addition, we show results for two further techniques: ‘DQN+return’ and ‘DQN(λ).’ ‘DQN+return’ uses only the discounted future return as a bound, but does not take advantage of the additional constraints we propose. ‘DQN(λ)’ combines TD(λ) with the DQN algorithm. We illustrate the performance of these four algorithms on the six games ‘Frostbite,’ ‘Atlantis,’ ‘Zaxxon,’ ‘H.E.R.O,’ ‘Q*Bert,’ and ‘Chopper Command.’ We observe that our method achieves higher scores than the three baselines on the majority of the games. We refer the reader to the supplementary material for additional results.

6 CONCLUSION

In this paper we proposed a novel program for deep Q-learning which propagates promising rewards to achieve significantly faster convergence than classical DQN. Our method significantly outperforms competing approaches on the Atari 2600 domain, even when trained on a small fraction of the data. In the future, we plan to investigate the impact of penalty functions and advanced constrained optimization techniques, and to explore potential synergies with other methods.
Ske_zvGNl
Intriguing idea, but lacking theoretical and empirical validation
4: Ok but not good enough - rejection
In this paper, a Q-Learning variant is proposed that aims at "propagating" rewards faster by adding extra costs corresponding to bounds on the Q-function that are based on both past and future rewards. This leads to faster convergence, as shown on the Arcade Learning Environment benchmark. The paper is well written and easy to follow. The core idea of using relaxed inequality bounds in the optimization problem is original to the best of my knowledge, and results seem promising. This submission, however, has a number of important shortcomings that prevent me from recommending it for publication at ICLR: 1. The theoretical justification and analysis are very limited. As far as I can tell, the bounds as defined require a deterministic reward to hold, which is rarely the case in practice. There is also the fact that the bounds are computed using the so-called "target network" with different parameters theta-, which is another source of discrepancy. And even before that, the bounds hold for Q* but are applied on Q, for which they may not be valid until Q gets close enough to Q*. It also looks weird to take the max over k in (1, ..., K) when the definition of L_j,k makes it look like the max has to be L_j,1 (or even L_j,0, but I am not sure why that one is not considered), since L*_j,0 >= L*_j,1 >= ... >= L*_j,K. Neither of these issues is discussed in the paper, and there is no theoretical analysis of the convergence properties of the proposed method. [Update: some of these concerns were addressed in OpenReview comments] 2. The empirical evaluation does not compensate, in my opinion, for the lack of theory. First, since there are two bounds introduced, I would have expected "ablative" experiments showing the improvement brought by each one independently. It is also unfortunate that the authors did not have time to let their algorithm run longer, since, as shown in Fig. 1, there remains a significant number of games where it performs worse than DQN. In addition, comparisons are limited to vanilla DQN and DDQN: I believe it would have been important to compare to other ways of incorporating longer-term rewards, like n-step Q-Learning or actor-critic. Finally, there is no experiment demonstrating that the proposed algorithm can indeed improve other existing DQN variants: I agree with the authors when they say "We believe that our method can be readily combined with other techniques developed for DQN"; however, providing actual results showing this would have made the paper much stronger. In conclusion, I do believe this line of research is worth pursuing, but also that additional work is required to really prove and understand its benefits. Minor comments:
- Instead of citing the arXiv version of Wang et al. (2015), it would be best to cite the 2016 ICML paper.
- The description of Q-Learning in section 3 says "The estimated future reward is computed based on the current state s or a series of past states s_t if available." I am not sure what you mean by "a series of past states", since Q is defined as Q(s, a) and thus can only take the current state s as input, when defined this way.
- The introduction of R_j in Alg. 1 is confusing since its use is only explained later in the text (in section 5: "In addition, we also incorporate the discounted return R_j in the lower bound calculation to further stabilize the training").
- In Fig. S1 the legend should not say "10M" since the plot is from 1M to 10M.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJ8Je4clg
ICLR.cc/2017/conference
2017
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
["Frank S.He", "Yang Liu", "Alexander G. Schwing", "Jian Peng"]
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
["Reinforcement Learning", "Optimization", "Games"]
BJhbTXKEx
Review
9: Top 15% of accepted papers, strong accept
In this paper, the authors propose an extension to the DQN algorithm by introducing both an upper and a lower bound on the optimal Q-function. The authors show experimentally that this approach improves data efficiency quite dramatically, such that they can match or even supersede the performance of a DQN trained for 8 days. The idea is novel to the best of my knowledge, and the improvement over DQN seems very significant. Recently, Munos et al. introduced the Retrace algorithm, which can make use of multi-step returns to estimate Q-values. I suspect that some of the improvement that comes from the bounds is due to multi-step returns being used effectively. I was therefore wondering whether the authors have tried any approach like Retrace, or Tree Backup by Precup et al., and if so, how these methods stack up against the proposed method. The authors have very impressive results and the paper proposes a very promising direction for future research, so I would like to make a few suggestions: First, it would be great if the authors could include a discussion of deterministic vs. stochastic MDPs. Second, it would be great if the authors could include some kind of theoretical analysis of the approach. Finally, I would like to apologize for the late review.
3: The reviewer is fairly confident that the evaluation is correct
rJ8Je4clg
ICLR.cc/2017/conference
2017
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
["Frank S.He", "Yang Liu", "Alexander G. Schwing", "Jian Peng"]
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
["Reinforcement Learning", "Optimization", "Games"]
ABSTRACTWe propose a novel training algorithm for reinforcement learning which com-bines the strength of deep Q-learning with a constrained optimization approachto tighten optimality and encourage faster reward propagation. Our novel tech-nique makes deep reinforcement learning more practical by drastically reducingthe training time. We evaluate the performance of our approach on the 49 gamesof the challenging Arcade Learning Environment, and report significant improve-ments in both training time and accuracy.1 I NTRODUCTIONThe recent advances of supervised deep learning techniques (LeCun et al., 2015) in computer vision,speech recognition and natural language processing have tremendously improved the performanceon challenging tasks, including image processing (Krizhevsky et al., 2012), speech-based transla-tion (Sutskever et al., 2014) and language modeling (Hinton et al., 2012). The core idea of deeplearning is to use artificial neural networks to model complex hierarchical or compositional dataabstractions and representations from raw input data (Bengio et al., 2013). However, we are stillfar from building intelligent solutions for many real-world challenges, such as autonomous driv-ing, human-computer interaction and automated decision making, in which software agents need toconsider interactions with a dynamic environment and take actions towards goals. Reinforcementlearning (Bertsekas & Tsitsiklis, 1996; Powell, 2011; Sutton & Barto, 1998; Kaelbling et al., 1996)studies these problems and algorithms which learn policies to make decisions so as to maximize areward signal from the environment. One of the promising algorithms is Q-learning (Watkins, 1989;Watkins & Dayan, 1992). Deep reinforcement learning with neural function approximation (Tsit-siklis & Roy, 1997; Riedmiller, 2005; Mnih et al., 2013; 2015), possibly a first attempt to combinedeep learning and reinforcement learning, has been proved to be effective on a few problems whichclassical AI approaches were unable to solve. Notable examples of deep reinforcement learninginclude human-level game playing (Mnih et al., 2015) and AlphaGo (Silver et al., 2016).Despite these successes, its high demand of computational resources makes deep reinforcementlearning not yet applicable to many real-world problems. For example, even for an Atari game, thedeep Q-learning algorithm (also called deep Q-networks, abbreviated as DQN) needs to play up tohundreds of millions of game frames to achieve a reasonable performance (van Hasselt et al., 2015).AlphaGo trained its model using a database of game records of advanced players and, in addition,about 30 million self-played game moves (Silver et al., 2016). The sheer amount of required com-putational resources of current deep reinforcement learning algorithms is a major bottleneck for itsapplicability to real-world tasks. Moreover, in many tasks, the reward signal is sparse and delayed,thus making the convergence of learning even slower.1Published as a conference paper at ICLR 2017Here we propose optimality tightening, a new technique to accelerate deep Q-learning by fast rewardpropagation. While current deep Q-learning algorithms rely on a set of experience replays, they onlyconsider a single forward step for the Bellman optimality error minimization, which becomes highlyinefficient when the reward signal is sparse and delayed. 
To better exploit long-term high-rewardstrategies from past experience, we design a new algorithm to capture rewards from both forwardand backward steps of the replays via a constrained optimization approach. This encourages fasterreward propagation which reduces the training time of deep Q-learning.We evaluate our proposed approach using the Arcade learning environment (Bellemare et al., 2013)and show that our new strategy outperforms competing techniques in both accuracy and trainingtime on 30 out of 49 games despite being trained with significantly fewer data frames.2 R ELATED WORKThere have been a number of approaches improving the stability, convergence and runtime of deepreinforcement learning since deep Q-learning, also known as deep Q-network (DQN), was firstproposed (Mnih et al., 2013; 2015). DQN combined techniques such as deep learning, reinforcementlearning and experience replays (Lin, 1992; Wawrzynski, 2009).Nonetheless, the original DQN algorithm required millions of training steps to achieve human-level performance on Atari games. To improve the stability, recently, double Q-learning was com-bined with deep neural networks, with the goal to alleviate the overestimation issue observed inQ-learning (Thrun & Schwartz, 1993; van Hasselt, 2010; van Hasselt et al., 2015). The key idea isto use two Q-networks for the action selection and Q-function value calculation, respectively. Thegreedy action of the target is first chosen using the current Q-network parameters, then the targetvalue is computed using a set of parameters from a previous iteration. Another notable advance is“prioritized experience replay” (Schaul et al., 2016) or “prioritized sweeping” for deep Q-learning.The idea is to increase the replay probability of experience tuples that have a high expected learningprogress measured by temporal difference errors.In addition to the aforementioned variants of Q-learning, other network architectures have beenproposed. The dueling network architecture applies an extra network structure to learn the impor-tance of states and uses advantage functions (Wang et al., 2015). A distributed version of the deepactor-critic algorithm without experience replay was introduced very recently (Mnih et al., 2016).It deploys multiple threads learning directly from current transitions. The approach is applicable toboth value-based and policy-based methods, off-policy as well as on-policy methods, and in discreteas well as in continuous domains. The model-free episodic control approach evaluates state-actionpairs based on episodic memory using k-nearest neighbors with hashing functions (Blundell et al.,2016). Bootstrapped deep Q-learning carries out temporally-extended (or deep) exploration, thusleading to much faster learning (Osband et al., 2016).Our fast reward propagation differs from all of the aforementioned approaches. The key idea ofour method is to propagate delayed and sparse rewards during Q-network training, and thus greatlyimprove the efficiency and performance. We formulate this propagation step via a constrained pro-gram. Note that our program is also different from earlier work on off-policy Q()algorithmswith eligibility traces and n-step Q learning (Munos et al., 2016; Watkins, 1989; Mnih et al., 2016),which have been recently shown to perform poorly when used for training deep Q-networks on Atarigames.3 B ACKGROUNDReinforcement learning considers agents which are able to take a sequence of actions in an environ-ment. 
By taking actions and experiencing at most one scalar reward per action, their task is to learna policy which allows them to act such that a high cumulative reward is obtained over time.More precisely, consider an agent operating over time t2f1;:::;Tg. At timetthe agent is in anenvironment state stand reacts upon it by choosing action at2A. The agent will then observe anew statest+1and receive a numerical reward rt2R. Throughout, we assume the set of possibleactions, i.e., the setA, to be discrete.2Published as a conference paper at ICLR 2017A well established technique to address the aforementioned reinforcement learning task is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Generally, Q-learning algorithms maintain anaction-value function, often also referred to as Q-function, Q(s;a). Given a state s, the action-valuefunction provides a ‘value’ for each action a2A which estimates the expected future reward ifactiona2A is taken. The estimated future reward is computed based on the current state sor aseries of past states stif available.The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimalfuture reward function Qvia a state-action-value functionQ(st;a) =E[rt+maxa0Q(st+1;a0)]: (1)Hereby the expectation is taken w.r.t. the distribution of state st+1and reward rtobtained aftertaking action a, andis a discount factor. Intuitively, reward for taking action aplus best futurereward should equal the best total return from the current state.The choice of Q-function is crucial for the success of Q-learning algorithms. While classical meth-ods use linear Q-functions based on a set of hand-crafted features of the state, more recent ap-proaches use nonlinear deep neural networks to automatically mine intermediate features from thestate (Riedmiller, 2005; Lange & Riedmiller, 2010; Mnih et al., 2013; 2015). This change hasbeen shown to be very effective for many applications of reinforcement learning. However, auto-matic mining of intermediate representations comes at a price: larger quantities of data and morecomputational resources are required. Even though it is sometimes straightforward to extract largeamounts of data, e.g., when training on video games, for successful optimization, it is crucial that thealgorithms operate on un-correlated samples from a dataset Dfor stability. A technique called “ex-perience replay” (Lin, 1992; Wawrzynski, 2009) encourages this property and quickly emerged as astandard step in the well-known deep Q-learning framework (Mnih et al., 2013; 2015). Experiencereplays are stored as a dataset D=f(sj;aj;rj;sj+1)gwhich contains state-action-reward-futurestate-tuples (sj;aj;rj;sj+1), including past observations from previous plays.The characterization of optimality given in Eq. (1) combined with an “experience replay” dataset Dresults in the following iterative algorithmic procedure (Mnih et al., 2013; 2015): start an episodein the initial state s0; sample a mini-batch of tuples B=f(sj;aj;rj;sj+1)gD ; compute andfix the targets yj=rj+maxaQ(sj+1;a)for each tuple using a recent estimate Q(themaximization is only considered if sjis not a terminal state); update the Q-function by optimizingthe following program w.r.t. the parameters typically via stochastic gradient descent:minX(sj;aj;rj;sj+1)2B(Q(sj;aj)yj)2: (2)After having updated the parameters of the Q-function we perform an action simulation either choos-ing an action at random with a small probability , or by following the strategy arg maxaQ(st;a)which is currently estimated. 
This strategy is also called the -greedy policy. We then obtain theactual reward rt. Subsequently we augment the replay memory with the new tuple (st;at;rt;st+1)and continue the simulation until this episode terminates or reaches an upper limit of steps, andwe restart a new episode. When optimizing w.r.t. the parameter , a recent Q-network is used tocompute the target yj=rj+maxaQ(sj+1;a). This technique is referred to as ‘semi-gradientdescent,’ i.e., the dependence of the target on the parameter is ignored.4 F AST REWARD PROPAGATION VIA OPTIMALITY TIGHTENINGInvestigating the cost function given in Eq. (2) more carefully, we observe that it operates on aset of short one-step sequences, each characterized by the tuple (sj;aj;rj;sj+1). Intuitively, eachstep encourages an update of the parameters , such that the action-value function for the chosenactionaj,i.e.,Q(sj;aj), is closer to the obtained reward plus the best achievable future value, i.e.,yj=rj+maxaQ(sj+1;a). As we expect from the Bellman optimality equation, it is instructiveto interpret this algorithm as propagating reward information from time j+ 1backwards to time j.To understand the shortcomings of this procedure consider a situation where the agent only receivesa sparse and delayed reward once reaching a target in a maze. Further let jPjcharacterize the short-est path from the agents initial position to the target. For a long time, no real reward is available3Published as a conference paper at ICLR 2017and the aforementioned algorithm propagates randomly initialized future rewards. Once the targetis reached, real reward information is available. Due to the cost function and its property of prop-agating reward time-step by time-step, it is immediately apparent that it takes at least an additionalO(jPj)iterations until the observed reward impacts the initial state.In the following we propose a technique which increases the speed of propagation and achievesimproved convergence for deep Q-learning. We achieve this improvement by taking advantage oflonger state-action-reward-sequences which are readily available in the “experience replay memory.”Not only do we propagate information from time instances in the future to our current state, butalso will we pass information from states several steps in the past. Even though we expect to seesubstantial improvements on sequences where rewards are sparse or only available at terminal states,we also demonstrate significant speedups for situations where rewards are obtained frequently. Thisis intuitive as the Q-function represents an estimate for any reward encountered in the future. Fasterpropagation of future and past rewards to a particular state is therefore desirable.Subsequently we discuss our technique for fast reward propagation, a new deep Q-learning algo-rithm that exploits longer state-transitions in experience replays by tightening the optimization viaconstraints. For notational simplicity, we assume that the environmental dynamics is deterministic,i.e., the new state and the reward are solely determined by the current state and action. It is possibleto show that mathematically our proposed approach also approximately works in stochastic environ-ments. Please see details in the appendix. 
From the Bellman optimality equation we know that thefollowing series of equalities hold for the optimal Q-function Q:Q(sj;aj) =rj+maxaQ(sj+1;a) =rj+maxarj+1+maxa0hrj+2+max~aQ(sj+3;~a)i:Evaluating such a sequence exactly is not possible in a reinforcement learning setting since theenumeration of intermediate states sj+irequires exponential time complexity O(jAji). It is howeverpossible to take advantage of the episodes available in the replay memory Dby noting that thefollowing sequence of inequalities holds for the optimal action-value function Q(with the greedypolicy), irrespective of whether a policy generating the sequence of actions aj,aj+1,etc., whichresults in rewards rj,rj+1,etc. is optimal or not:Q(sj;aj) =rj+maxaQ(sj+1;a)]:::kXi=0irj+i+k+1maxaQ(sj+k+1;a) =Lj;k:Note the definition of the lower bounds Lj;kfor samplejand time horizon kin the aforementionedseries of inequalities.We can also use this series of inequalities to define upper bounds. To see this note thatQ(sjk1;ajk1)kXi=0irjk1+ik+1Q(sj;aj)0;which follows from the definition of the lower bound by dropping the maximization over the actions,and a change of indices from j!jk1. Reformulating the inequality yields an upper boundUj;kfor samplejand time horizon kby fixing state sjand actionajas follows:Uj;k=k1Q(sjk1;ajk1)kXi=0ik1rjk1+iQ(sj;aj):In contrast to classical techniques which optimize the Bellman criterion given in Eq. (2), we proposeto optimize the Bellman equation subject to constraints Q(sj;aj)Lmaxj= maxk2f1;:::;KgLj;k,which defines the largest lower bound, and Q(sj;aj)Uminj= mink2f1;:::;KgUj;k, which speci-fies the smallest upper bound. Hereby, Lj;kandUj;kare computed using the Q-function Qwitha recent estimated parameter rather than the unknown optimal Q-function Q, and the integer Kspecifies the number of future and past time steps which are considered. Also note that the targetused in the Bellman equation is obtained from yj=Lj;0=rj+maxaQ(sj+1;a). In thisway, we ignore the dependence of the bounds and the target on the parameter to stabilize the train-ing. Taking all the aforementioned definitions into account, we propose the following program for4Published as a conference paper at ICLR 2017Output : Parametersof a Q-functionInitialize:randomly, set =forepisode 1toMdoinitializes1;fort 1toTdoChoose action ataccording to -greedy strategy;Observe reward rtand next state st+1;Store the tuple (st;at;rt;;st+1)in replay memoryD;Sample a minibatch of tuples B=f(sj;aj;rj;Rj;sj+1g)from replay memory D;Updatewith one gradient step of cost function given in Eq. (4);Reset=everyCsteps;endfort Tto1doComputeRt=rt+Rt+1;InsertRtinto the corresponding tuple in replay memory D;endendAlgorithm 1: Our algorithm for fast reward propagation in reinforcement learning tasks.reinforcement learning tasks:minX(sj;aj;sj+1;rj)2B(Q(sj;aj)yj)2s.t.Q(sj;aj)Lmaxj8(sj;aj)2BQ(sj;aj)Uminj8(sj;aj)2B:(3)This program differs from the classical approach given in Eq. (2) via the constraints, which is cru-cial. Intuitively, the constraints encourage faster reward propagation as we show next, and result intremendously better results as we will demonstrate empirically in Sec. 5.Before doing so we describe our optimization procedure for the constrained program in Eq. (3) morecarefully. 
The cost function is generally non-convex in the parameters , and so are the constraints.We therefore make use of a quadratic penalty method to reformulate the program intominX(sj;aj;rj;sj+1)2Bh(Q(sj;aj)yj)2+(LmaxjQ(sj;aj))2++(Q(sj;aj)Uminj)2+i;(4)whereis a penalty coefficient and (x)+= max(0;x)is the rectifier function. Augmenting the costfunction with (LmaxjQ(sj;aj))2+and/or(Q(sj;aj)Uminj)2+results in a penalty wheneverany optimality bounding constraint gets violated. The quadratic penalty function is chosen for sim-plicity. The penalty coefficient can be set as a large positive value or adjusted in an annealingscheme during training. In this work, we fix its value, due to time constraints. We optimize this costfunction with stochastic (sub-)gradient descent using an experience replay memory from which werandomly draw samples, as well as their successors and predecessors. We emphasize that the deriva-tives correcting the prediction of Q(sj;aj)not only depend on the Q-function from the immediatelysuccessive time step Q(sj+1;a)stored in the experience replay memory, but also on more distanttime instances if constraints are violated. Our proposed formulation and the resulting optimizationtechnique hence encourage faster reward propagation, and the number of time steps depends onthe constant Kand the quality of the current Q-function. We summarize the proposed method inAlgorithm 1.The computational complexity of the proposed approach increases with the number of consideredtime stepsK, since additional forward passes are required to compute the bounds LmaxjandUminj.However, we can increase the memory size on the GPU to compute both the bounds and targets ina single forward pass if Kis not too large. If at all a problem, we can further alleviate this increaseby randomly sampling a subset of the constraints rather than exhaustively using all of them. Moreinformed strategies regarding the choice of constraints are possible as well since we may expectlower bounds in the more distant future to have a larger impact early in the training. In contrast oncethe algorithm is almost converged we may expect lower bounds close to the considered time-step tohave bigger impact.To efficiently compute the discounted reward over multiple time steps we add a new element tothe experience replay structure. Specifically, in addition to state, action, reward and next state for5Published as a conference paper at ICLR 2017Figure 1: Improvements of our method trained on 10M frames compared to results of 200M frameDQN training presented by Mnih et al. (2015), using the metric given in Eq. (5).time-stepj, we also store the real discounted return Rjwhich is the discounted cumulative returnachieved by the agent in its game episode. Rjis computed via Rj=PT=jjr, whereTis theend of the episode and is the discount factor. Rjis then inserted in the replay memory after thetermination of the current episode or after reaching the limit of steps. All in all, the structure of ourexperience replay memory consists of tuples of the form (sj;aj;rj;Rj;sj+1). In practice, we alsofound that incorporating Rjin the lower bound calculation can further improve the stability of thetraining.We leave the questions regarding a good choice of penalty function and a good choice of the penaltycoefficients to future work. At the moment we use a quadratic penalty function and a constantpenalty coefficient identical for both bounds. 
More complex penalty functions and sophisticated optimization approaches may yield even better results than the ones we report in the following.

5 EXPERIMENTS

We evaluate the proposed algorithm on a set of 49 games from the Arcade Learning Environment (Bellemare et al., 2013) as suggested by Mnih et al. (2015). This environment is considered to be one of the most challenging reinforcement learning tasks because of its high-dimensional output. Moreover, the intrinsic mechanics vary tremendously from game to game, making it extremely demanding to find a single, general and robust algorithm and a corresponding single hyperparameter setting which works well across all 49 games.

Following existing work (Mnih et al., 2015), our agent predicts an action based on only raw image pixels and reward information received from the environment. A deep neural network is used as the function approximator for the Q-function. The game image is resized to an 84x84 grayscale image $s_t$. The first layer is a convolutional layer with 32 filters of size 8x8 and a stride of 4; the second layer is a convolutional layer with 64 filters of size 4x4 and a stride of 2; the third layer is a convolutional layer with 64 filters of size 3x3 and a stride of 1; the next fully connected layer transforms the input to 512 units, which are then transformed by another fully connected layer to an output size equal to the number of actions in each game. The rectified linear unit (ReLU) is used as the activation function for each layer. We used the hyperparameters provided by Mnih et al. (2015) for annealing $\epsilon$-greedy exploration and also applied RMSProp for gradient descent. As in previous work we combine four frames into a single step for processing. We chose the hyperparameter $K = 4$ for GPU memory efficiency when dealing with mini-batches. In addition, we also include the discounted return $R_j = L_{j,\infty}$ in the lower bound calculation to further stabilize the training. We use the penalty coefficient $\lambda = 4$, which was obtained by coarsely tuning performance on the games 'Alien,' 'Amidar,' 'Assault,' and 'Asterix.' Gradients are also rescaled so that their magnitudes are comparable with or without penalty. All experiments are performed on an NVIDIA GTX Titan-X 12GB graphics card.

Figure 2: Improvements of our method trained on 10M frames compared to results of 10M frame DQN training, using the metric given in Eq. (5).

5.1 EVALUATION

In previous work (Mnih et al., 2015; van Hasselt et al., 2015; Schaul et al., 2016; Wang et al., 2015), the Q-function is trained on each game using 200 million (200M) frames or 50M training steps. We compare to those baseline results obtained after 200M frames using our proposed algorithm, which ran for only 10M frames or 2.5M steps, i.e., 20 times less data, due to time constraints. Instead of training for more than 10 days we manage to finish training in less than one day. Furthermore, for a fair comparison, we replicate the DQN results and compare the performance of the proposed algorithm after 10M frames to those obtained when training DQN on only 10M frames.

We strictly follow the evaluation procedure in (Mnih et al., 2015), which is often referred to as '30 no-op evaluation.' During both training and testing, at the start of the episode, the agent always performs a random number of at most 30 no-op actions. During evaluation, our agent plays each game 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An $\epsilon$-greedy policy with $\epsilon = 0.05$ is used.
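As an aside, a plausible PyTorch rendering of the Q-network described above could look like the sketch below; the four input channels assume the standard DQN four-frame stack, which the frame-combining step suggests but the text does not spell out:

```python
import torch.nn as nn

def make_q_network(num_actions):
    """Sketch of the Q-network described in the setup: three conv layers,
    one 512-unit hidden layer, and a linear output head per action."""
    return nn.Sequential(
        nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),   # -> 32 x 20 x 20
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # -> 64 x 9 x 9
        nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # -> 64 x 7 x 7
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
        nn.Linear(512, num_actions),
    )
```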
Concretely, each evaluation episode starts with at most 30 no-op steps, and ends with 'death' or after a maximum of 5 minutes of game-play, which corresponds to 18000 frames.

Our training consists of $M = 40$ epochs, each containing 250000 frames, thus 10M frames in total. For each game, we evaluate our agent at the end of every epoch and, following common practice (van Hasselt et al., 2015; Mnih et al., 2015), we select the best agent evaluation as the result for the game. Thus, almost all hyperparameters are selected to be identical to those of Mnih et al. (2015) and Nair et al. (2015).

To compare the performance of our algorithm to the DQN baseline, we follow the approach of Wang et al. (2015) and measure the improvement in percent using

$$\frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Baseline}}}{\max\{\text{Score}_{\text{Human}}, \text{Score}_{\text{Baseline}}\} - \text{Score}_{\text{Random}}}. \tag{5}$$

We select this approach because the denominator choice of either human or baseline score prevents insignificant changes or negative scores from being interpreted as large improvements.

Fig. 1 shows the improvement of our algorithm over the DQN baseline proposed by Mnih et al. (2015) and trained for 200M frames, i.e., 50M steps. Even though our agent is only trained for 10M frames, we observe that our technique outperforms the baseline significantly. In 30 out of 49 games, our algorithm exceeds the baseline using only 5% of the baseline's training frames, sometimes drastically, e.g., in games such as 'Atlantis,' 'Double Dunk,' and 'Krull.' The remaining 19 games often require a long training time. Nonetheless, our algorithm still reaches a satisfactory level of performance.

Table 1: Mean and median human-normalized scores. DQN baseline and D-DQN results are from Mnih et al. (2015); van Hasselt et al. (2015) and trained with 200M frames, while our method is trained with 10M frames. Note that our approach can be combined with the D-DQN method.

                Training Time                Mean      Median
  Ours (10M)    less than 1 day (1 GPU)      345.70%   105.74%
  DQN (200M)    more than 10 days (1 GPU)    241.06%   93.52%
  D-DQN (200M)  more than 10 days (1 GPU)    330.3%    114.7%

Figure 3: Game scores for our algorithm (blue), DQN (black), DQN+return (red) and DQN($\lambda$) (yellow) using 10M training frames. 30 no-op evaluation is used and a moving average over 4 points is applied.

In order to further illustrate the effectiveness of our method, we compare our results with our implementation of DQN trained on 10M frames. The results are illustrated in Fig. 2. We observe a better performance on 46 out of 49 games, demonstrating in a fair way the potential of our technique.

As suggested by van Hasselt et al. (2015), we use the following score

$$\text{Score}_{\text{Normalized}} = \frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Random}}}{|\text{Score}_{\text{Human}} - \text{Score}_{\text{Random}}|} \tag{6}$$

to summarize the performance of our algorithm in a single number. We normalize the scores of our algorithm, the baseline reported by Mnih et al. (2015), and double DQN (D-DQN) (van Hasselt et al., 2015), and report the training time, mean and median in Table 1. We observe our technique with 10M frames to achieve scores comparable to the D-DQN method trained on 200M frames (van Hasselt et al., 2015), while it outperforms the DQN method (Mnih et al., 2015) by a large margin. We believe that our method can be readily combined with other techniques developed for DQN, such as D-DQN (van Hasselt et al., 2015), prioritized experience replay (Schaul et al., 2016), dueling networks (Wang et al., 2015), and asynchronous methods (Mnih et al., 2016) to further improve the accuracy and training speed.
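For reference, the two evaluation scores above reduce to a couple of one-line helpers (a sketch, not the authors' code):

```python
def improvement(agent, baseline, human, random):
    """Improvement metric of Eq. (5), in percent."""
    return 100.0 * (agent - baseline) / (max(human, baseline) - random)

def normalized_score(agent, human, random):
    """Human-normalized score of Eq. (6), in percent."""
    return 100.0 * (agent - random) / abs(human - random)
```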
In Fig. 3 we illustrate the evolution of the score for our algorithm and the DQN approach. In addition we demonstrate two further techniques: 'DQN+return' and 'DQN($\lambda$).' 'DQN+return' uses only the discounted future return as a bound, but does not take advantage of the additional constraints we propose. 'DQN($\lambda$)' combines TD-$\lambda$ with the DQN algorithm. We illustrate the performance of those four algorithms on the six games 'Frostbite,' 'Atlantis,' 'Zaxxon,' 'H.E.R.O,' 'Q*Bert,' and 'Chopper Command.' We observe our method to achieve higher scores than the three baselines on the majority of the games. We refer the reader to the supplementary material for additional results.

6 CONCLUSION

In this paper we proposed a novel program for deep Q-learning which propagates promising rewards to achieve significantly faster convergence than the classical DQN. Our method significantly outperforms competing approaches even when trained on a small fraction of the data on the Atari 2600 domain. In the future, we plan to investigate the impact of penalty functions and advanced constrained optimization techniques, and to explore potential synergies with other techniques.
SJ8uwSGVx
review
9: Top 15% of accepted papers, strong accept
This paper proposes an improvement to the Q-learning/DQN algorithm using constraint bounds on the Q-function, which are implemented using quadratic penalties in practice. The proposed change is simple to implement and remarkably effective, enabling both significantly faster learning and better performance on the suite of Atari games.

I have a few suggestions for improving the paper: The paper could be improved by including qualitative observations of the learning process with and without the proposed penalties, to better understand the scenarios in which this method is most useful, and to develop a better understanding of its empirical performance. It would also be nice to include zoomed-out versions of the learning curves in Figure 3, as the DQN has yet to converge. Error bars would also be helpful to judge stability over different random seeds. As mentioned in the paper, this method could be combined with D-DQN. It would be interesting to see this combination, to see if the two are complementary. Do you plan to do this in the final version?

Also, a couple of questions:
- Do you think the performance of this method would continue to improve after 10M frames?
- Could the ideas in this paper be extended to methods for continuous control like DDPG or NAF?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HJ7O61Yxe
ICLR.cc/2017/conference
2017
Modelling Relational Time Series using Gaussian Embeddings
["Ludovic Dos Santos", "Ali Ziat", "Ludovic Denoyer", "Benjamin Piwowarski", "Patrick Gallinari"]
We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.
["Applications", "Deep learning"]
ABSTRACT

We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.

(Both authors contributed equally to this work.)

1 INTRODUCTION

Relational time series, i.e. multiple time series where the observations are correlated both inside each series and between series, occur in many domains such as ecology, medicine, biology, earth observation by satellite imagery or local measurements, multimedia or even social data analysis. The correlations between the different observed series can come from a proximity (e.g. earth observation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In the statistical literature, the modeling of relational time series has been the topic of a dedicated field: spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different methodologies have been developed for handling a large variety of spatio-temporal phenomena, with an emphasis on the analysis of natural observations like weather prediction, ecology or remote sensing. In the machine learning domain, there exists a vast literature dedicated to sequence or time series prediction. Recently, deep recurrent neural networks have witnessed notable successes in different sequence and time series modeling tasks, leading to an increasing number of publications, e.g. (Barbounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despite a large number of recent developments, the modeling and analysis of relational time series has attracted only little attention in the field of representation learning. In addition, most of the models are deterministic in the sense that they are trained to learn a fixed mapping for modeling the dynamics of the series.

We propose a new state space model for relational time series able to model the uncertainty at the observation and at the modeling levels. The principle of this approach is to associate each point of a time series to a Gaussian distribution in a latent space, the distribution over the observed values being directly computed from these latent distributions. The model has two main components. One is responsible for the dynamics in the latent space. This component is thus modeling the evolution of the Gaussian distribution considering both the temporal intra-series and the relational inter-series dependencies. A second component acts as a decoder and maps the latent representations associated with each series to the corresponding observations in the output space.

The contributions of the paper are thus: (i) a new dynamical model for relational time series inspired by representation learning; (ii) a stochastic component for modeling the uncertainties at the observation and dynamic levels.

The paper is organized as follows.
In Section 2 we introduce some related work on forecasting in time series, representation learning for time series, and recent deep learning work focusing on modeling uncertainty. The model is presented in Section 3 together with four different variants. Section 4 presents experimental results on four datasets, and Section 5 concludes this work and gives some perspectives.

2 RELATED WORK

The classical topic of time series modeling and forecasting has given rise to an extensive literature. In statistics, classical linear models include many variations around auto-regressive and moving average models (De Gooijer & Hyndman (2006)). In machine learning, non-linear extensions of these models based on neural networks have been proposed as early as the 90s, opening the way to many other non-linear models including kernel methods (Muller et al. (1999)).

Relational time series have mainly been studied in the field of spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Traditional methods first relied on a descriptive approach using the first and second-order moments of the process for modeling the spatio-temporal dependencies. More recently, dynamical state models, where the current state is conditioned on the past, have been explored (Wikle (2015)). These models have been considered both for continuous/discrete space and time components. However, the most common way is to consider discrete time, leading to the modeling of time series of spatial processes as we do here. When space is discrete, the model comes down to a general vectorial autoregressive formulation. These models face a curse of dimensionality in the case of a large number of sources. Different strategies have been adopted to solve this problem, such as embedding the spatio-temporal process in a low-dimensional manifold or parameter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machine learning for modeling dynamical phenomena. Also, for complex underlying processes, observations only provide an incomplete description of the process dynamics, so that modeling uncertainty at the data and model levels is an important topic.

In the last 10 years, there has been a growing interest in learning latent representations, for example through neural networks and deep learning. Dynamical state space models such as recurrent neural networks (RNN), which have been used for time series forecasting in different contexts since the early nineties (Connor et al. (1994)), have recently witnessed important successes in different areas for general sequence modeling problems, leading to breakthroughs in domains like speech (Graves et al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), and many others. Among this family, the model closest to ours is the dynamic factor graph model of (Mirowski & LeCun (2009)) designed for multiple series modeling for the tasks of forecasting and imputation. However, this model does not consider relational dependencies, which is the focus of our approach.

Most of the above models make use of pointwise representations and do not explicitly model the uncertainties present in the process and/or in the observations. Recently, in the representation learning community, there has been a growing interest in using distributions as latent representations instead of points.
(Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. (2016)) all make use of Gaussian distributions for representing different items like words (Vilnis & McCallum (2015)), nodes in knowledge graphs (He et al. (2015)) or nodes in graphs for transductive classification (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time series prediction, but they have mainly been considered for univariate time series prediction (Hachino & Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)) and they do not use a state space formulation.

Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) deal with uncertainty by modeling distributions in the observation space, mapping random variables within a latent space to observations with a deep neural network. Extensions of the variational inference method to time series have been proposed (Fraccaro et al. (2016); Krishnan et al. (2015)) but, contrary to those works, we take into account relationships (both temporal and relational). Furthermore, in our model, we work directly with random variables to predict observations from time series. This gives us direct access to the output distribution with no need to sample or work with intractable distributions.

Our model is built on top of the model in (Ziat et al. (2016)), which proposes a deterministic dynamical process model but does not consider any explicit modeling of uncertainty. In this paper, we propose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of the model in (Ziat et al. (2016)).

3 FORECASTING OF RELATIONAL TIME SERIES

3.1 NOTATIONS AND TASKS

Let us consider a set of $n$ temporal sequences $x_1, \ldots, x_n$ such that $x_i^{(t)} \in \mathbb{R}$ is the value of the $i$th sequence at time $t$, defined by $x_i = (x_i^{(1)}, \ldots, x_i^{(T)})$, $T$ being the number of observed time steps. (For simplicity, we consider univariate time series, but the model can be trivially extended to multivariate time series.) For simplification, we consider that all the series have the same length, but this is not restrictive.

We model the dependencies between the different series through a graph, the different series sources being the graph vertices and the links modeling explicit dependencies between the sources. These links can reflect a spatial proximity between the sources of the series, a similarity of behavior between users or any other predefined relation. These explicit relations will be modeled in the latent space. Our hypothesis is that they will constrain the representations of linked sources to be similar to one another in the latent space, this similarity being controlled by the strength of the link between the two time series, denoted $e_{i,j}$. We assume that the graph structure is static in time and is provided as prior information. The model can be extended to learn these static dependencies but this is not considered here.

Let us denote $\tau$ the size of the prediction horizon. The forecasting problem considered here is to compute for all series $i$ the values $x_i^{(T+k)}$ for all $k$ in $[1, \tau]$. Note that the model can be straightforwardly extended to the imputation problem that aims at predicting missing values.

3.2 INFORMAL DESCRIPTION

The proposed model is a dynamic state space model: the dynamics are modeled in a continuous latent state space and the observations are generated from states in this latent space. State space models have already been considered for multiple time series (e.g. Mirowski & LeCun (2009)) and for spatio-temporal processes (e.g. Wikle & Hooten (2010)). Both the observations and the dynamics are subject to uncertainties.
Usually, the observations correspond to a partial view of the underlying generating process, and the dynamics, being hidden, are not directly accessible and should be modeled as a stochastic process.

To handle this uncertainty, we propose a model, namely the Relational Dynamic model with Gaussian representations (RDG), that represents latent factors as distributions in a latent space and learns the series dynamics in this latent space. The distributions themselves are estimated using observations like for any other representation learning model. Besides being more adapted to handling the noise inherent to the process and to the observations, the model can be used to predict the posterior distribution of the variables associated to the series and in particular the confidence or variance associated to the predictions.

The model is an extension of the deterministic model of (Ziat et al. (2016)) and has two main components. (i) Decoding component: we consider that each series corresponds to a particular trajectory in an unknown latent space. Each series $x_i^{(1)}, \ldots, x_i^{(T)}$ is thus associated to a series of random variables in $\mathbb{R}^d$ denoted $Z_i^{(1)}, \ldots, Z_i^{(T)}$, $Z_i^{(t)}$ being the latent factor explaining the observed value of series $i$ at time $t$ and $d$ the size of the latent space. We model each $Z_i^{(t)}$ as a multivariate normal variable $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$. The observation can be computed from this latent distribution by using a decoding function mapping $Z_i^{(t)}$ to $X_i^{(t)} = f(Z_i^{(t)})$. (ii) Dynamic component: the second component models the series dynamics in the latent space. We suppose that the dynamics can be captured for all series through a function $h$ that maps the latent random variable $Z_i^{(t)}$ to the next latent variable $Z_i^{(t+1)} = h(Z_i^{(t)})$. The function $h$ is thus modeling the time dynamics. In addition, constraints are introduced to reflect prior knowledge about the relational dependency structure of the series. For any couple of series $i$ and $j$ with a known dependency, i.e. such that $e_{i,j} > 0$, we add a corresponding constraint on $Z_i^{(t)}$ and $Z_j^{(t)}$ as explained in Section 3.3.3.

In the following, we explain how the distributions corresponding to the random variables $Z_i^{(t)}$ are learned, jointly with the functions $f$ (decoder component) and $h$ (dynamic component).

3.3 MODEL DEFINITION

We suppose that the random variables $Z_i^{(t)}$ follow a Gaussian distribution. Let us denote $Z_i^{(t)} \sim \mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$ a distribution where $\mu_i^{(t)}$ and $\Sigma_i^{(t)}$ have to be estimated from known observations. For simplicity, we consider in the following that $\Sigma_i^{(t)}$ is a diagonal matrix, with $\Sigma_{i,j}^{(t)}$ denoting the $j$th value of the diagonal of $\Sigma_i^{(t)}$.

We define a global loss function $\mathcal{L}(\mu, \Sigma, f, h)$ where $\mu$ and $\Sigma$ are the means and covariance matrices for all the series and for all the time steps between $1$ and $T$. The loss is a sum of three terms: (i) a decoding loss $\Delta_{De}$, (ii) a dynamical loss $\Delta_{Dy}$ and (iii) a structural loss $\Delta_{R}$:

$$\mathcal{L}(\mu, \Sigma, f, h) = \sum_{i=1}^{n} \sum_{t=1}^{T} \Delta_{De}(f(Z_i^{(t)}), x_i^{(t)}) + \lambda_{Dy} \sum_{i=1}^{n} \sum_{t=1}^{T-1} \Delta_{Dy}(Z_i^{(t+1)}, h(Z_i^{(t)})) + \lambda_{R} \sum_{i,j} \sum_{t=1}^{T} e_{i,j} \Delta_{R}(Z_i^{(t)}, Z_j^{(t)}) \tag{1}$$

where $\lambda_{Dy}$ and $\lambda_{R}$ are hyperparameters weighting the importance of the different elements in the loss function. The first term corresponds to the decoding component, and forces both $f$ and the learned distributions of variables $Z$ to "explain" the observations; the second term, the dynamic component, encourages $h$ to model the time dynamics in the latent space; while the third term captures the relations between the pairs of series.
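To fix ideas, the composition of the three terms in Eq. (1) could be sketched as follows; the per-term callables delta_de, delta_dy and delta_r are placeholders for the concrete losses defined in Sections 3.3.1-3.3.3:

```python
def rdg_loss(mu, sig, x, e, delta_de, delta_dy, delta_r, lam_dy, lam_r):
    """Global loss of Eq. (1), as a weighted sum of three terms.

    mu, sig : [n, T, d] arrays of means and diagonal variances of Z_i^(t)
    x       : [n, T] observations
    e       : [n, n] relation weights e_{i,j}
    """
    n, T, _ = mu.shape
    decode = sum(delta_de(mu[i, t], sig[i, t], x[i, t])
                 for i in range(n) for t in range(T))
    dynamic = sum(delta_dy(mu[i, t], sig[i, t], mu[i, t + 1], sig[i, t + 1])
                  for i in range(n) for t in range(T - 1))
    struct = sum(e[i, j] * delta_r(mu[i, t], sig[i, t], mu[j, t], sig[j, t])
                 for i in range(n) for j in range(n) if e[i, j] > 0
                 for t in range(T))
    return decode + lam_dy * dynamic + lam_r * struct
```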
In the following, we use for $f$ a linear function, and $h$ will be either a linear or a non-linear function (see Section 3.3.2).

Learning: learning the model is performed through the minimization of the loss function $\mathcal{L}(\mu, \Sigma, f, h)$ with respect to $\mu$, $\Sigma$, $f$ and $h$. To simplify the notations, the parameters of $f$ and $h$ are not made explicit; $f$ and $h$ are supposed to be differentiable. At the end of the learning process, all the latent distributions for each of the time steps are known for the training data, as well as the decoding function $f$ and the dynamical one $h$. We used ADAM (Kingma & Ba (2015)) as a stochastic gradient descent technique. This optimization can easily be performed on a large-scale dataset and/or by using GPUs.

3.3.1 FROM LATENT SPACE TO OBSERVATIONS

The mapping onto the latent space is learned so that the values $x_i^{(t)}$ of each series can be predicted from their respective Gaussian embedding $Z_i^{(t)}$ through the $f$ function. We define below two alternative decoding loss functions $\Delta_{De}$, used in the experiments for measuring the error between the predicted distribution $f(Z_i^{(t)})$ and the observation $x_i^{(t)}$. Other losses could be used with the same model.

The first loss measures the difference between the expected value of $f$ and the observation using a mean-square error:

$$\Delta_{De1}(f(Z_i^{(t)}), x_i^{(t)}) \stackrel{\text{def}}{=} \left( \mathbb{E}[f(Z_i^{(t)})] - x_i^{(t)} \right)^2 \tag{2}$$

When considering a linear decoding function such as $f(\cdot) = \langle \theta, \cdot \rangle$, $\theta$ being the set of parameters of $f$, $\Delta_{De1}$ can be rewritten as:

$$\Delta_{De1}(f(Z_i^{(t)}), x_i^{(t)}) = \left( \langle \theta, \mu_i^{(t)} \rangle - x_i^{(t)} \right)^2 \tag{3}$$

The second loss aims at measuring the distance between the random variable modeling the predicted observations and the observations. This is the expectation of the mean squared error between the predictions and the observations:

$$\Delta_{De2}(f(Z_i^{(t)}), x_i^{(t)}) \stackrel{\text{def}}{=} \mathbb{E}\left[ (f(Z_i^{(t)}) - x_i^{(t)})^2 \right] \tag{4}$$

When $f$ is a linear function, this loss can be written as:

$$\Delta_{De2}(f(Z_i^{(t)}), x_i^{(t)}) = \sum_{k=1}^{d} \theta_k^2 \Sigma_{i,k}^{(t)} + \left( \langle \theta, \mu_i^{(t)} \rangle - x_i^{(t)} \right)^2 \tag{5}$$

Minimizing $\Delta_{De1}$ only updates the mean of the distributions, whereas minimizing $\Delta_{De2}$ updates both the mean and the variance. More specifically, an observed value with $\Delta_{De2}$ will pull the variances $\Sigma_i^{(t)}$ down. This is an interesting property, since observing values should reduce the variance of the representation. Moreover, this effect will be higher for the dimensions of the latent space where the value of $\theta$ is higher. This is sensible since variance is reduced for the dimensions that are important for the prediction.
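As a small illustration, the two closed-form decoding losses for the linear decoder (Eqs. (3) and (5)) could be written as follows, a sketch assuming NumPy arrays and a diagonal covariance:

```python
import numpy as np

def delta_de1(theta, mu, x):
    """Eq. (3): squared error on the expectation, linear decoder <theta, .>."""
    return (theta @ mu - x) ** 2

def delta_de2(theta, mu, sig_diag, x):
    """Eq. (5): expected squared error; the extra variance term also
    pulls the diagonal variances down when minimized."""
    return np.sum(theta ** 2 * sig_diag) + (theta @ mu - x) ** 2
```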
3.3.2 MODELING DYNAMICS

The loss function $\Delta_{Dy}$ aims at finding values $Z_i^{(\cdot)}$ and a dynamic model $h$ that will be used to predict the representation of the next state of time series $i$, $Z_i^{(t+1)}$. The function $h$ maps a distribution $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$ to $\mathcal{N}(\mu_i^{(t+1)}, \Sigma_i^{(t+1)})$. Based on (Vilnis & McCallum (2015); Dos Santos et al. (2016)), we use a Kullback-Leibler divergence (noted $D_{KL}(\cdot || \cdot)$) to compare the distribution at $(t+1)$ to the distribution predicted by $h$.

We propose in the following two alternative functions for $h$. For the first one, we consider that the latent representation at time $(t+1)$ is a linear transformation of the latent distribution at time $t$. The transformed variable is also a Gaussian and its parameters can be easily computed. In this case, $h$ is a linear function from $\mathbb{R}^d$ to $\mathbb{R}^d$ which is represented by a matrix $A \in M_{d,d}(\mathbb{R})$:

$$\Delta_{Dy1}(Z_i^{(t+1)}, h(Z_i^{(t)})) \stackrel{\text{def}}{=} D_{KL}(Z_i^{(t+1)} \,||\, h(Z_i^{(t)})) = D_{KL}(Z_i^{(t+1)} \,||\, \mathcal{N}(A \mu_i^{(t)}, A \Sigma_i^{(t)} A^T)) \tag{6}$$

Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we used two non-linear multilayer perceptrons (MLP), one $h_m$ for predicting the means and one $h_c$ for predicting the variance: the next mean is given by $\mu_i^{(t+1)} = h_m(\mu_i^{(t)}, \Sigma_i^{(t)})$, and the next variance by $\Sigma_i^{(t+1)} = h_c(\mu_i^{(t)}, \Sigma_i^{(t)})$. This gives:

$$\Delta_{Dy2}(Z_i^{(t+1)}, h(Z_i^{(t)})) \stackrel{\text{def}}{=} D_{KL}(Z_i^{(t+1)} \,||\, \mathcal{N}(h_m(\mu_i^{(t)}, \Sigma_i^{(t)}), h_c(\mu_i^{(t)}, \Sigma_i^{(t)}))) \tag{7}$$

Note that in the second case, we also make the hypothesis that the resulting distribution (for $Z_i^{(t+1)}$) is Gaussian. In the two cases, the KL divergence between the two Gaussian distributions has a simple analytic form from which the gradient can be easily computed:

$$D_{KL}(Z_i^{(t)} \,||\, Z_j^{(t)}) = \frac{1}{2} \left( \mathrm{tr}\big(\Sigma_j^{(t)-1} \Sigma_i^{(t)}\big) + (\mu_j^{(t)} - \mu_i^{(t)})^T \Sigma_j^{(t)-1} (\mu_j^{(t)} - \mu_i^{(t)}) - d - \log \frac{|\Sigma_i^{(t)}|}{|\Sigma_j^{(t)}|} \right)$$

3.3.3 STRUCTURAL REGULARIZATION TERM

At last, $\Delta_R$ corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structure dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model. We define this regularization loss as:

$$\Delta_R(Z_i^{(t)}, Z_j^{(t)}) = D_{KL}(Z_i^{(t)} \,||\, Z_j^{(t)}) \tag{8}$$

which again has, for Gaussian random variables, a simple analytical form that can be used for learning.

Minimizing the regularization term $\Delta_R$ has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality:

$$d_{TV}\left( f(Z_i^{(t)}), f(Z_j^{(t)}) \right) \leq \sqrt{\frac{D_{KL}(Z_i^{(t)} \,||\, Z_j^{(t)})}{2}} \tag{9}$$

with $d_{TV}$ being the total variation distance of probability measures, i.e.:

$$d_{TV}(X, Y) = \sup_{A \in \text{Borel}} |D_X(A) - D_Y(A)| \tag{10}$$

with $X$ and $Y$ being two random variables of density distributions respectively $D_X$ and $D_Y$, and Borel being the Borel set of $\mathbb{R}^n$ (roughly, cuboids in $\mathbb{R}^n$). This means that having relatively similar representations (regarding the KL-divergence) constrains the predicted values to be similar. For more details see Appendix A.

3.4 INFERENCE

During inference, when forecasting values, the latent distributions at $(T+1)$ are deduced from the ones at time $T$ and follow $\mathcal{N}(h(\mu_i^{(T)}, \Sigma_i^{(T)}))$, distributions at $(T+2)$ follow $\mathcal{N}(h(h(\mu_i^{(T)}, \Sigma_i^{(T)})))$, and so on.
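Since both the dynamical and structural terms reduce to Gaussian KL divergences, a sketch of the closed form above, specialized to the diagonal covariances used in this model, might read:

```python
import numpy as np

def kl_diag_gauss(mu_i, var_i, mu_j, var_j):
    """KL(N(mu_i, diag(var_i)) || N(mu_j, diag(var_j))): the closed-form
    divergence for two Gaussians with diagonal covariances."""
    d = mu_i.shape[0]
    return 0.5 * (np.sum(var_i / var_j)
                  + np.sum((mu_j - mu_i) ** 2 / var_j)
                  - d
                  + np.sum(np.log(var_j / var_i)))
```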
4 EXPERIMENTS

4.1 DATASETS AND BASELINES

Experiments have been performed on four datasets respectively extracted from Google Flu Trends (http://www.google.org/flutrends), WHO (http://www.who.int) and from two datasets from Grand Lyon (GL) (http://data.grandlyon.com), respectively data from traffic conditions and from car park occupancy. All the series are normalized. For all datasets, we used binary dependency relations indicating whether two series are related or not. The Google Flu Trends (GFT) dataset is composed of an aggregation of weekly Google search queries related to the flu in 29 countries. This dataset spans about ten years of time. The binary relations between series are defined a priori so that the series of two countries $i$ and $j$ are linked, i.e. $e_{i,j} = 1$ in Equation (1), only if the countries have a common frontier. There are 96 relations in all. The GL Traffic (GL-T) dataset corresponds to the traffic conditions of the 50 busiest roads of the city of Lyon (France). Data is aggregated on 20-minute windows spanning 15 days. The binary relations between series are based on the geographical proximity of roads. There are 130 relations in total. The GL Park (GL-P) dataset represents the occupancy of public car parks in Lyon. The series correspond to the occupancy of the 30 busiest car parks. It has the same window and period of time as the previous dataset, and the binary relations between series are based on the geographical proximity of car parks. There are 74 relations in total. The WHO dataset provides the number of deaths caused by diphtheria over 91 different countries, giving rise to 91 time series. The binary relations between series are defined so that two series are linked if the corresponding countries share a common frontier. There are 228 links in total.

We compare our approach with five baselines. Auto-Regressive (AR): a monovariate linear auto-regressive model. It computes its predictions based on a learned linear function of a fixed number $p$ of past values of the series. The order $p$ of the model is a hyperparameter selected by grid search. Feed-Forward Neural Network (FFNN): representative of non-linear auto-regressive models of order $p$ where the non-linear function is modeled as a feed-forward neural network with one hidden layer of size $s$. In this case, $p$ and $s$ are hyperparameters selected by grid search. RNN: a recurrent neural network with one hidden layer of size $s$ of recurrent units and tanh non-linearities. The RNN model is a state space non-linear auto-regressive model with exogenous inputs (the past values of the series). Note that this model should in principle be able to learn the inter-series dependencies, but the dependencies are not modeled explicitly as they are in our model. Also, the RNN does not introduce explicit modeling of uncertainties. KF (Kalman (1960)): a classic Kalman Filter with linear transformations from one state to another. DFG (Mirowski & LeCun (2009)): a state-of-the-art model that learns continuous deterministic latent variables by modeling the dynamics and the joint probabilities between series. All the hyperparameters of the baselines have been set using a validation set by grid search, including the best architecture for the dynamic model $h$ (a multi-layer perceptron with one hidden layer or a linear model).

For the evaluation we have considered a root-mean-square error (RMSE) criterion. Regarding the experimental protocol, models are evaluated using cross-validation with rolling origin.

Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDG_{k,l} corresponds to the variant with losses ($\Delta_{De_k}$, $\Delta_{Dy_l}$). (a) RMSE from T+1 to T+5 on GL-T. (b) RMSE at T+1 on the four datasets:

  Model      GL-T    GL-P    GFT     WHO
  AR         0.0752  0.0892  0.0626  0.0832
  FFNN       0.0751  0.0894  0.045   0.0838
  RNN        0.0709  0.0890  0.0431  0.0795
  KF         0.0711  0.0833  0.0388  0.0799
  DFG        0.0712  0.0911  0.0592  0.0795
  RDG_{1,1}  0.0742  0.0902  0.0607  0.0848
  RDG_{1,2}  0.0707  0.0834  0.0434  0.0796
  RDG_{2,1}  0.0765  0.0896  0.0589  0.0831
  RDG_{2,2}  0.0718  0.0828  0.0429  0.0795

4.2 RESULTS

Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1 in Figure 1b. We have tested the four variants of our approach, i.e. combinations of $\Delta_{De1}$ or $\Delta_{De2}$ with $\Delta_{Dy1}$ or $\Delta_{Dy2}$. The proposed model obtains the best results on all the datasets except GFT, where KF performs better. Otherwise it outperforms the baselines on two datasets (GL-P -Grand Lyon Parks- and GFT -Google Flu Trends- in the table) and gets results similar to the RNN on the two others (GL-T -Grand Lyon Traffic- and WHO).
The non-linear dynamical model used for $\Delta_{Dy2}$ usually gets better results than the other models, the best combination being the use of the MSE expectation error for the decoder and the non-linear model for the dynamics (denoted RDG_{2,2} in the figure).

Figure 1a shows the prediction quality (RMSE) at (T+1), (T+2), (T+3), (T+4) and (T+5) and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the Recurrent Neural Network. One can remark that at (T+5) KF does not go the distance: it performs well at (T+1) but quite badly at (T+5) in comparison to the other baselines.

RDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for an RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance computed through the Gaussian embeddings. First, one can see that the ground truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is then able to predict relevant confidence values for its predictions.

Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG_{2,2} model showing its range of confidence: $\mathbb{E}(f(Z^{(t)})) \pm \mathrm{var}(f(Z^{(t)}))$. The prediction at $25+n$ corresponds to $f(h^n(Z^{(25)}))$.

Comparison between RDG with/without structural regularization or uncertainty: we compare in Table 1 the results of our model when taking into account the neighborhood graph ($\lambda_R \neq 0$) or not ($\lambda_R = 0$): forecasts are uniformly worse for all datasets when we do not take into account the neighborhood graph, which suggests that the regularizer improves the model when the input graph is relevant. Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in (Ziat et al. (2016)) (denoted Rainstorm): here again, our model outperforms the previous one for all the datasets.

  Model                  GL-T    GL-P    GFT     WHO
  Rainstorm              0.0710  0.0886  0.0440  0.0804
  RDG ($\lambda_R = 0$)  0.0719  0.0900  0.0441  0.0807
  RDG                    0.0707  0.0828  0.0388  0.0795

Table 1: RMSE at T+1 on the four datasets.

5 CONCLUSION AND FUTURE WORK

We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as observing the behavior of the model for imputation tasks.
SyxnNWM4e
Interesting idea but formulation and experiments not convincing
4: Ok but not good enough - rejection
This manuscript proposes an approach for modeling correlated timeseries through a combination of loss functions which depend on neural networks. The loss functions correspond to: a data fit term, an autoregressive latent state term, and a term which captures relations between pairs of timeseries (relations have to be given as prior information).

Modeling relational timeseries is a well-researched problem, however little attention has been given to it in the neural network community. Perhaps the reason for this is the importance of having uncertainty in the representation. The authors correctly identify this need and consider an approach which considers distributions in the state space. The formulation is quite straightforward, combining loss functions. The model adds to Ziat et al. 2016 in certain aspects which are well motivated, but unfortunately implemented in an unconvincing way.

To start with, uncertainty is not treated in a very principled way, since the inference in the model is rather naive; I'd expect employing a VAE framework [1] for better uncertainty handling. Furthermore, the Gaussian covariance collapses into a variance, which is the opposite of what one would want for modeling correlated time-series. There are approaches which take these correlations into account in the states, e.g. [2]. Moreover, the treatment of uncertainty only allows for a linear decoding function f. This significantly reduces the power of the model. State-of-the-art methods in timeseries modeling have moved beyond this constraint, especially in the Gaussian process community, e.g. [2,3,4,5]. Comparing to a few of these methods, or at least discussing them, would be useful.

References:
[1] Kingma and Welling. Auto-encoding Variational Bayes. arXiv:1312.6114
[2] Damianou et al. Variational Gaussian process dynamical systems. NIPS 2011.
[3] Mattos et al. Recurrent Gaussian processes. ICLR 2016.
[4] Frigola. Bayesian Time Series Learning with Gaussian Processes, University of Cambridge, PhD Thesis, 2015.
[5] Frigola et al. Variational Gaussian Process State-Space Models. NIPS 2014.

One innovation is that the prior structure of the correlation needs to be given. This is a potentially useful and also original structural component. However, it also constitutes a limitation in some sense, since it is unrealistic in many scenarios to have this prior information. Moreover, the particular regularizer that makes "similar" timeseries have closeness in the state space seems problematic. Some timeseries groups might be more "similar" than others, and also the similarity might be of a different nature across groups. These variations cannot be well captured/distilled by a simple indicator variable e_ij. Furthermore, these variables are in practice taken to be binary (by looking at the experiments), which would make it even harder to model rich correlations.

The experiments show that the proposed method works, but they are not entirely convincing. Importantly, they do not shed enough light on the different properties of the model w.r.t. its different parts, for example the effect and sensitivity of the different regularizers. The authors state in a pre-review answer that they amended with some more results, but I can't see a revision in openreview (please let me know if I've missed it). From the performance point of view, the results are not particularly exciting, especially given the fact that it's not clear which loss is better (making it difficult to use the method in practice).
It would also be very interesting to report the optimized values of the parameters \lambda, to get an idea of how the different losses behave. Timeseries analysis is a very well-researched area. Given the above, it's not clear to me why one would prefer to use this model over other approaches. Methodology-wise, there are no novel components that offer a proven advantage with respect to past methods. The uncertainty in the states and the correlation of the time-series are the aspects which could add an advantage, but they are not adequately researched in this paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HJ7O61Yxe
ICLR.cc/2017/conference
2017
Modelling Relational Time Series using Gaussian Embeddings
["Ludovic Dos Santos", "Ali Ziat", "Ludovic Denoyer", "Benjamin Piwowarski", "Patrick Gallinari"]
We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.
["Applications", "Deep learning"]
ABSTRACTWe address the problem of modeling multiple simultaneous time series where theobservations are correlated not only inside each series, but among the differentseries. This problem happens in many domains such as ecology, meteorology, etc.We propose a new dynamical state space model, based on representation learn-ing, for modeling the evolution of such series. The joint relational and temporaldynamics of the series are modeled as Gaussian distributions in a latent space. Adecoder maps the latent representations to the observations. The two components(dynamic model and decoder) are jointly trained. Using stochastic representationsallows us to model the uncertainty inherent to observations and to predict unob-served values together with a confidence in the prediction.1 I NTRODUCTIONRelational time series, i.e. multiple time series where the observations are correlated both insideeach series and between series occur in many domains such as ecology, medicine, biology, earthobservation by satellite imagery or local measurements, multimedia or even social data analysis.The correlations between the different observed series can come from a proximity (e.g. earth obser-vation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In thestatistical literature, the modeling of relational time series has been the topic of a dedicated field:spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different method-ologies have been developed for handling a large variety of spatio-temporal phenomena, with anemphasis on the analysis of natural observations like weather prediction, ecology or remote sensing.In the machine learning domain, there exists a vast literature dedicated to sequence or time seriesprediction. Recently, deep recurrent neural networks have witnessed notable successes in differentsequence and time series modeling tasks leading to an increasing number of publications, e.g. (Bar-bounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despitea large number of recent developments, the modeling and analysis of relational time series has onlyattracted a few attention in the field of representation learning. In addition, most of the models aredeterministic in the sense that they are trained to learn a fixed mapping for modeling the dynamicsof the series.We propose a new state space model for relational time series able to model the uncertainty at theobservation and at the modeling levels. The principle of this approach is to associate each point ofa time series to a Gaussian distribution in a latent space, the distribution over the observed valuesbeing directly computed from these latent distributions. The model has two main components. Oneis responsible for the dynamics in the latent space. This component is thus modeling the evolutionof the Gaussian distribution considering both the temporal intra-series and the relational inter-seriesBoth authors contributed equally to this work1Under review as a conference paper at ICLR 2017dependencies. A second component acts as a decoder and maps the latent representations associatedwith each series to the corresponding observations in the output space.The contributions of the paper are thus: (i) a new dynamical model for relational time series in-spired by representation learning; (ii) a stochastic component for modeling the uncertainties at theobservation and dynamic levelsThe paper is organized as follows. 
In Section 2 we introduce some related work on forecastingin time series, representation learning for time series, and recent deep learning works focusing onmodeling uncertainty. The model is presented in Section 3 together with four different variants.Section 4 presents experimental results on four datasets, and section 5 concludes this work andgives some perspectives.2 R ELATED WORKThe classical topic of time series modeling and forecasting has given rise to an extensive literature.In statistics, classical linear models include many variations around auto-regressive and movingaverage models (De Gooijer & Hyndman (2006)). In machine learning, non linear extensions ofthese models based on neural networks have been proposed as early as the 90s, opening the way tomany other non linear models including kernel methods (Muller et al. (99)).Relational time series have mainly been studied in the field of spatio-temporal statistics (Cressie &Wikle (2011); Wikle & Hooten (2010)). The traditional method first relied on a descriptive approachusing the first and second-order moments of the process for modeling the spatio-temporal dependen-cies. More recently, dynamical state models, where the current state is conditioned on the past havebeen explored (Wikle (2015)). These models have been considered both for continuous/discretespace and time components. However, the most common way is to consider discrete time, leadingto the modeling of time series of spatial processes as we do here. When space is discrete, the modelcomes down to a general vectorial autoregressive formulation. These models face a curse of dimen-sionality in the case of a large number of sources. Different strategies have been adopted to solve thisproblem such as embedding the spatio-temporal process in a low-dimensional manifold or param-eter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machinelearning for modeling dynamical phenomena. Also, for complex underlying processes, observationsonly provide an incomplete description of the process dynamics so that modeling uncertainty at thedata and model levels is an important topic.In the last 10 years, there has been a growing interest in learning latent representations for examplethrough neural networks and deep learning. Dynamical state space models such as recurrent neuralnetworks (RNN), which have been used for time series forecasting in different contexts since theearly nineties (Connor et al. (1994)), have recently witnessed important successes in different areasfor general sequence modeling problems, leading to breakthroughs in domains like speech (Graveset al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), andmany others. Among this family, the model closest to ours is the dynamic factor graph model of(Mirowski & LeCun (2009)) designed for multiple series modeling for the tasks of forecasting andimputation. However this model does not consider relational dependencies which is the focus of ourapproach.Most of the above models make use of pointwise representations and do not model explicitly theuncertainties present in the process and/or in the observations. Recently, in the learning repre-sentation community, there has been a growing interest in using distributions as latent representa-tions instead of points. (Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. 
(2016)) allmake use of Gaussian distributions for representing different items like words (Vilnis & McCallum(2015)), nodes in knowledge graphs (He et al. (2015)) or nodes in graphs for transductive classifi-cation (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time seriesprediction, but they have mainly been considered for univariate time series prediction (Hachino &Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)) and they do not use a state spaceformulation.Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) dealwith uncertainty by modeling distributions in the observation space, mapping random variableswithin a latent space to observations with a deep neural network. Extension of the variational in-2Under review as a conference paper at ICLR 2017ference method to time series has been proposed (Fraccaro et al. (2016); Krishnan et al. (2015)) butcontrarily to those works, we take into account relationships (both temporal and relational). Fur-thermore, in our model, we work directly with random variables to predict observations from timeseries. This gives us direct access to the output distribution with no need to sample or work withintractable distributions.Our model is built on top of the model in (Ziat et al. (2016)) which proposes a deterministic dy-namical process model but does not consider any explicit modeling of uncertainty. In this paper, wepropose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of themodel in (Ziat et al. (2016)).3 F ORECASTING OF RELATIONAL TIMESERIES3.1 N OTATIONS AND TASKSLet us consider a set of ntemporal sequences1x1;::;xnsuch thatxptqiPRis the value of the ithsequence at time tdefined by xipxp1qi;::;xpTqiq,Tbeing the number of observed time steps. Forsimplification, we consider that all the series have the same length, but this is not restrictive.We model the dependencies between the different series through a graph, the different series sourcesbeing the graph vertices and the links modeling explicit dependencies between the sources. Theselinks can reflect a spatial proximity between the sources of the series, a similarity of behavior be-tween users or any other predefined relation. These explicit relations will be modeled in the latentspace. Our hypothesis is that they will constrain the representation of linked sources to be similarone to another in the latent space, this similarity being controlled by the strength of the link betweenthe two time series, denoted ei;j. We assume that the graph structure is static in time and is providedas a prior information. The model can be extended to learn these static dependencies but this is notconsidered here.Let us denote the size of the prediction horizon. The forecasting problem considered here is tocompute for all series ithe valuesxpTkqi for allkinr1;s. Note that the model can be straightfor-wardly extended to the imputation problem that aims at predicting missing values.3.2 I NFORMAL DESCRIPTIONThe proposed model is a dynamic state space model: the dynamics is modeled in a continuous latentstate space and the observations are generated from states in this latent space. State space modelshave already been considered for multiple time series (e.g. Mirowski & LeCun (2009)) and forspatio-temporal processes (e.g. Wikle & Hooten (2010)).Both the observations and the dynamics are subject to uncertainties. 
Usually, the observations cor-respond to a partial view of the underlying generating process and the dynamics being hidden is notdirectly accessible and should be modeled as a stochastic process.To handle this uncertainty, we propose a model, namely Relational Dynamic model with Gaussianrepresentations ( RDG ), that represents latent factors as distributions in a latent space and learns theseries dynamics in this latent space. The distributions themselves are estimated using observationslike for any other representation learning model. Besides being more adapted to handling the noiseinherent to the process and to the observations, the model can be used to predict the posterior distri-bution of the variables associated to the series and in particular the confidence or variance associatedto the predictions.The model is an extension of the deterministic model of (Ziat et al. (2016)) and has two maincomponents: (i) Decoding component: we consider that each series corresponds to a particulartrajectory in an unknown latent space. Each series xp1qi;::::;xpTqiis thus associated to a series ofrandom variables in RddenotedZp1qi;::::;ZpTqi,Zptqibeing the latent factor explaining the observedvalue of the series iat timetanddthe size of the latent space. We model each Zptqias a multivariate1For simplicity, we consider univariate time series, but the model can be trivially extended to multivariatetime series.3Under review as a conference paper at ICLR 2017normal variable Npptqi;ptqiq. The observation can be computed from this latent distribution byusing a decoding function mappingZptqitoXptqifpZptqiq. (ii) Dynamic component: Thesecond component models the series dynamics in the latent space. We suppose that dynamics canbe captured for all series through a function hthat maps the latent random variable Zptqito the nextlatent variable Zpt1qihpZptqiq. The function his thus modeling the time dynamics. In addition,constraints are introduced to reflect prior knowledge about the relational dependency structure ofthe series. For any couple of series iandjwith a known dependency, i.e. such that ei;j¡0we adda corresponding constraint on ZptqiandZptqjas explained in Section 3.3.3.In the following, we explain how the distributions corresponding to the random variables Zptqiarelearned, jointly to the functions f(decoder component) and h(dynamic component).3.3 M ODEL DEFINITIONWe suppose that the random variables Zptqifollow a Gaussian distribution. Let us denote ZptqiNpptqi;ptqiqa distribution where ptqiandptqihave to be estimated from known observations. Forsimplicity, we consider in the following that ptqiis a diagonal matrix, with ptqi;jdenoting the jthvalue of the diagonal of ptqi.We define a global loss function Lp;;f;hqwhereandare the means and covariance matricesfor all the series and for all the time steps between 1andT. The loss is a sum of three terms: (i) adecoding loss De, (ii) a dynamical loss Dyand (iii) a structural loss R:Lp;;f;hqn ̧i1T ̧t1DepfpZptqiq;xptqiqDyn ̧i1T1 ̧t1DypZpt1qi;hpZptqiqqRn ̧j1T ̧t1ei;jRpZptqi;Zptqjq(1)whereDyandRare hyperparameters weighting the importance of the different elements in the lossfunction. The first term corresponds to the decoding component , and forces both fand the learneddistributions of variables Zto “explain” the observations, the second term, the dynamic component ,encourageshto model the time dynamics in the latent space, while the third term captures therelations between the pairs of series. 
In the following, we use for falinear function andhwill beeither a linear or non-linear function (see Section 3.3.2).Learning: Learning the model is performed through the minimization of the loss functionLp;;f;hqwith respect to ,,fandh. To simplify the notations, the parameters of fandhare not made explicit in the notations – fandhare supposed to be differentiable. At the end ofthe learning process, all the latent distributions for each of the time steps are known for the trainingdata, as well as the decoding function fand the dynamical one h. We used ADAM (Kingma & Ba(2015)) as a stochastic gradient descent technique. This optimization can be easily made on a largescale dataset, and/or by using GPUs.3.3.1 F ROM LATENT SPACE TO OBSERVATIONSThe mapping onto the latent space is learned so that the values xptqiof each series can be predictedfrom their respective Gaussian embedding Zptqithrough the ffunction. We define below two al-ternative decoding loss functions De, used in the experiments for measuring the error between thepredicted distribution fpZptqiqand the observation xptqi. Other losses could be used with the samemodel.Thefirst loss measures the difference between the expected value of fand the observation using amean-square error:De1pfpZptqiq;xptqiqdefEfpZptqiqxptqi2(2)4Under review as a conference paper at ICLR 2017When considering a linear decoding function such as fpq ;¡,being the set of parametersoff,De1can be rewritten as as:De1pfpZptqiq;xptqiqp ;ptqi¡xptqiq2(3)Thesecond loss aims at measuring the distance between the random variable modeling the predictedobservations and the observations. This is the expectation of the mean squared error between thepredictions and the observations:De2pfpZptqiq;xptqiqdefEpfpZptqiqxptqiq2(4)Whenfis a linear function, this loss can be written as:De2pfpZptqiq;xptqiqd ̧k12kptqi;k ;ptqi¡xptqi2(5)Minimizing De1only updates the mean of the distributions, whereas minimizing De2updates boththe mean and the variance. More specifically, an observed value with De2will pull the variancesptqidown. This is an interesting property since observing values should reduce the variance of therepresentation. Moreover, this effect will be higher for the dimensions of the latent space where thevalue ofis higher. This is sensible since variance is reduced for the dimensions that are importantfor the prediction.3.3.2 M ODELING DYNAMICSThe loss function Dyaims at finding values Zp:qiand a dynamic model h, that will be used topredict the representation of the next state of time series i,Zpt1qi . The function hmaps a dis-tribution Npptqi;ptqiqtoNppt1qi;pt1qiq. Based on (Vilnis & McCallum (2015); Dos Santoset al. (2016)), we use a Kullback-Leibler divergence (noted DKLp||q ) to compare the distributionatpt1qto the distribution predicted by h.We propose in the following two alternative functions for h. For the first one, we consider that thelatent representation at time pt1qis a linear transformation of the latent distribution at time t. Thetransformed variable is also a Gaussian and its parameters can be easily computed. 
3.3.2 MODELING DYNAMICS

The loss function $\Delta_{Dy}$ aims at finding values $Z_i^{(\cdot)}$ and a dynamic model $h$ that can be used to predict the representation of the next state of time series $i$, $Z_i^{(t+1)}$. The function $h$ maps a distribution $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$ to $\mathcal{N}(\mu_i^{(t+1)}, \Sigma_i^{(t+1)})$. Based on (Vilnis & McCallum (2015); Dos Santos et al. (2016)), we use a Kullback-Leibler divergence (noted $D_{KL}(\cdot || \cdot)$) to compare the distribution at $(t+1)$ to the distribution predicted by $h$.

We propose in the following two alternative functions for $h$. For the first one, we consider that the latent representation at time $(t+1)$ is a linear transformation of the latent distribution at time $t$. The transformed variable is also Gaussian, and its parameters can be easily computed. In this case, $h$ is a linear function from $\mathbb{R}^d$ to $\mathbb{R}^d$, represented by a matrix $\gamma \in M_{d,d}(\mathbb{R})$:

$\Delta_{Dy_1}(Z_i^{(t+1)}, h(Z_i^{(t)})) \stackrel{def}{=} D_{KL}(Z_i^{(t+1)} || h(Z_i^{(t)})) = D_{KL}(Z_i^{(t+1)} || \mathcal{N}(\gamma \mu_i^{(t)}, \gamma \Sigma_i^{(t)} \gamma^T))$   (6)

Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we use two non-linear multilayer perceptrons (MLPs), one $h_m$ for predicting the means and one $h_c$ for predicting the variances: the next mean is given by $\mu_i^{(t+1)} = h_m(\mu_i^{(t)}, \Sigma_i^{(t)})$ and the next variance by $\Sigma_i^{(t+1)} = h_c(\mu_i^{(t)}, \Sigma_i^{(t)})$. This gives:

$\Delta_{Dy_2}(Z_i^{(t+1)}, h(Z_i^{(t)})) \stackrel{def}{=} D_{KL}(Z_i^{(t+1)} || \mathcal{N}(h_m(\mu_i^{(t)}, \Sigma_i^{(t)}), h_c(\mu_i^{(t)}, \Sigma_i^{(t)})))$   (7)

Note that in the second case, we also make the hypothesis that the resulting distribution (for $Z_i^{(t+1)}$) is Gaussian. In both cases, the KL divergence between two Gaussian distributions has a simple analytic form from which the gradient can be easily computed:

$D_{KL}(Z_i^{(t)} || Z_j^{(t)}) = \frac{1}{2} \left( \mathrm{tr}(\Sigma_j^{(t)-1} \Sigma_i^{(t)}) + (\mu_j^{(t)} - \mu_i^{(t)})^T \Sigma_j^{(t)-1} (\mu_j^{(t)} - \mu_i^{(t)}) - d + \log \frac{|\Sigma_j^{(t)}|}{|\Sigma_i^{(t)}|} \right)$

3.3.3 STRUCTURAL REGULARIZATION TERM

At last, $\Delta_{R}$ corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structural dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model. We define this regularization loss as:

$\Delta_{R}(Z_i^{(t)}, Z_j^{(t)}) = D_{KL}(Z_i^{(t)} || Z_j^{(t)})$   (8)

which again has, for Gaussian random variables, a simple analytical form that can be used for learning.

Minimizing the regularization term $\Delta_{R}$ has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality (an instance of Pinsker's inequality):

$d_{TV}\left( f(Z_i^{(t)}), f(Z_j^{(t)}) \right) \leq \sqrt{\frac{D_{KL}(Z_i^{(t)} || Z_j^{(t)})}{2}}$   (9)

with $d_{TV}$ being the total variation distance between probability measures, i.e.:

$d_{TV}(X, Y) = \sup_{A \in \mathrm{Borel}} |D_X(A) - D_Y(A)|$   (10)

with $X$ and $Y$ being two random variables with density distributions $D_X$ and $D_Y$ respectively, and Borel being the Borel set of $\mathbb{R}^n$ (roughly, cuboids in $\mathbb{R}^n$). This means that having relatively similar representations (with respect to the KL-divergence) constrains the predicted values to be similar. For more details, see Appendix A.
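Both the dynamical losses (6)-(7) and the structural term (8) reduce to the closed-form Gaussian KL divergence given above. A minimal NumPy sketch for the diagonal-covariance case used in the paper (names are ours):

```python
import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """D_KL(N(mu_p, diag(var_p)) || N(mu_q, diag(var_q))):
    the closed form behind the dynamical and structural losses."""
    d = mu_p.shape[0]
    return 0.5 * (np.sum(var_p / var_q)                     # tr(Sigma_q^-1 Sigma_p)
                  + np.sum((mu_q - mu_p) ** 2 / var_q)       # Mahalanobis term
                  - d
                  + np.sum(np.log(var_q)) - np.sum(np.log(var_p)))
```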
3.4 INFERENCE

During inference, when forecasting values, the latent distributions at $(T+1)$ are deduced from the ones at time $T$ and follow $\mathcal{N}(h(\mu_i^{(T)}, \Sigma_i^{(T)}))$, the distributions at $(T+2)$ follow $\mathcal{N}(h(h(\mu_i^{(T)}, \Sigma_i^{(T)})))$, and so on.

4 EXPERIMENTS

4.1 DATASETS AND BASELINES

Experiments have been performed on four datasets, respectively extracted from Google Flu Trends (http://www.google.org/flutrends), WHO (http://www.who.int), and two datasets from Grand Lyon (GL) (http://data.grandlyon.com), corresponding respectively to traffic conditions and car park occupancy. All the series are normalized. For all datasets, we used binary dependency relations indicating whether two series are related or not. The Google Flu Trends (GFT) dataset is composed of an aggregation of weekly Google search queries related to the flu in 29 countries. This dataset spans about ten years. The binary relations between series are defined a priori so that the series of two countries $i$ and $j$ are linked, i.e. $e_{i,j} = 1$ in Equation (1), only if the countries have a common frontier. There are 96 relations in all. The GL Traffic (GL-T) dataset corresponds to the traffic conditions of the 50 busiest roads of the city of Lyon (France). Data is aggregated over 20-minute windows spanning 15 days. The binary relations between series are based on the geographical proximity of roads. There are 130 relations in total. The GL Park (GL-P) dataset represents the occupancy of public car parks in Lyon. The series correspond to the occupancy of the 30 busiest car parks. It has the same window and period of time as the previous dataset, and the binary relations between series are based on the geographical proximity of car parks. There are 74 relations in total. The WHO dataset provides the number of deaths caused by diphtheria over 91 different countries, giving rise to 91 time series. The binary relations between series are defined so that two series are linked if the corresponding countries share a common frontier. There are 228 links in total.

We compare our approach with five baselines. Auto-Regressive (AR): a monovariate linear auto-regressive model. It computes its predictions based on a learned linear function of a fixed number $p$ of past values of the series; the order $p$ of the model is a hyperparameter selected by grid search. Feed-Forward Neural Network (FFNN): representative of non-linear auto-regressive models of order $p$, where the non-linear function is modeled as a feed-forward neural network with one hidden layer of size $s$; in this case, $p$ and $s$ are hyperparameters selected by grid search. RNN: a recurrent neural network with one hidden layer of size $s$ of recurrent units and tanh non-linearities. The RNN model is a state-space non-linear auto-regressive model with exogenous inputs (the past values of the series). Note that this model should in principle be able to learn the inter-series dependencies, but the dependencies are not modeled explicitly as they are in our model; the RNN also does not model uncertainty explicitly. KF (Kalman (1960)): a classic Kalman filter with linear transformations from one state to another. DFG (Mirowski & LeCun (2009)): a state-of-the-art model that learns continuous deterministic latent variables by modeling the dynamics and the joint probabilities between series. All the hyperparameters of the baselines have been set using a validation set by grid search, including the best architecture for the dynamic model $h$ (a multi-layer perceptron with one hidden layer, or a linear model).

For the evaluation, we have considered a root-mean-square error (RMSE) criterion. Regarding the experimental protocol, models are evaluated using cross-validation with rolling origin.

4.2 RESULTS

Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1, in Figure 1b. We have tested the four variants of our approach, i.e. the combinations of $\Delta_{De_1}$ or $\Delta_{De_2}$ with $\Delta_{Dy_1}$ or $\Delta_{Dy_2}$. The proposed model obtains the best results on all the datasets except GFT, where KF performs better. It outperforms the baselines on two datasets (GL-P — Grand Lyon Parks — and GFT — Google Flu Trends — in the table) and gets results similar to the RNN on the two others (GL-T — Grand Lyon Traffic — and WHO).

Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDG_{k,l} corresponds to the variant with losses ($\Delta_{De_k}$, $\Delta_{Dy_l}$). Panel (a) plots the RMSE from T+1 to T+5 on GL-T; panel (b) reports the RMSE at T+1 on the four datasets:

Model      GL-T     GL-P     GFT      WHO
AR         0.0752   0.0892   0.0626   0.0832
FFNN       0.0751   0.0894   0.045    0.0838
RNN        0.0709   0.0890   0.0431   0.0795
KF         0.0711   0.0833   0.0388   0.0799
DFG        0.0712   0.0911   0.0592   0.0795
RDG_{1,1}  0.0742   0.0902   0.0607   0.0848
RDG_{1,2}  0.0707   0.0834   0.0434   0.0796
RDG_{2,1}  0.0765   0.0896   0.0589   0.0831
RDG_{2,2}  0.0718   0.0828   0.0429   0.0795
The non-linear dynamical model used for $\Delta_{Dy_2}$ usually gets better results than the other models, the best combination being the use of the MSE expectation error for the decoder together with the non-linear model for the dynamics (denoted RDG_{2,2} in the figure).

Figure 1a shows the prediction quality (RMSE) at $(T+1)$, $(T+2)$, $(T+3)$, $(T+4)$ and $(T+5)$, and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the recurrent neural network. One can remark that KF does not hold up over longer horizons: it performs well at $(T+1)$ but quite badly at $(T+5)$ in comparison to the other baselines.

RDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for an RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance, computed through the Gaussian embeddings. First, one can see that the ground-truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is then able to predict relevant confidence values for its predictions.

Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG_{2,2} model showing its range of confidence: $\mathbb{E}[f(Z^{(t)})] \pm \mathrm{var}(f(Z^{(t)}))$. The prediction at $25+n$ corresponds to $f(h^n(Z^{(25)}))$.

Comparison between RDG with/without structural regularization or uncertainty. We compare in Table 1 the results of our model when taking the neighborhood graph into account ($\lambda_R \neq 0$) or not ($\lambda_R = 0$): forecasts are uniformly worse on all datasets when we do not take the neighborhood graph into account, which suggests that the regularizer improves the model when the input graph is relevant. Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in (Ziat et al. (2016)) (denoted Rainstorm): here again, our model outperforms the previous one on all the datasets.

Model                GL-T     GL-P     GFT      WHO
Rainstorm            0.0710   0.0886   0.0440   0.0804
RDG ($\lambda_R=0$)  0.0719   0.0900   0.0441   0.0807
RDG                  0.0707   0.0828   0.0388   0.0795

Table 1: RMSE at T+1 on the four datasets.

5 CONCLUSION AND FUTURE WORK

We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as the behavior of the model on imputation tasks.
rkHCIUMVg
Important line of research, muddled presentation and unconvincing empirical results
4: Ok but not good enough - rejection
Because the authors did not respond to reviewer feedback, I am maintaining my original review score. ----- This paper proposes to model relational (i.e., correlated) time series using a deep learning-inspired latent variable approach: they design a flexible parametric (but not generative) model with Gaussian latent factors and fit it using a rich training objective including terms for reconstruction (of observed time series) error, smoothness in the latent state space (via a KL divergence term encouraging neighbor states to be similarly distributed), and a final regularizer that encourages related time series to have similar latent state trajectories. Relations between trajectories are hard coded based on pre-existing knowledge, i.e., latent state trajectories for neighboring (wind speed) base stations should be similar. The model appears to be fit using simple gradient descent. The authors propose several elaborations, including a nonlinear transition function (based on an MLP) and a reconstruction error term that takes variance into account. However, the model is restricted to using a linear decoder. Experimental results are positive but not convincing. Strengths: - The authors target a worthwhile and challenging problem: incorporating the modeling of uncertainty over hidden states with the power of flexible neural net-like models. - The idea of representing relationships between hidden states using KL divergence between their (distributions over) corresponding hidden states is clever. Combined with the Gaussian distribution over hidden states, the resulting regularization term is simple and differentiable. - This general approach -- focusing on writing down the problem as a neural network-like loss function -- seems robust and flexible and could be combined with other approaches, including variants of variational autoencoders. Weaknesses: - The presentation is muddled, especially the model definition in Sec. 3.3. The authors introduce four variants of their model with different combinations of decoder (with and without variance term) and linear vs. MLP transition function. It appears that the 2,2 variant is generally better but not on all metrics and often by small margins. This makes drawing solid conclusions difficult: what each component of the loss contributes, whether and how the nonlinear transition function helps and how much, how in practice the model should be applied, etc. I would suggest two improvements to the manuscript: (1) focus on the main 2,2 variant in Sec. 3.3 (with the hypothesis that it should perform best) and make the simpler variants additional "baselines" described in a paragraph in Sec. 4.1; (2) perform more thorough experiments with larger data sets to make a stronger case for the superiority of this approach. - The authors only allude to learning (with references to gradient descent and ADAM during model description) in this framework. Inference gets its own subsection but only one sentence that ends in an ellipsis (?). - It's unclear what the purpose of introducing the inequality in Eq. 9 is. - Experimental results are not convincing: given the size of the data, the differences vs. the RNN and KF baselines are probably not significant, and these aren't particularly strong baselines (especially if it is in fact an RNN and not an LSTM or GRU). - The position of this paper is unclear with respect to variational autoencoders and related models.
Recurrent variants of VAEs (e.g., Krishnan, et al., 2015) seem to achieve most of the same goals as far as uncertainty modeling is concerned. It seems like those could easily be extended to model relationships between time series using the simple regularization strategy used here. Same goes for Johnson, et al., 2016 (mentioned in separate question). This is a valuable research direction with some intriguing ideas and interesting preliminary results. I would suggest that the authors restructure this manuscript a bit, striving for clarity of model description similar to the papers cited above and providing greater detail about learning and inference. They also need to perform more thorough experiments and present results that tell a clear story about the strengths and weaknesses of this approach.
3: The reviewer is fairly confident that the evaluation is correct
HJ7O61Yxe
ICLR.cc/2017/conference
2017
Modelling Relational Time Series using Gaussian Embeddings
["Ludovic Dos Santos", "Ali Ziat", "Ludovic Denoyer", "Benjamin Piwowarski", "Patrick Gallinari"]
We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.
["Applications", "Deep learning"]
ABSTRACT

We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.

1 INTRODUCTION

Relational time series, i.e. multiple time series where the observations are correlated both inside each series and between series, occur in many domains such as ecology, medicine, biology, earth observation by satellite imagery or local measurements, multimedia, or even social data analysis. The correlations between the different observed series can come from a proximity (e.g. earth observation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In the statistical literature, the modeling of relational time series has been the topic of a dedicated field: spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different methodologies have been developed for handling a large variety of spatio-temporal phenomena, with an emphasis on the analysis of natural observations like weather prediction, ecology or remote sensing. In the machine learning domain, there exists a vast literature dedicated to sequence or time series prediction. Recently, deep recurrent neural networks have witnessed notable successes in different sequence and time series modeling tasks, leading to an increasing number of publications, e.g. (Barbounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despite a large number of recent developments, the modeling and analysis of relational time series has attracted only little attention in the field of representation learning. In addition, most of the models are deterministic, in the sense that they are trained to learn a fixed mapping for modeling the dynamics of the series.

We propose a new state space model for relational time series, able to model the uncertainty at the observation and at the modeling levels. The principle of this approach is to associate each point of a time series to a Gaussian distribution in a latent space, the distribution over the observed values being directly computed from these latent distributions. The model has two main components. One is responsible for the dynamics in the latent space: this component models the evolution of the Gaussian distribution considering both the temporal intra-series and the relational inter-series dependencies. A second component acts as a decoder and maps the latent representations associated with each series to the corresponding observations in the output space. (Both authors contributed equally to this work.)

The contributions of the paper are thus: (i) a new dynamical model for relational time series inspired by representation learning; (ii) a stochastic component for modeling the uncertainties at the observation and dynamic levels.

The paper is organized as follows.
In Section 2, we introduce related work on forecasting in time series, representation learning for time series, and recent deep learning work focusing on modeling uncertainty. The model is presented in Section 3 together with four different variants. Section 4 presents experimental results on four datasets, and Section 5 concludes this work and gives some perspectives.

2 RELATED WORK

The classical topic of time series modeling and forecasting has given rise to an extensive literature. In statistics, classical linear models include many variations around auto-regressive and moving average models (De Gooijer & Hyndman (2006)). In machine learning, non-linear extensions of these models based on neural networks have been proposed as early as the 90s, opening the way to many other non-linear models including kernel methods (Muller et al. (99)).

Relational time series have mainly been studied in the field of spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). The traditional method first relied on a descriptive approach, using the first and second-order moments of the process for modeling the spatio-temporal dependencies. More recently, dynamical state models, where the current state is conditioned on the past, have been explored (Wikle (2015)). These models have been considered both for continuous/discrete space and time components. However, the most common way is to consider discrete time, leading to the modeling of time series of spatial processes as we do here. When space is discrete, the model comes down to a general vectorial autoregressive formulation. These models face a curse of dimensionality in the case of a large number of sources. Different strategies have been adopted to solve this problem, such as embedding the spatio-temporal process in a low-dimensional manifold or parameter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machine learning for modeling dynamical phenomena. Also, for complex underlying processes, observations only provide an incomplete description of the process dynamics, so that modeling uncertainty at the data and model levels is an important topic.

In the last 10 years, there has been a growing interest in learning latent representations, for example through neural networks and deep learning. Dynamical state space models such as recurrent neural networks (RNNs), which have been used for time series forecasting in different contexts since the early nineties (Connor et al. (1994)), have recently witnessed important successes in different areas for general sequence modeling problems, leading to breakthroughs in domains like speech (Graves et al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), and many others. Among this family, the model closest to ours is the dynamic factor graph model of (Mirowski & LeCun (2009)), designed for multiple series modeling for the tasks of forecasting and imputation. However, this model does not consider relational dependencies, which is the focus of our approach.

Most of the above models make use of pointwise representations and do not model explicitly the uncertainties present in the process and/or in the observations. Recently, in the representation learning community, there has been a growing interest in using distributions as latent representations instead of points.
(Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. (2016)) all make use of Gaussian distributions for representing different items like words (Vilnis & McCallum (2015)), nodes in knowledge graphs (He et al. (2015)) or nodes in graphs for transductive classification (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time series prediction, but they have mainly been considered for univariate time series (Hachino & Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)), and they do not use a state space formulation.

Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) deal with uncertainty by modeling distributions in the observation space, mapping random variables within a latent space to observations with a deep neural network. Extensions of the variational inference method to time series have been proposed (Fraccaro et al. (2016); Krishnan et al. (2015)), but contrarily to those works, we take relationships (both temporal and relational) into account. Furthermore, in our model, we work directly with random variables to predict observations from time series. This gives us direct access to the output distribution, with no need to sample or to work with intractable distributions.

Our model is built on top of the model in (Ziat et al. (2016)), which proposes a deterministic dynamical process model but does not consider any explicit modeling of uncertainty. In this paper, we propose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of the model in (Ziat et al. (2016)).

3 FORECASTING OF RELATIONAL TIME SERIES

3.1 NOTATIONS AND TASKS

Let us consider a set of $n$ temporal sequences $x_1, \ldots, x_n$ such that $x_i^{(t)} \in \mathbb{R}$ is the value of the $i$-th sequence at time $t$, defined by $x_i = (x_i^{(1)}, \ldots, x_i^{(T)})$, $T$ being the number of observed time steps. For simplification, we consider that all the series have the same length, but this is not restrictive.

We model the dependencies between the different series through a graph, the different series sources being the graph vertices and the links modeling explicit dependencies between the sources. These links can reflect a spatial proximity between the sources of the series, a similarity of behavior between users, or any other predefined relation. These explicit relations will be modeled in the latent space. Our hypothesis is that they will constrain the representations of linked sources to be similar to one another in the latent space, this similarity being controlled by the strength of the link between the two time series, denoted $e_{i,j}$. We assume that the graph structure is static in time and is provided as prior information. The model could be extended to learn these static dependencies, but this is not considered here.

Let us denote $\tau$ the size of the prediction horizon. The forecasting problem considered here is to compute, for all series $i$, the values $x_i^{(T+k)}$ for all $k$ in $[1, \tau]$. Note that the model can be straightforwardly extended to the imputation problem, which aims at predicting missing values.

3.2 INFORMAL DESCRIPTION

The proposed model is a dynamic state space model: the dynamics are modeled in a continuous latent state space, and the observations are generated from states in this latent space. State space models have already been considered for multiple time series (e.g. Mirowski & LeCun (2009)) and for spatio-temporal processes (e.g. Wikle & Hooten (2010)).

Both the observations and the dynamics are subject to uncertainties.
Usually, the observations correspond to a partial view of the underlying generating process, and the dynamics, being hidden, is not directly accessible and should be modeled as a stochastic process.

To handle this uncertainty, we propose a model, namely the Relational Dynamic model with Gaussian representations (RDG), that represents latent factors as distributions in a latent space and learns the series dynamics in this latent space. The distributions themselves are estimated using observations, as in any other representation learning model. Besides being better suited to handling the noise inherent to the process and to the observations, the model can be used to predict the posterior distribution of the variables associated to the series, and in particular the confidence or variance associated to the predictions.

The model is an extension of the deterministic model of Ziat et al. (2016) and has two main components. (i) Decoding component: we consider that each series corresponds to a particular trajectory in an unknown latent space. Each series $x_i^{(1)}, \ldots, x_i^{(T)}$ is thus associated to a series of random variables in $\mathbb{R}^d$ denoted $Z_i^{(1)}, \ldots, Z_i^{(T)}$, $Z_i^{(t)}$ being the latent factor explaining the observed value of series $i$ at time $t$, and $d$ the size of the latent space. We model each $Z_i^{(t)}$ as a multivariate normal variable $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$. (For simplicity, we consider univariate time series, but the model can be trivially extended to multivariate time series.) The observation can be computed from this latent distribution by using a decoding function $f$ mapping $Z_i^{(t)}$ to $X_i^{(t)} = f(Z_i^{(t)})$. (ii) Dynamic component: the second component models the series dynamics in the latent space. We suppose that the dynamics can be captured for all series through a function $h$ that maps the latent random variable $Z_i^{(t)}$ to the next latent variable $Z_i^{(t+1)} = h(Z_i^{(t)})$. The function $h$ is thus modeling the time dynamics. In addition, constraints are introduced to reflect prior knowledge about the relational dependency structure of the series. For any couple of series $i$ and $j$ with a known dependency, i.e. such that $e_{i,j} > 0$, we add a corresponding constraint on $Z_i^{(t)}$ and $Z_j^{(t)}$, as explained in Section 3.3.3.

In the following, we explain how the distributions corresponding to the random variables $Z_i^{(t)}$ are learned, jointly with the functions $f$ (decoder component) and $h$ (dynamic component).

3.3 MODEL DEFINITION

We suppose that the random variables $Z_i^{(t)}$ follow a Gaussian distribution, $Z_i^{(t)} \sim \mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$, where $\mu_i^{(t)}$ and $\Sigma_i^{(t)}$ have to be estimated from known observations. For simplicity, we consider in the following that $\Sigma_i^{(t)}$ is a diagonal matrix, with $\Sigma_{i,j}^{(t)}$ denoting the $j$-th value of the diagonal of $\Sigma_i^{(t)}$.

We define a global loss function $\mathcal{L}(\mu, \Sigma, f, h)$, where $\mu$ and $\Sigma$ are the means and covariance matrices for all the series and for all the time steps between $1$ and $T$. The loss is a sum of three terms: (i) a decoding loss $\Delta_{De}$, (ii) a dynamical loss $\Delta_{Dy}$ and (iii) a structural loss $\Delta_{R}$:

$\mathcal{L}(\mu, \Sigma, f, h) = \sum_{i=1}^{n} \sum_{t=1}^{T} \Delta_{De}(f(Z_i^{(t)}), x_i^{(t)}) + \lambda_{Dy} \sum_{i=1}^{n} \sum_{t=1}^{T-1} \Delta_{Dy}(Z_i^{(t+1)}, h(Z_i^{(t)})) + \lambda_{R} \sum_{i,j=1}^{n} \sum_{t=1}^{T} e_{i,j} \Delta_{R}(Z_i^{(t)}, Z_j^{(t)})$   (1)

where $\lambda_{Dy}$ and $\lambda_{R}$ are hyperparameters weighting the importance of the different elements in the loss function. The first term corresponds to the decoding component, and forces both $f$ and the learned distributions of the variables $Z$ to "explain" the observations; the second term, the dynamic component, encourages $h$ to model the time dynamics in the latent space; the third term captures the relations between pairs of series. In the following, we use for $f$ a linear function, and $h$ will be either a linear or a non-linear function (see Section 3.3.2).
Learning: Learning the model is performed through the minimization of the loss function $\mathcal{L}(\mu, \Sigma, f, h)$ with respect to $\mu$, $\Sigma$, $f$ and $h$. To simplify the notations, the parameters of $f$ and $h$ are not made explicit; $f$ and $h$ are supposed to be differentiable. At the end of the learning process, all the latent distributions for each of the time steps are known for the training data, as well as the decoding function $f$ and the dynamical one $h$. We used ADAM (Kingma & Ba (2015)) as a stochastic gradient descent technique. This optimization can easily be performed on a large-scale dataset and/or by using GPUs.

3.3.1 FROM LATENT SPACE TO OBSERVATIONS

The mapping onto the latent space is learned so that the values $x_i^{(t)}$ of each series can be predicted from their respective Gaussian embeddings $Z_i^{(t)}$ through the function $f$. We define below two alternative decoding loss functions $\Delta_{De}$, used in the experiments for measuring the error between the predicted distribution $f(Z_i^{(t)})$ and the observation $x_i^{(t)}$. Other losses could be used with the same model.

The first loss measures the difference between the expected value of $f$ and the observation, using a mean-square error:

$\Delta_{De_1}(f(Z_i^{(t)}), x_i^{(t)}) \stackrel{def}{=} \left( \mathbb{E}[f(Z_i^{(t)})] - x_i^{(t)} \right)^2$   (2)

When considering a linear decoding function such as $f(\cdot) = \langle \theta, \cdot \rangle$, $\theta$ being the set of parameters of $f$, $\Delta_{De_1}$ can be rewritten as:

$\Delta_{De_1}(f(Z_i^{(t)}), x_i^{(t)}) = \left( \langle \theta, \mu_i^{(t)} \rangle - x_i^{(t)} \right)^2$   (3)

The second loss aims at measuring the distance between the random variable modeling the predicted observations and the observations. This is the expectation of the mean squared error between the predictions and the observations:

$\Delta_{De_2}(f(Z_i^{(t)}), x_i^{(t)}) \stackrel{def}{=} \mathbb{E}\left[ \left( f(Z_i^{(t)}) - x_i^{(t)} \right)^2 \right]$   (4)

When $f$ is a linear function, this loss can be written as:

$\Delta_{De_2}(f(Z_i^{(t)}), x_i^{(t)}) = \sum_{k=1}^{d} \theta_k^2 \Sigma_{i,k}^{(t)} + \left( \langle \theta, \mu_i^{(t)} \rangle - x_i^{(t)} \right)^2$   (5)

Minimizing $\Delta_{De_1}$ only updates the mean of the distributions, whereas minimizing $\Delta_{De_2}$ updates both the mean and the variance. More specifically, an observed value with $\Delta_{De_2}$ will pull the variances $\Sigma_i^{(t)}$ down. This is an interesting property, since observing values should reduce the variance of the representation. Moreover, this effect is stronger for the dimensions of the latent space where the value of $\theta$ is higher. This is sensible, since variance is reduced for the dimensions that are important for the prediction.
3.3.2 MODELING DYNAMICS

The loss function $\Delta_{Dy}$ aims at finding values $Z_i^{(\cdot)}$ and a dynamic model $h$ that can be used to predict the representation of the next state of time series $i$, $Z_i^{(t+1)}$. The function $h$ maps a distribution $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$ to $\mathcal{N}(\mu_i^{(t+1)}, \Sigma_i^{(t+1)})$. Based on (Vilnis & McCallum (2015); Dos Santos et al. (2016)), we use a Kullback-Leibler divergence (noted $D_{KL}(\cdot || \cdot)$) to compare the distribution at $(t+1)$ to the distribution predicted by $h$.

We propose in the following two alternative functions for $h$. For the first one, we consider that the latent representation at time $(t+1)$ is a linear transformation of the latent distribution at time $t$. The transformed variable is also Gaussian, and its parameters can be easily computed. In this case, $h$ is a linear function from $\mathbb{R}^d$ to $\mathbb{R}^d$, represented by a matrix $\gamma \in M_{d,d}(\mathbb{R})$:

$\Delta_{Dy_1}(Z_i^{(t+1)}, h(Z_i^{(t)})) \stackrel{def}{=} D_{KL}(Z_i^{(t+1)} || h(Z_i^{(t)})) = D_{KL}(Z_i^{(t+1)} || \mathcal{N}(\gamma \mu_i^{(t)}, \gamma \Sigma_i^{(t)} \gamma^T))$   (6)

Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we use two non-linear multilayer perceptrons (MLPs), one $h_m$ for predicting the means and one $h_c$ for predicting the variances: the next mean is given by $\mu_i^{(t+1)} = h_m(\mu_i^{(t)}, \Sigma_i^{(t)})$ and the next variance by $\Sigma_i^{(t+1)} = h_c(\mu_i^{(t)}, \Sigma_i^{(t)})$. This gives:

$\Delta_{Dy_2}(Z_i^{(t+1)}, h(Z_i^{(t)})) \stackrel{def}{=} D_{KL}(Z_i^{(t+1)} || \mathcal{N}(h_m(\mu_i^{(t)}, \Sigma_i^{(t)}), h_c(\mu_i^{(t)}, \Sigma_i^{(t)})))$   (7)

Note that in the second case, we also make the hypothesis that the resulting distribution (for $Z_i^{(t+1)}$) is Gaussian. In both cases, the KL divergence between two Gaussian distributions has a simple analytic form from which the gradient can be easily computed:

$D_{KL}(Z_i^{(t)} || Z_j^{(t)}) = \frac{1}{2} \left( \mathrm{tr}(\Sigma_j^{(t)-1} \Sigma_i^{(t)}) + (\mu_j^{(t)} - \mu_i^{(t)})^T \Sigma_j^{(t)-1} (\mu_j^{(t)} - \mu_i^{(t)}) - d + \log \frac{|\Sigma_j^{(t)}|}{|\Sigma_i^{(t)}|} \right)$

3.3.3 STRUCTURAL REGULARIZATION TERM

At last, $\Delta_{R}$ corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structural dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model. We define this regularization loss as:

$\Delta_{R}(Z_i^{(t)}, Z_j^{(t)}) = D_{KL}(Z_i^{(t)} || Z_j^{(t)})$   (8)

which again has, for Gaussian random variables, a simple analytical form that can be used for learning.

Minimizing the regularization term $\Delta_{R}$ has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality (an instance of Pinsker's inequality):

$d_{TV}\left( f(Z_i^{(t)}), f(Z_j^{(t)}) \right) \leq \sqrt{\frac{D_{KL}(Z_i^{(t)} || Z_j^{(t)})}{2}}$   (9)

with $d_{TV}$ being the total variation distance between probability measures, i.e.:

$d_{TV}(X, Y) = \sup_{A \in \mathrm{Borel}} |D_X(A) - D_Y(A)|$   (10)

with $X$ and $Y$ being two random variables with density distributions $D_X$ and $D_Y$ respectively, and Borel being the Borel set of $\mathbb{R}^n$ (roughly, cuboids in $\mathbb{R}^n$). This means that having relatively similar representations (with respect to the KL-divergence) constrains the predicted values to be similar. For more details, see Appendix A.
3.4 INFERENCE

During inference, when forecasting values, the latent distributions at $(T+1)$ are deduced from the ones at time $T$ and follow $\mathcal{N}(h(\mu_i^{(T)}, \Sigma_i^{(T)}))$, the distributions at $(T+2)$ follow $\mathcal{N}(h(h(\mu_i^{(T)}, \Sigma_i^{(T)})))$, and so on.

4 EXPERIMENTS

4.1 DATASETS AND BASELINES

Experiments have been performed on four datasets, respectively extracted from Google Flu Trends (http://www.google.org/flutrends), WHO (http://www.who.int), and two datasets from Grand Lyon (GL) (http://data.grandlyon.com), corresponding respectively to traffic conditions and car park occupancy. All the series are normalized. For all datasets, we used binary dependency relations indicating whether two series are related or not. The Google Flu Trends (GFT) dataset is composed of an aggregation of weekly Google search queries related to the flu in 29 countries. This dataset spans about ten years. The binary relations between series are defined a priori so that the series of two countries $i$ and $j$ are linked, i.e. $e_{i,j} = 1$ in Equation (1), only if the countries have a common frontier. There are 96 relations in all. The GL Traffic (GL-T) dataset corresponds to the traffic conditions of the 50 busiest roads of the city of Lyon (France). Data is aggregated over 20-minute windows spanning 15 days. The binary relations between series are based on the geographical proximity of roads. There are 130 relations in total. The GL Park (GL-P) dataset represents the occupancy of public car parks in Lyon. The series correspond to the occupancy of the 30 busiest car parks. It has the same window and period of time as the previous dataset, and the binary relations between series are based on the geographical proximity of car parks. There are 74 relations in total. The WHO dataset provides the number of deaths caused by diphtheria over 91 different countries, giving rise to 91 time series. The binary relations between series are defined so that two series are linked if the corresponding countries share a common frontier. There are 228 links in total.

We compare our approach with five baselines. Auto-Regressive (AR): a monovariate linear auto-regressive model. It computes its predictions based on a learned linear function of a fixed number $p$ of past values of the series; the order $p$ of the model is a hyperparameter selected by grid search. Feed-Forward Neural Network (FFNN): representative of non-linear auto-regressive models of order $p$, where the non-linear function is modeled as a feed-forward neural network with one hidden layer of size $s$; in this case, $p$ and $s$ are hyperparameters selected by grid search. RNN: a recurrent neural network with one hidden layer of size $s$ of recurrent units and tanh non-linearities. The RNN model is a state-space non-linear auto-regressive model with exogenous inputs (the past values of the series). Note that this model should in principle be able to learn the inter-series dependencies, but the dependencies are not modeled explicitly as they are in our model; the RNN also does not model uncertainty explicitly. KF (Kalman (1960)): a classic Kalman filter with linear transformations from one state to another. DFG (Mirowski & LeCun (2009)): a state-of-the-art model that learns continuous deterministic latent variables by modeling the dynamics and the joint probabilities between series. All the hyperparameters of the baselines have been set using a validation set by grid search, including the best architecture for the dynamic model $h$ (a multi-layer perceptron with one hidden layer, or a linear model).

For the evaluation, we have considered a root-mean-square error (RMSE) criterion. Regarding the experimental protocol, models are evaluated using cross-validation with rolling origin.

4.2 RESULTS

Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1, in Figure 1b. We have tested the four variants of our approach, i.e. the combinations of $\Delta_{De_1}$ or $\Delta_{De_2}$ with $\Delta_{Dy_1}$ or $\Delta_{Dy_2}$. The proposed model obtains the best results on all the datasets except GFT, where KF performs better. It outperforms the baselines on two datasets (GL-P — Grand Lyon Parks — and GFT — Google Flu Trends — in the table) and gets results similar to the RNN on the two others (GL-T — Grand Lyon Traffic — and WHO).

Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDG_{k,l} corresponds to the variant with losses ($\Delta_{De_k}$, $\Delta_{Dy_l}$). Panel (a) plots the RMSE from T+1 to T+5 on GL-T; panel (b) reports the RMSE at T+1 on the four datasets:

Model      GL-T     GL-P     GFT      WHO
AR         0.0752   0.0892   0.0626   0.0832
FFNN       0.0751   0.0894   0.045    0.0838
RNN        0.0709   0.0890   0.0431   0.0795
KF         0.0711   0.0833   0.0388   0.0799
DFG        0.0712   0.0911   0.0592   0.0795
RDG_{1,1}  0.0742   0.0902   0.0607   0.0848
RDG_{1,2}  0.0707   0.0834   0.0434   0.0796
RDG_{2,1}  0.0765   0.0896   0.0589   0.0831
RDG_{2,2}  0.0718   0.0828   0.0429   0.0795
The non-linear dynamical model used for $\Delta_{Dy_2}$ usually gets better results than the other models, the best combination being the use of the MSE expectation error for the decoder together with the non-linear model for the dynamics (denoted RDG_{2,2} in the figure).

Figure 1a shows the prediction quality (RMSE) at $(T+1)$, $(T+2)$, $(T+3)$, $(T+4)$ and $(T+5)$, and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the recurrent neural network. One can remark that KF does not hold up over longer horizons: it performs well at $(T+1)$ but quite badly at $(T+5)$ in comparison to the other baselines.

RDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for an RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance, computed through the Gaussian embeddings. First, one can see that the ground-truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is then able to predict relevant confidence values for its predictions.

Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG_{2,2} model showing its range of confidence: $\mathbb{E}[f(Z^{(t)})] \pm \mathrm{var}(f(Z^{(t)}))$. The prediction at $25+n$ corresponds to $f(h^n(Z^{(25)}))$.

Comparison between RDG with/without structural regularization or uncertainty. We compare in Table 1 the results of our model when taking the neighborhood graph into account ($\lambda_R \neq 0$) or not ($\lambda_R = 0$): forecasts are uniformly worse on all datasets when we do not take the neighborhood graph into account, which suggests that the regularizer improves the model when the input graph is relevant. Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in (Ziat et al. (2016)) (denoted Rainstorm): here again, our model outperforms the previous one on all the datasets.

Model                GL-T     GL-P     GFT      WHO
Rainstorm            0.0710   0.0886   0.0440   0.0804
RDG ($\lambda_R=0$)  0.0719   0.0900   0.0441   0.0807
RDG                  0.0707   0.0828   0.0388   0.0795

Table 1: RMSE at T+1 on the four datasets.

5 CONCLUSION AND FUTURE WORK

We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as the behavior of the model on imputation tasks.
ryC8AjbVx
Interesting model, further experiments required
4: Ok but not good enough - rejection
In the absence of the authors' response, the rating is maintained. --- This paper introduces a nonlinear dynamical model for multiple related multivariate time series. It models a linear observation model conditioned on the latent variables, a linear or nonlinear dynamical model between consecutive latent variables, and a similarity constraint between any two time series (provided as prior data and non-learnable). The predictions/constraints given by the three components of the model are Gaussian, because the model predicts both the mean and the variance or covariance matrix. Inference is forward only. The model is evaluated on four datasets, and compared to several baselines: plain auto-regressive models, feed-forward networks, RNN and dynamic factor graphs (DFGs), which are RNNs with forward and backward inference of the latent variables. The model, which introduces lateral constraints between different time series, and which predicts both the mean and covariance, seems interesting, but presents two limitations. First of all, the paper should refer to variational auto-encoders / deep gaussian models, which also predict the mean and the variance during inference. Secondly, the datasets are extremely small. For example, the WHO contains only 91 time series of 52*10 = 520 time points. Although the experiments seem to suggest that the proposed model tends to outperform RNNs, the datasets are very small and the high variance in the results indicates that further experiments, with longer time series, are required. The paper could also easily be extended with more information about the model (what is the architecture of the MLP) as well as a time complexity comparison between the models (especially between DFGs and this model). Minor remark: The footnote 2 on page 5 seems to refer to the structural regularization term, not to the dynamical term.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HJrDIpiee
ICLR.cc/2017/conference
2017
Investigating Recurrence and Eligibility Traces in Deep Q-Networks
["Jean Harb", "Doina Precup"]
Eligibility traces in reinforcement learning are used as a bias-variance trade-off and can often speed up training time by propagating knowledge back over time-steps in a single update. We investigate the use of eligibility traces in combination with recurrent networks in the Atari domain. We illustrate the benefits of both recurrent nets and eligibility traces in some Atari games, and highlight also the importance of the optimization used in the training.
["Reinforcement Learning", "Deep learning"]
ABSTRACT

Eligibility traces in reinforcement learning are used as a bias-variance trade-off and can often speed up training time by propagating knowledge back over time-steps in a single update. We investigate the use of eligibility traces in combination with recurrent networks in the Atari domain. We illustrate the benefits of both recurrent nets and eligibility traces in some Atari games, and highlight also the importance of the optimization used in the training.

1 INTRODUCTION

Deep reinforcement learning has had many practical successes in game playing (Mnih et al. (2015), Silver et al. (2016)) and robotics (Levine & Abbeel (2014)). Our interest is in further exploring these algorithms in the context of environments with sparse rewards and partial observability. To this end, we investigate the use of two methods that are known to mitigate these problems: recurrent networks, which provide a form of memory summarizing past experiences, and eligibility traces, which allow information to propagate over multiple time steps. Eligibility traces have been shown empirically to provide faster learning (Sutton & Barto (2017), in preparation), but their use with deep RL has been limited so far (van Seijen & Sutton (2014), Hausknecht & Stone (2015)). We provide experiments in the Atari domain showing that eligibility traces boost the performance of deep RL. We also demonstrate a surprisingly strong effect of the optimization method on the performance of the recurrent networks.

The paper is structured as follows. In Sec. 2 we provide the background and notation needed for the paper. Sec. 3 describes the algorithms we use. In Sec. 4 we present and discuss our experimental results. In Sec. 5 we conclude and present avenues for future work.

2 BACKGROUND

A Markov Decision Process (MDP) consists of a tuple $\langle S, A, r, P, \gamma \rangle$, where $S$ is the set of states, $A$ is the set of actions, $r: S \times A \to \mathbb{R}$ is the reward function, $P(s' | s, a)$ is the transition function (giving the next-state distribution, conditioned on the current state and action), and $\gamma \in [0, 1)$ is the discount factor. Reinforcement learning (RL) (Sutton & Barto, 1998) is a framework for solving unknown MDPs, which means finding a good (or optimal) way of behaving, also called a policy. RL works by obtaining transitions from the environment and using them in order to compute a policy that maximizes the expected return, given by $\mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t r_t \right]$.

The state-value function for a policy $\pi: S \times A \to [0, 1]$, $V^{\pi}(s)$, is defined as the expected return obtained by starting at state $s$ and picking actions according to $\pi$. State-action values $Q^{\pi}(s, a)$ are similar to state values, but conditioned also on the initial action $a$. A policy can be derived from the $Q$ values by always picking the action with the best estimated value at any state.

Monte Carlo (MC) and Temporal Difference (TD) are two standard methods for updating the value function from data.
In MC, an entire trajectory's return is used as the target value of the current state:

MC error $= \sum_{i=0}^{\infty} \gamma^i r_{t+i} - V(s_t)$   (1)

In TD, the estimate of the next state's value is used to correct the current state's estimate:

TD error $= r_t + \gamma V(s_{t+1}) - V(s_t)$   (2)

Q-learning is an RL algorithm that allows an agent to learn by imagining that it will take the best possible action in the following step:

TD error $= r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)$   (3)

This is an instance of off-policy learning, in which the agent gathers data with an exploratory policy, which randomizes the choice of action, but updates its estimates by constructing targets according to a different policy (in this case, the policy that is greedy with respect to the current value estimates).

2.1 ELIGIBILITY TRACES

Eligibility traces are a fundamental reinforcement learning mechanism which allows a trade-off between TD and MC. MC methods suffer from high variance, as many trajectories can be taken from any given state and stochasticity is often present in the MDP. TD suffers from high bias, as it updates values based on its own estimates.

Using eligibility traces allows one to design algorithms that cover the middle ground between MC and TD. The central notion for these are n-step returns, which provide a way of calculating the target by using the value estimate for the state which occurs $n$ steps in the future (compared to the current state):

$R_t^{(n)} = \sum_{i=0}^{n-1} \gamma^i r_{t+i} + \gamma^n V(s_{t+n})$   (4)

When $n$ is 1, the result is the TD target, and taking $n \to \infty$ yields the MC target.

Eligibility traces use a geometric weighting of these n-step returns, where the weight of the $k$-step return is $\lambda$ times the weight of the $(k-1)$-step return. Using $\lambda = 0$ reduces to TD, as all n-step returns for $n > 1$ have a weight of 0. One of the appealing effects of using eligibility traces is that a single update allows states many steps behind a reward signal to receive credit. This propagates knowledge back at a faster rate, allowing for accelerated learning. Especially in environments where rewards are sparse and/or delayed, eligibility traces can help assign credit to past states and actions. Without traces, seeing a sparse reward will only propagate the value back by one step, which in turn needs to be sampled to send the value back a second step, and so on.

$R_t^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} R_t^{(n)} = (1 - \lambda) \sum_{i=1}^{\infty} \lambda^{i-1} \left( \sum_{j=0}^{i-1} \gamma^j r_{t+j} + \gamma^i V(s_{t+i}) \right)$   (5)

This way of viewing eligibility traces is called the forward view, as states are looking ahead at the rewards received in the future. The forward view is rarely used, as it requires a state to wait for the future to unfold before calculating an update, and requires memory to store the experience. There is an equivalent view, called the backward view, which allows us to calculate updates for every previous state as we take a single action. This requires no memory and lets us perform updates without having to wait for the future. However, this view has had limited success in the neural network setting, as it requires keeping a trace on each neuron of the network; these tend to be dense and heavily used at each step, resulting in noisy signals. For this reason, eligibility traces aren't heavily used with deep learning, despite their potential benefits.
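A short Python sketch may make Eqs. (4)-(5) concrete for a finite trajectory. Here we truncate the geometric weighting at the trajectory end and give the remaining mass to the last available n-step return (a common convention, and an assumption on our part rather than something the paper specifies at this point):

```python
def n_step_return(rewards, values, t, n, gamma):
    """Eq. (4): R_t^(n) = sum_{i<n} gamma^i r_{t+i} + gamma^n V(s_{t+n}).
    `values` must have one more entry than `rewards` (bootstrap at the end)."""
    ret = sum(gamma ** i * rewards[t + i] for i in range(n))
    return ret + gamma ** n * values[t + n]

def lambda_return(rewards, values, t, gamma, lam):
    """Eq. (5), truncated at the end of the trajectory: the weight that would
    go to returns beyond the last step is assigned to the final n-step return.
    With lam=0 this reduces to the TD target, with lam=1 to the MC target."""
    N = len(rewards) - t  # longest n-step return available from time t
    g = (1 - lam) * sum(lam ** (n - 1) * n_step_return(rewards, values, t, n, gamma)
                        for n in range(1, N))
    return g + lam ** (N - 1) * n_step_return(rewards, values, t, N, gamma)
```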
2.1.1 Q(λ)

Q(λ) is a variant of Q-learning where eligibility traces are used to calculate the TD error. As mentioned previously, the backward view of traces is traditionally used. A few versions of Q(λ) exist, but the most used one is Watkins's Q(λ). As Q-learning is off-policy, the sequence of actions in the past trajectory used to calculate the trace might be different from the actions that the current policy would take. In that case, one should not use the trajectory past the point where the actions differ. To handle such a case, Watkins's Q(λ) sets the trace to 0 if the action that the current policy would select is different from the one used in the past.

2.2 DEEP Q-NETWORKS

Mnih et al. (2015) introduced deep Q-networks (DQN), one of the first successful reinforcement learning algorithms to use deep learning for function approximation in a way general enough to be applicable to a variety of environments. Applying it to a set of Atari games, they used a convolutional neural network (CNN) which took as input the last four frames of the game, and output Q-values for each possible action.

Equation 6 shows the DQN cost function, where we are optimizing the parameters $\theta$. The parameters $\theta^-$ represent frozen Q-value weights which are updated at a chosen frequency:

$L(s_t, a_t | \theta) = \left( r_t + \gamma \max_{a'} Q(s_{t+1}, a' | \theta^-) - Q(s_t, a_t | \theta) \right)^2$   (6)

2.2.1 DEEP RECURRENT Q-NETWORKS

As introduced in Hausknecht & Stone (2015), deep recurrent Q-networks (DRQN) are a modification of DQN where single frames are passed through a CNN, which generates a feature vector that is then fed to an RNN, which finally outputs the Q-values. This architecture gives the agent a memory, allowing it to learn long-term temporal effects and handle partial observability, which is the case in many environments. The authors showed that randomly blanking out frames was difficult to overcome for DQN, but that DRQN learned to handle it without issue.

To train DRQN, they proposed two variants of experience replay. The first was to sample entire trajectories and run the RNN from end to end. However, this is very computationally demanding, as some trajectories can be over 10000 steps long. The second alternative was to sample sub-trajectories instead of single transitions. This is required as the RNN needs to fill its hidden state and to allow it to understand the temporal aspect of the data.

2.3 OPTIMIZERS

Stochastic gradient descent (SGD) is generally the algorithm used to optimize neural networks. However, some information is lost in the process, as past gradients might signal that a weight drastically needs to change, or that it is oscillating, requiring a decrease in learning rate. Adaptive SGD algorithms have been built to use this information.

RMSprop (Tieleman & Hinton (2012)) uses a geometric averaging over squared gradients, and divides the current gradient by its square root. To perform RMSprop, we first calculate the averaging as $g = \rho g + (1 - \rho) \nabla_\theta^2$ and then update the parameters $\theta \leftarrow \theta + \alpha \frac{\nabla_\theta}{\sqrt{g + \epsilon}}$.

DQN (Graves (2013)) introduced a variant of RMSprop where the gradient is instead divided by the standard deviation of the running average. We first calculate the running averages $m = \rho m + (1 - \rho) \nabla_\theta$ and $g = \rho g + (1 - \rho) \nabla_\theta^2$, and then update the parameters using $\theta \leftarrow \theta + \alpha \frac{\nabla_\theta}{\sqrt{g - m^2 + \epsilon}}$. In the rest of the paper, when mentioning RMSprop, we'll be referring to this version.

Finally, Kingma & Ba (2014) introduced Adam, which is essentially RMSprop coupled with Nesterov momentum, along with the running averages being corrected for bias. We have a term $\beta_i$ for the rate of momentum of each of the running averages. To calculate the update with Adam, we start by updating the averages $m = \beta_1 m + (1 - \beta_1) \nabla_\theta$ and $v = \beta_2 v + (1 - \beta_2) \nabla_\theta^2$, then correct their biases $\hat{m} = m / (1 - \beta_1^t)$ and $\hat{v} = v / (1 - \beta_2^t)$, and finally calculate the gradient update $\theta \leftarrow \theta + \alpha \frac{\hat{m}}{\sqrt{\hat{v}} + \epsilon}$.
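A minimal NumPy sketch of the two update rules as written above (we write them as descent steps on the loss, and the class and parameter names are ours; the default hyperparameters match those reported in Section 3):

```python
import numpy as np

class RMSpropDQN:
    """The DQN variant of RMSprop: divide by the running standard deviation."""
    def __init__(self, shape, lr=0.00025, rho=0.95, eps=0.01):
        self.lr, self.rho, self.eps = lr, rho, eps
        self.m, self.g = np.zeros(shape), np.zeros(shape)

    def step(self, theta, grad):
        self.m = self.rho * self.m + (1 - self.rho) * grad
        self.g = self.rho * self.g + (1 - self.rho) * grad ** 2
        return theta - self.lr * grad / np.sqrt(self.g - self.m ** 2 + self.eps)

class Adam:
    """Adam: bias-corrected first and second moment estimates."""
    def __init__(self, shape, lr=0.00025, b1=0.9, b2=0.999, eps=0.001):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.t = np.zeros(shape), np.zeros(shape), 0

    def step(self, theta, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return theta - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```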
To calculate the update with Adam, we startwith the updating the averages m=1m+ (11)r,v=2v+ (12)r2, the correct theirbiases ^m=m=(1t1),^v=v=(1t2)and finally calculate the gradient update +^mp^v+.3Under review as a conference paper at ICLR 2017Figure 1: This graph illustrates how a sample from experience replay is used in training. We use anumber of frames to fill the hidden state of the RNN. Then, for the states used for training, we havethe RNN output the Q-values. Finally, we calculate each n-step return and weight them accordingto, where the arrows represent the forward view of each trace. All states are passed though theCNN before entering the RNN.3 E XPERIMENTAL SETUPAs explained, the forward view of eligibility traces can be useful, but is computationally demandingin terms of memory and time. One must store all transitions and apply the neural network to eachstate in the trajectory. By using DRQN, experience replay is already part of the algorithm, whichremoves the memory requirement of the traces. Then, by training on sub-trajectories of data, thestates must be run through the RNN with all state values as the output, which eliminates the compu-tational cost. Finally, all that’s left to use eligibility traces is simply to calculate the weighted sumof the targets, which is very cheap to do.In this section we analyze the use of eligibility traces when training DRQN and try both RMSpropand Adam as optimizers. We only tested the algorithms on fully observable games as to comparethe learning capacities without the unfair advantage of having a memory, as would be the case onpartially observable environments.3.1 A RCHITECTUREWe tested the algorithms on two Atari 2600 games, part of the Arcade Learning Environment (Belle-mare et al. (2012)), Pong and Tennis. The architecture used is similar to the one used in Hausknecht& Stone (2015). The frames are converted to gray-scale and re-sized to 84x84. These are then fedto a CNN with the first layer being 32 8x8 filters and a stride of 4, followed by 64 4x4 filters with astride of 2, then by 64 3x3 filters with a stride of 1. The output of the CNN is then flattened beforebeing fed to a single dense layer of 512 output neurons, which is finally fed to an LSTM (Hochreiter& Schmidhuber (1997)) with 512 cells. We then have a last linear layer that takes the output ofthe recurrent layer to output the Q-values. All layers before the LSTM are activated using rectifiedlinear units (ReLU).As mentioned in subsection 2.2.1, we also altered experience replay to sample sub-trajectories. Weuse backprop through time (BPTT) to train the RNN, but only train on a sub-trajectory of experience.In runtime, the RNN will have had a large sequence of inputs in its hidden state, which can beproblematic if always trained with an empty hidden state. Like in Lample & Singh Chaplot (2016),we therefore sample a slightly longer length of trajectory and use the first mstates to fill the hiddenstate. In our experiments, we selected trajectory lengths of 32, where the first 10 states are used asfiller and the remaining 22 are used for the traces and TD costs. We used a batch size of 4.All experiments using eligibility traces use = 0:8. Furthermore, we use Watkins’s Q( ). To limitcomputation costs of using traces, we cut the trace off once it becomes too small. In our experiments,we choose the limit of 0.01, which allows the traces to affect 21 states ahead (when = 0:8). 
All experiments using eligibility traces use λ = 0.8, and we use Watkins's Q(λ). To limit the computational cost of using traces, we cut a trace off once its weight becomes too small. In our experiments, we chose a limit of 0.01, which allows the traces to affect up to 21 states ahead (when λ = 0.8, since 0.8²¹ ≈ 0.009 < 0.01). We calculate the trace for every state in the trajectory, except for the few at the beginning that are used to fill in the hidden state of the RNN.

When using RMSprop, we used a momentum of 0.95, an epsilon of 0.01 and a learning rate of 0.00025. When using Adam, we used a momentum of gradients of 0.9, a momentum of squared gradients of 0.999, an epsilon of 0.001 and a learning rate of 0.00025.

Testing phases are consistent across all models, with the score being the average over the games played during 125000 frames. We also use an ε of 0.05 for action selection during testing.

    Choose k as the number of trace steps and m as the number of RNN-filler steps
    Initialize weights θ and experience replay D; θ⁻ ← θ; s ← s₀
    repeat
        Initialize RNN hidden state to 0
        repeat
            Choose a according to the ε-greedy policy on Q(s, a | θ)
            Take action a in s; observe s′ and r
            Store (s, a, r, s′) in experience replay D
            Sample 4 sub-trajectories of m + k sequential transitions (s, a, r, s′) from D
            ŷ_t = r_t if s′_t is terminal, otherwise r_t + γ max_{a′} Q(s′_t, a′ | θ⁻)   (the 1-step return R⁽¹⁾_t)
            for each sampled transition t do
                λ_t = λ if a_t = argmax_a Q(s_t, a | θ), otherwise 0
            end
            for l from 0 to k − 1 do
                R̂_{t+l} = [Σ_{s=l}^{k} (Π_{i=l}^{s} λ_{t+i}) R_{t+s}^{(s−l+1)}] / [Σ_{s=l}^{k} Π_{i=l}^{s} λ_{t+i}]
            end
            Perform gradient descent on ∂(R̂ − Q(s, a | θ))² / ∂θ
            Every 10000 steps: θ⁻ ← θ
            s ← s′
        until s′ is terminal
    until training complete

Algorithm 1: Deep recurrent Q-networks with forward-view eligibility traces on Atari. The eligibility traces are calculated using the n-step return function R(n)_t for time-step t as described in section 2.1. A sketch of the trace-weighted target computation is given below.
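To make the weighted-return computation of Algorithm 1 concrete, here is a minimal NumPy sketch. This is our own illustration: the exact weighting convention had to be reconstructed and may differ slightly from the paper's, `boot_values[j]` stands for max_a Q(s′_j, a | θ⁻) with terminal states already masked to 0, and γ = 0.99 is assumed since the discount is not restated here.

    import numpy as np

    def n_step_returns(rewards, boot_values, gamma=0.99):
        # R[t, n-1] holds the n-step return
        # R^(n)_t = sum_{i<n} gamma^i r_{t+i} + gamma^n * boot_values[t+n-1].
        k = len(rewards)
        R = np.zeros((k, k))
        for t in range(k):
            acc = 0.0
            for n in range(1, k - t + 1):
                acc += gamma ** (n - 1) * rewards[t + n - 1]
                R[t, n - 1] = acc + gamma ** n * boot_values[t + n - 1]
        return R

    def lambda_targets(rewards, boot_values, lambdas, gamma=0.99, cutoff=0.01):
        # lambdas[j] is lambda (0.8) when the action taken at step j was greedy,
        # and 0 otherwise (Watkins's cutoff). Targets are normalized weighted
        # sums of the available n-step returns.
        k = len(rewards)
        R = n_step_returns(rewards, boot_values, gamma)
        targets = np.zeros(k)
        for t in range(k):
            weight, num, den = 1.0, 0.0, 0.0
            for n in range(1, k - t + 1):
                num += weight * R[t, n - 1]
                den += weight
                if t + n >= k:
                    break
                weight *= lambdas[t + n]   # extend the trace past step t+n
                if weight < cutoff:        # drop traces with negligible weight
                    break
            targets[t] = num / den
        return targets

The resulting `targets` play the role of R̂ in the squared-error loss of Algorithm 1.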
4 EXPERIMENTAL RESULTS

We describe experiments on two Atari games: Pong and Tennis. We chose Pong because it permits quick experimentation, and Tennis because it is one of the games that has proven difficult in all published results on Atari.

4.1 PONG

First, we tested an RNN model with both λ = 0 and λ = 0.8, trained with RMSprop. Figure 2 shows that the model without a trace (λ = 0) learned at the same rate as DQN, while the model with traces (λ = 0.8) learned substantially faster and with more stability, without exhibiting any epochs of depressed performance. This is probably due to the eligibility traces propagating rewards back by many steps in a single update. In Pong, when the agent hits the ball, it must wait several time-steps before the ball gets either to or past the opponent. Once this happens, the agent must assign the credit for the event back to the time when it hit the ball, and not to the actions performed after the ball had already left. The traces clearly help send this signal back faster.

Figure 2: Test scores on Pong when training models with RMSprop vs Adam.

We then tested the same models but using Adam as the optimizer instead of RMSprop. All models learn much faster in this setting. However, the model with no trace gains significantly more than the model with the trace. Our current intuition is that some hyper-parameters, such as the frozen network's update frequency, are limiting the rate at which the model can learn. Note also that the DQN model learns faster with Adam as the optimizer, but remains quite unstable in comparison with the recurrent net models.

Finally, the results in Table 1 show that both eligibility traces and Adam provide performance improvements. When training with RMSprop, the model with traces gets to near-optimal performance more than twice as quickly as the other models. With Adam, the model learns to be optimal in just 6 epochs.

                  RMSprop   Adam
    DQN                23     12
    RNN λ = 0          28      8
    RNN λ = 0.8        10      6

Table 1: Number of epochs before reaching 18 points in Pong. We chose 18 points as the threshold because it represents a near-optimal strategy. Testing is performed with a 5% ε-greedy policy, stopping the agent from achieving a perfect score (a sketch of this ε-greedy rule is given after Table 2).

4.2 TENNIS

The second Atari 2600 game we tested was Tennis. A match consists of only one set, which is won by the player who is the first to win 6 "games" (as in regular tennis). The score ranges from 24 to −24, given as the difference between the number of balls won by the two players.

As in Pong, we first tried an RNN trained with RMSprop and the standard learning rate of 0.00025, both with and without eligibility traces (again using λ = 0.8 and λ = 0). Figure 3 shows that both RNN models learned to get optimal scores after about 50 epochs. This is in contrast with DQN, which never seems able to pass the 0 threshold, with large fluctuations ranging from −24 to 0. After visually inspecting the games played in the testing phase, we noticed that the DQN agent gets stuck in a loop, where it exchanges the ball with the opponent until the timer runs out. In such a case, the agent minimizes the number of points scored against it, but never learns to beat the opponent. The score fluctuations depend on how few points the agent allows before entering the loop. We suspect that the agent gets stuck in this policy because the reward for trying to defeat the opponent is delayed, waiting for the ball to reach the opponent and get past it. Furthermore, the experiences of scoring a point are relatively sparse. Together, this makes it difficult to propagate the reward back to the action of hitting the ball correctly.

We also notice that the RNNs both with and without eligibility traces manage to learn a near-optimal policy without getting stuck in the bad policy. The RNN has the capacity to send the signal back to past states with BPTT, allowing it to do credit assignment implicitly, which might explain its ability to escape the bad policy. Remarkably, this is the only algorithm that succeeds in getting near-optimal scores on Tennis, out of all variants of DQN (Mnih et al. (2015), Munos et al. (2016), Wang et al. (2015), Mnih et al. (2016), Schaul et al. (2015)), which tend to get stuck in the policy of delaying. The model without traces learned at a faster pace than the one with traces, arriving at a score of 20 in 45 epochs as opposed to 62 for its counterpart. It is possible that the updates for the model with traces were smaller, due to the weighting of target values, indirectly leading to a lower learning rate. We also trained the models with RMSprop and a higher learning rate of 0.001. This led to the model with traces getting to 20 points in just 27 epochs, while the model without traces lost its ability to get optimal scores and never passed the 0 threshold.

Figure 3: Test scores on Tennis comparing RMSprop and Adam.

                  RMSprop lr=0.00025   RMSprop lr=0.001   Adam lr=0.00025
    DQN                  N/A                 N/A                N/A
    RNN λ = 0             45                 N/A                 19
    RNN λ = 0.8           62                  27                 13

Table 2: Number of epochs before reaching 20 points in Tennis. N/A indicates the inability to reach that level.
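As a small aside, the 5% ε-greedy testing policy referenced in the captions of Tables 1 and 2 can be sketched as follows. This is our own minimal illustration; `q_values` is assumed to be the vector of Q-values produced by the network for the current state.

    import random

    def epsilon_greedy(q_values, epsilon=0.05):
        # With probability epsilon pick a uniformly random action; otherwise
        # pick the greedy action. epsilon = 0.05 matches the testing policy.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])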
We then tried using Adam as the optimizer, with the original learning rate of 0.00025. Both RNN models learned substantially faster than with RMSprop, with the RNN with traces getting to near-optimal performance in just 13 epochs. With Adam, the gradient from a positive TD error is stored in the momentum part of the update for quite some time. Once in the momentum term, the gradient contributes to many updates, which makes it enough to overtake the safe strategy. We also notice that the model with traces was much more stable than its counterpart. The model without traces fell back to the policy of delaying the game on two occasions, after having learned to beat the opponent. Finally, we trained DQN with Adam, but the model behaved the same way as DQN trained with RMSprop.

5 DISCUSSION AND CONCLUSION

In this paper, we analyzed the effects of using eligibility traces and different optimizers. We showed that eligibility traces can improve and stabilize learning, and that using Adam can strongly accelerate learning.

As shown in the Pong results, the model using eligibility traces did not gain much performance from using Adam. One possible cause is the frozen network. While it has a stabilizing effect in DQN, by stopping policies from drastically changing from a single update, it also stops newly learned values from being propagated back. Double DQN seems to partially work around this issue by allowing the policy of the next state to change while keeping the values frozen. In future experiments, we should consider eliminating the frozen network or increasing its update frequency. It would also be interesting to reduce the size of experience replay, since with increased learning speed, old observations can become too off-policy and barely be used in eligibility traces.
ryolYUbVe
Interesting questions but very limited experiments
3: Clear rejection
This paper investigates the use of eligibility traces with recurrent DQN agents. As in other recent work on deep RL, the forward view of Sutton and Barto is used to make eligibility traces practical to use with neural networks. Experiments on the Atari games Pong and Tennis show that traces work better than standard Q-learning. The paper is well written and the use of traces in deep RL is indeed underexplored, but the experiments in the paper are too limited and do not answer the most interesting questions. As pointed out in the questions, n-step returns have been shown to work better than 1-step returns both in the classical RL literature and more recently with deep networks. [1] shows that using n-step returns in the forward view with neural networks leads to big improvements on both Atari and TORCS. Their n-step Q-learning method also combines returns of different length in expectation, while traces do this explicitly. This paper does not compare traces with n-step returns and simply shows that traces used in the forward view help on two Atari games. This is not a very significant result. It would be much more interesting to see whether traces improve on what is already known to work well with neural networks. The other claimed contribution of the paper is showing the strong effect of optimization. As with traces, I find it hard to draw any conclusions from experiments on two games with fixed hyperparameter settings. This has already been demonstrated with much more thorough experiments in other papers. One could argue that these experiments show the importance of hyperparameter values and not of the optimization algorithm itself. Without tuning the optimization hyperparameters it's hard to claim anything about the relative merits of the methods. [1] "Asynchronous Methods for Deep Reinforcement Learning", ICML 2016.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Sy6yFzzEe
review
4: Ok but not good enough - rejection
This paper combines DRQN with eligibility traces, and also experiments with the Adam optimizer for optimizing the Q-network. This direction is worth exploring, and the experiments demonstrate the benefit of using eligibility traces and Adam on two Atari games. The methods themselves are not novel. Thus, the primary contributions are (1) applying eligibility traces and Adam to DRQN and (2) the experimental evaluation. The paper is well-written and easy to understand. The experiments provide quantitative results and detailed qualitative intuition for how and why the methods perform as they do. However, with only two Atari games in the results, it is difficult to tell how well the method would perform more generally. Showing results on several more games and/or other domains would significantly improve the paper. Showing error bars from multiple random seeds would also improve the paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Bya5vnbVg
Review
4: Ok but not good enough - rejection
The paper presents deep RL with eligibility traces. The authors combine DRQN with eligibility traces for improved training. The new algorithm is evaluated on two problems, with a single set of hyper-parameters, and compared with DQN. The topic is very interesting. Adding eligibility traces to RL updates is not novel, but this family of algorithms has not been explored for deep RL. The paper is written clearly, and the related literature is well-covered. More experiments would make this promising paper much stronger. As this is an investigative, experimental paper, it is crucial for it to contain a wider range of problems, different hyper-parameter settings, and comparisons with vanilla DRQN, DeepMind's DQN implementation, as well as other state-of-the-art methods.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HkuVu3ige
ICLR.cc/2017/conference
2017
On orthogonality and learning recurrent networks with long term dependencies
["Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal"]
It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.
["Deep learning"]
ABSTRACT

It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.

1 INTRODUCTION

The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass, and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly.

Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain, causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction, which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n × n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation.

The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent.
To produce a full-capacity parameterization for unitary matrices, they use some insights from Tagare (2011), combining the use of a canonical inner product and Cayley transformations. Their experimental work indicates that full-capacity unitary RNN models can solve the copy memory problem, whereas both LSTM networks and restricted-capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length (T = 2000).

In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it.

1.1 VANISHING AND EXPLODING GRADIENTS

The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network. A neural network with n hidden layers has pre-activations

a_i(h_{i-1}) = W_i h_{i-1} + b_i, \quad i \in \{2, \dots, n\}   (1)

For notational convenience, we combine parameters W_i and b_i to form an affine matrix \theta. We can see that for some loss function L at layer n, the derivative with respect to parameters \theta_i is:

\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{n+1}}{\partial \theta_i} \frac{\partial L}{\partial a_{n+1}}   (2)

The partial derivatives for the pre-activations can be decomposed as follows:

\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \frac{\partial h_i}{\partial a_i} \frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i} D_i W_{i+1} \;\Rightarrow\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1},   (3)

where D_i is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer i+1 with respect to the pre-activation inputs. Typically, D is diagonal. Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products:

\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \prod_{j=i}^{n} (D_j W_{j+1}) \frac{\partial L}{\partial a_{n+1}}   (4)

In (Pascanu et al., 2013), it is shown that the 2-norm of \frac{\partial a_{t+1}}{\partial a_t} is bounded by the product of the norms of the non-linearity's Jacobian and transition matrix at time t (layer i), as follows:

\left\| \frac{\partial a_{t+1}}{\partial a_t} \right\| \le \|D_t\| \, \|W_t\| \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \quad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R},   (5)

where \lambda_{D_t} and \lambda_{W_t} are the largest singular values of the non-linearity's Jacobian D_t and the transition matrix W_t. In RNNs, W_t is shared across time and can be simply denoted as W.

Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have \eta_t \le 1 at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. The parameterization may be conditioned by placing appropriate constraints on W.
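The bound in equation 5 is easy to reproduce numerically. The following sketch (illustrative, not from the paper) tracks the norm of a back-propagated unit vector under repeated multiplication by (D_t W)^T, with W rescaled to a chosen largest singular value and D_t a mildly contractive random diagonal Jacobian standing in for a saturating non-linearity's derivative.

```python
import numpy as np

rng = np.random.default_rng(0)

def backprop_norm(sigma_w, n=64, T=100):
    """Norm of a unit gradient vector after T backward steps through (D_t W)^T.

    W is a random orthogonal matrix rescaled so every singular value equals
    sigma_w; D_t is a random diagonal Jacobian with entries in [0.9, 1.0),
    a mildly contractive stand-in for the derivative of a tanh non-linearity.
    """
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    W = sigma_w * Q
    g = rng.normal(size=n)
    g /= np.linalg.norm(g)
    for _ in range(T):
        D = np.diag(rng.uniform(0.9, 1.0, size=n))
        g = (D @ W).T @ g
    return np.linalg.norm(g)

for sigma_w in (0.9, 1.0, 1.1):
    print(sigma_w, backprop_norm(sigma_w))
# sigma_w < 1 drives the norm toward zero; sigma_w > 1 can let it grow
# exponentially despite the contractive Jacobian.
```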
It is worth keeping in mind that the Jacobian D is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving to expansive, and applies the same gain on the forward signal as on the back-propagated gradient signal.

2 OUR APPROACH

Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm, which is given by

\|W\|_2 = \max_x \frac{\|Wx\|}{\|x\|}.   (6)

By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form:

\sum_i \|W_i^T W_i - I\|^2.   (7)

However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values, which are real and positive by definition. We have

W = U S V^T.   (8)

Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value.

We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy U^T U = I and V^T V = I respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values.

During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian G with respect to the objective function, an update is performed as follows:

A = G M^T - M G^T,
M_{new} = \left(I + \frac{\eta}{2} A\right)^{-1} \left(I - \frac{\eta}{2} A\right) M,   (9)

where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform, and \eta is the learning rate.

While the update rule in (9) allows us to maintain an orthogonal hidden-to-hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix W in factorized form, as a singular value decomposition with orthogonal bases U and V updated by geodesic gradient descent using the Cayley transform approach above.

If W is an orthogonal matrix, the singular values in the diagonal matrix S are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation.
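Before detailing that margin, here is a plain-NumPy sketch of the two ingredients: the Cayley-transform geodesic step of equation 9 and the assembly of W = USV^T with a sigmoidally squashed spectrum (the parameterization defined next, in equation 10). The names `cayley_update`, `margin_spectrum`, and `eta` are illustrative; this is not the authors' Theano implementation.

```python
import numpy as np

def cayley_update(M, G, eta):
    """Equation 9: one geodesic step that keeps M orthogonal.

    M is the current orthogonal factor (U or V), G is the Euclidean
    gradient dL/dM, and eta is the learning rate.
    """
    A = G @ M.T - M @ G.T                  # skew-symmetric by construction
    I = np.eye(M.shape[0])
    C = np.linalg.solve(I + (eta / 2.0) * A, I - (eta / 2.0) * A)
    return C @ M                           # orthogonal times orthogonal stays orthogonal

def margin_spectrum(p, m):
    """Equation 10 (defined below): squash free parameters p into [1 - m, 1 + m]."""
    return 2.0 * m * (1.0 / (1.0 + np.exp(-p)) - 0.5) + 1.0

# Assemble the transition matrix from its factors.
rng = np.random.default_rng(0)
n = 8
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
p = rng.normal(size=n)                     # unconstrained spectrum parameters
W = U @ np.diag(margin_spectrum(p, m=0.1)) @ V.T
print(np.linalg.svd(W, compute_uv=False))  # all singular values in [0.9, 1.1]
```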
Specifically, we define a margin m around 1 within which the singular values must lie. This is achieved with the parameterization

s_i = 2m(\sigma(p_i) - 0.5) + 1, \quad s_i \in \{\mathrm{diag}(S)\}, \; m \in [0, 1].   (10)

The singular values are thus restricted to the range [1 - m, 1 + m] and the underlying parameters p_i are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values: they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value's progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering equation 10, the gradient backpropagation of some loss L toward parameters p_i is found as

\frac{dL}{dp_i} = \frac{ds_i}{dp_i} \frac{dL}{ds_i} = 2m \frac{d\sigma(p_i)}{dp_i} \frac{dL}{ds_i}.   (11)

From (11), it can be seen that the magnitude of the update step for p_i is scaled by the margin hyperparameter m. This means, for example, that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by 2m.

This margin formulation both guarantees that singular values lie within a well defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of U and V and impose a regularization term corresponding to a mean one Gaussian prior on these singular values. This encourages the weight matrix W to be norm preserving, with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below.

3 EXPERIMENTS

In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden-to-hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful, as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993).

The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence. We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol a_i \in \{a_1, \dots, a_p\} out of p = 8 possible symbols.
This sub-sequence is followed by T - 1 elements of the blank category a_0, which is terminated at step T by a delimiter symbol a_{p+1}, and then 10 more elements of the blank category. The network must learn to remember the initial 10-element sequence for T time steps and output it after receiving the delimiter symbol.

The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T. The sequence is composed of T values sampled from a uniform distribution in the range [0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range [0, T/2 - 1] and the second in the range [T/2, T - 1], where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum.

In the sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task uses a simple flattening of the image matrices; the harder variant includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer distance dependencies between pixels that must be interpreted by the classification model.

The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. Our experiments use two subsets of the data: in the first, we use 23% of the data, with strings of up to 75 characters; in the second, we include over 99% of the dataset, picking strings of up to 300 characters.

3.1 LOOSENING HARD ORTHOGONALITY CONSTRAINTS

In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden-to-hidden transition matrix.

In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50 and, for generated data (the copy and adding tasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments, although it may not be required, and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden-to-hidden matrix factorization as in (8), using geodesic gradient descent on the bases (learning rate 10^{-6}) and RMSprop on the other parameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients at magnitude 100. The neural network code was built on the Theano framework (Theano Development Team, 2016).
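As a brief aside before the remaining configuration details, the copy-task data described above can be generated in a few lines. This sketch follows the description given earlier (T + 20 steps, 10 symbols to copy, p = 8 symbol categories, blank category 0, delimiter category p + 1), though the authors' exact encoding may differ.

```python
import numpy as np

def make_copy_batch(batch_size, T, p=8, copy_len=10, seed=None):
    """Inputs of length T + 2*copy_len: [sequence | blanks | delimiter | blanks].

    Categories: 1..p are symbols, 0 is blank, p+1 is the delimiter.
    Targets are blank everywhere except the last copy_len steps, which must
    reproduce the initial sequence.
    """
    rng = np.random.default_rng(seed)
    seq = rng.integers(1, p + 1, size=(batch_size, copy_len))
    L = T + 2 * copy_len
    x = np.zeros((batch_size, L), dtype=np.int64)
    y = np.zeros((batch_size, L), dtype=np.int64)
    x[:, :copy_len] = seq
    x[:, T + copy_len - 1] = p + 1   # delimiter after T - 1 blanks
    y[:, -copy_len:] = seq           # copy must be emitted after the delimiter
    return x, y

x, y = make_copy_batch(batch_size=50, T=200)
print(x.shape, y.shape)              # (50, 220) (50, 220)
```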
When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors, in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy.

3.1.1 CONVERGENCE ON SYNTHETIC MEMORY TASKS

For different sequence lengths T of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins m. For the copy task, we used Elman networks without a transition non-linearity, as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix.

As shown in Figure 1, we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths (T = 200, T = 500, T = 1000, T = 10000); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length T = 10000, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization, but loosening orthogonality constraints can reduce the stability of signal propagation through the network.

For the adding task, we trained a factorized RNN on T = 1000 length sequences, using a ReLU activation function on the hidden-to-hidden transition matrix. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m = 0, m = 1, m = 10, m = 100, and no margin, we find that the models with the purely orthogonal (m = 0) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs.

[Figure 1: Accuracy curves on the copy task for sequence lengths of (from left to right) T=200, T=500, T=1000, T=10000, given different spectral margins (m = 0, 0.001, 0.01, 0.1, 1, and no margin). Convergence speed increases with margin size; however, large margin sizes are ineffective at longer sequence lengths (T=10000, right).]

margin   initialization   accuracy
0        orthogonal       77.18
0.001    orthogonal       79.26
0.01     orthogonal       85.47
0.1      orthogonal       94.10
1        orthogonal       93.84
none     orthogonal       93.24
none     Glorot normal    66.71
none     identity         53.53
LSTM                      97.30

Table 1: Ordered sequential MNIST classification with different margin sizes and an LSTM.

margin   initialization   accuracy
0        orthogonal       83.56
0.001    orthogonal       84.59
0.01     orthogonal       89.63
0.1      orthogonal       91.44
1        orthogonal       90.83
none     orthogonal       90.51
none     Glorot normal    79.33
none     identity         42.72
LSTM                      92.62

Table 2: Permuted sequential MNIST classification with different margin sizes and an LSTM.

3.1.2 PERFORMANCE ON REAL DATA

Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1.
The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1).

We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs, with geodesic gradient descent on the bases (learning rate 10^{-6}) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity, and clipping gradients at magnitude 30.

margin   initialization   bpc    accuracy
0        orthogonal       2.16   55.31
0.01     orthogonal       2.16   55.33
0.1      orthogonal       2.12   55.37
1        orthogonal       2.06   57.07
100      orthogonal       2.04   57.51
none     orthogonal       2.06   57.38
none     Glorot normal    2.08   57.37
none     identity         2.25   53.83

Table 3: Character prediction on PTB sentences of up to 75 characters, using different margins.

margin   initialization   bpc    accuracy
0        orthogonal       2.20   54.88
0.01     orthogonal       2.20   54.83
0.1      orthogonal       2.24   54.10
1        orthogonal       2.36   51.12
100      orthogonal       2.36   51.20
none     orthogonal       2.34   51.30
none     Glorot normal    2.34   51.04
none     identity         2.68   45.35

Table 4: Character prediction on PTB sentences of up to 300 characters, using different margins.

Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on both the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden-to-hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task, which presents longer distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model's representational ability. Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST.
It is not surprising that orthogonality is useful for the MNIST tasks, since they depend on long distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short distance signal propagation is possible. Thus it is possible that the RNN is first learning very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer distance dependencies.

3.1.3 SPECTRAL AND GRADIENT EVOLUTION

It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.

[Figure 2: The norm of the gradient of the loss from the last time step with respect to the hidden units at a given time step, for a length-220 RNN over 1000 update iterations, for different margins. Iterations are along the abscissa and time steps are denoted along the ordinate. The first column margins are: 0, 0.001, 0.01. The second column margins are: 0.1, 1, no margin. Gradient norms are normalized across the time dimension.]

Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden-to-hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin, for tasks that require long term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum).

We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3.
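The spectrum statistics summarized in Figure 3 are cheap to log during training; a minimal sketch, assuming the composed transition matrix is available as a NumPy array:

```python
import numpy as np

def spectrum_summary(W):
    """Summary of the singular spectrum of a transition matrix, matching the
    quantities plotted in Figure 3: minimum, mean, standard deviation, maximum."""
    s = np.linalg.svd(W, compute_uv=False)
    return s.min(), s.mean(), s.std(), s.max()
```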
Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments, a tanh transition nonlinearity was used, which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, (Saxe et al., 2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is to be expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well for the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.

[Figure 3: Singular value evolution on the permuted sequential MNIST task for factorized RNNs with different margin sizes. Margins are, from left to right: top row: 0.001, 0.01, 0.1; bottom row: 1, no margin, no margin. The singular value distributions are summarized with the mean (green line, center) and standard deviation (green shading about mean), minimum (red, bottom) and maximum (blue, top) values. All models are initialized with orthogonal hidden-to-hidden transition matrices, except for the model on the bottom right, where Glorot normal initialization is used.]

3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS

Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden-to-hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form \|W^T W - I\|_2^2. This is similar to the orthogonality penalty introduced by Henaff et al. (2016).
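A hedged sketch of this penalty and its analytic gradient, reading the \|\cdot\|_2^2 notation as a squared Frobenius norm; `lam` stands for the regularization strength varied in the experiments below, and in a real training loop this term would simply be added to the task loss.

```python
import numpy as np

def orthogonality_penalty(W, lam=1.0):
    """lam * ||W^T W - I||^2 (squared Frobenius norm), pushing W toward orthogonality."""
    R = W.T @ W - np.eye(W.shape[1])
    return lam * np.sum(R ** 2)

def orthogonality_penalty_grad(W, lam=1.0):
    """Analytic gradient: d/dW of lam * ||W^T W - I||_F^2 = 4 * lam * W (W^T W - I)."""
    R = W.T @ W - np.eye(W.shape[1])
    return 4.0 * lam * W @ R
```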
In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to 10^{-5}. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed.

[Figure 4: Accuracy curves on the copy task for different strengths of soft orthogonality constraints. A soft orthogonality constraint is applied to the transition matrix W for a regular RNN on T = 200 (left) and the same is applied on a factorized RNN on T = 500 (left center). Another constraint, in the form of a mean one Gaussian prior on the singular values, is applied to a factorized RNN on T = 200 (right center); the same is applied to a factorized RNN with a sigmoidal parameterization of the spectrum, using a large margin of 1 (right). Loosening orthogonality speeds convergence.]

The second approach we explore replaces the sigmoidal margin parameterization with a mean one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate 10^{-6}) to keep U and V orthogonal, and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a 10^{-5} learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training.

4 CONCLUSIONS

We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization.

ACKNOWLEDGMENTS

We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research.
rypQ3tJ4e
This paper investigates the issue of orthogonality of the transfer weight matrix in RNNs and suggests an optimization formulation on the manifold of (semi)orthogonal matrices.
5: Marginally below acceptance threshold
Vanishing and exploding gradients make the optimization of RNNs very challenging. The issue becomes worse on tasks with long term dependencies that require longer RNNs. One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal. This paper investigates the role of orthogonality in the optimization and learning, which is very important. The writing is sound and clear and the arguments are easy to follow. The suggested optimization method is very interesting. The main shortcoming of this paper is the experiments, which I find very important, and I hope the authors can update the experiment section significantly. Below I mention some comments on the experiment section:
1- I think the experiments are not enough. At the very least, report the result on the adding problem and the language modeling task on Penn Treebank.
2- I understand that the copying task becomes difficult with a non-linearity. However, removing the non-linearity makes the optimization very different and therefore it is very hard to conclude anything from the results on the copying task.
3- I was not able to find the number of hidden units used for RNNs in different tasks.
4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with SGD, and mention the NN package you have used.
5- The results in Table 1 and Table 2 might also suggest that orthogonality is not really helpful, since even without a margin the numbers are very close compared to the case when you find the optimal margin. Am I right?
6- What do we learn from Figure 2? It is left without any discussion.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HkuVu3ige
ICLR.cc/2017/conference
2017
On orthogonality and learning recurrent networks with long term dependencies
["Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal"]
It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.
["Deep learning"]
ABSTRACTIt is well known that it is challenging to train deep neural networks and recur-rent neural networks for tasks that exhibit long term dependencies. The vanishingor exploding gradient problem is a well known issue associated with these chal-lenges. One approach to addressing vanishing and exploding gradients is to useeither soft or hard constraints on weight matrices so as to encourage or enforce or-thogonality. Orthogonal matrices preserve gradient norm during backpropagationand can therefore be a desirable property; however, we find that hard constraintson orthogonality can negatively affect the speed of convergence and model per-formance. This paper explores the issues of optimization convergence, speed andgradient stability using a variety of different methods for encouraging or enforcingorthogonality. In particular we propose a weight matrix factorization and parame-terization strategy through which we can bound matrix norms and therein controlthe degree of expansivity induced during backpropagation.1 I NTRODUCTIONThe depth of deep neural networks confers representational power, but also makes model optimiza-tion more challenging. Training deep networks with gradient descent based methods is known to bedifficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmid-huber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al.,2013) or introducing an L2orL1weight norm penalty. The latter has the effect of bounding thespectral radius of the linear transformations, thus limiting the maximal gain across the transforma-tion. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directlyby penalizing differences in successive norm pairs in the forward pass and Pascanu et al. (2013)propose to penalize successive gradient norm pairs in the backward pass. These regularizers affectthe network parameterization with respect to the data instead of penalizing weights directly.Both expansivity and contractivity of linear transformations can also be limited by more tightlybounding their spectra. By limiting the transformations to be orthogonal, their singular spectra arelimited to unitary gain causing the transformations to be norm-preserving. Le et al. (2015) andHenaff et al. (2016) have respectively shown that identity initialization and orthogonal initializationcan be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrentneural network (RNN) models with transformations that are unitary by construction which theyachieved by composing multiple basic unitary transformations. The resulting transformations, forsome n-dimensional input, cover only some subset of possible nnunitary matrices but appearto perform well on simple tasks and have the benefit of having low complexity in memory andcomputation.The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At amuch higher computational cost, gradient descent optimization directly along this manifold can bedone via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) hasproposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradientdescent. 
To produce a full-capacity parameterization for unitary matrices they use some insights1Under review as a conference paper at ICLR 2017from Tagare (2011), combining the use of a canonical inner products and Cayley transformations.Their experimental work indicates that full capacity unitary RNN models can solve the copy memoryproblem whereas both LSTM networks and restricted capacity unitary RNN models having similarcomplexity appear unable to solve the task for a longer sequence length ( T= 2000).In contrast, here we explore the optimization of real valued matrices within a configurable marginabout the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model’srepresentational power, hindering its performance, and may make optimization more difficult. Weexplore this hypothesis empirically by employing a factorization technique that allows us to limit thedegree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simulta-neously update the singular spectra of our matrices along Euclidean steps, allowing optimization tostep away from the manifold while still curving about it.1.1 V ANISHING AND EXPLODING GRADIENTSThe issue of vanishing and exploding gradients as it pertains to the parameterization of neural net-works can be illuminated by looking at the gradient back-propagation chain through a network.A neural network with nhidden layers has pre-activationsai(hi1) =Wihi1+bi; i2f2;;ng (1)For notational convenience, we combine parameters Wiandbito form an affine matrix . We cansee that for some loss function Lat layer n, the derivative with respect to parameters iis:@L@i=@an+1@i@L@an+1(2)The partial derivatives for the pre-activations can be decomposed as follows:@ai+1@i=@ai@i@hi@ai@ai+1@hi=@ai@iDiWi+1!@ai+1@ai=DiWi+1;(3)where Diis the Jacobian corresponding to the activation function, containing partial derivatives ofthe hidden units at layer i+1 with respect to the pre-activation inputs. Typically, Dis diagonal.Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain ofmatrix products:@L@i=@ai@inYj=i(DjWj+1)@L@an+1(4)In (Pascanu et al., 2013), it is shown that the 2-norm of@ai+1@aiis bounded by the product of thenorms of the non-linearity’s Jacobian and transition matrix at time t(layer i), as follows:@at+1@atjjDtjjjjWtjjDtWt=t;Dt;Wt2R:(5)whereDtandWtare the largest singular values of the non-linearity’s Jacobian Dtand the tran-sition matrix Wt. In RNNs, Wtis shared across time and can be simply denoted as W.Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of eachlayer’s linear transformation Wand the gain of the Jacobian D. The gain caused by each layeris magnified across all time steps or layers. It is easy to have extreme amplification in a recurrentneural network where Wis shared across time steps and a non-unitary gain in Wis amplifiedexponentially. The phenomena of extreme growth or contraction of the gradient across time steps orlayers are known as the exploding and the vanishing gradient problems, respectively. It is sufficientfor RNNs to have t1 at each time tto enable the possibility of vanishing gradients, typicallyfor some large number of time steps T. The rate at which a gradient (or forward signal) vanishes2Under review as a conference paper at ICLR 2017depends on both the parameterization of the model and on the input data. The parameterizationmay be conditioned by placing appropriate constraints on W. 
It is worth keeping in mind that theJacobian Dis typically contractive, thus tending to be norm-reducing) and is also data-dependent,whereas Wcan vary from being contractive to norm-preserving, to expansive and applies the samegain on the forward signal as on the back-propagated gradient signal.2 O UR APPROACHVanishing and exploding gradients can be controlled to a large extent by controlling the maximumand minimum gain ofW. The maximum gain of a matrix Wis given by the spectral norm whichis given byjjWjj2= max"jjWxjjjjxjj#: (6)By keeping our weight matrix Wclose to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is alsoone). One way to achieve this is via a simple soft constraint or regularization term of the form:XijjWTiWiIjj2: (7)However, it is possible to formulate a more direct parameterization or factorization for Wwhich per-mits hard bounds on the amount of expansion and contraction induced by W. This can be achievedby simply parameterizing Waccording to its singular value decomposition, which consists of thecomposition of orthogonal basis matrices UandVwith a diagonal spectral matrix Scontaining thesingular values which are real and positive by definition. We haveW=USVT: (8)Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, thisdecomposition allows us to control the maximum gain or expansivity of the weight matrix by con-trolling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity ofa matrix can be obtained from the minimum singular value.We can keep the bases UandVorthogonal via geodesic gradient descent along the set of weightsthat satisfy UTU=IandVTV=Irespectively. The submanifolds that satisfy these constraintsare called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss ourconstruction for bounding the singular values.During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrixM, i.e. where M=U,M=VorM=Wif so desired, we employ a Cayley transformationof the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005)and Tagare (2011). Given an orthogonally-initialized parameter matrix Mand its Jacobian, Gwithrespect to the objective function, an update is performed as follows:A=GMTMGTMnew=M+ (I+2A)1(I2A);(9)where Ais a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix)which is mapped to an orthogonal matrix via a Cayley transform and is the learning rate.While the update rule in (9) allows us to maintain an orthogonal hidden to hidden transition matrixWif desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. Assuch, we parameterize the transition matrix Win factorized form, as a singular value decompositionwith orthogonal bases UandVupdated by geodesic gradient descent using the Cayley transformapproach above.IfWis an orthogonal matrix, the singular values in the diagonal matrix Sare all equal to one.However, in our formulation we allow these singular values to deviate from one and employ asigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of3Under review as a conference paper at ICLR 2017deviation. 
Specifically, we define a margin maround 1 within which the singular values must lie.This is achieved with the parameterizationsi= 2m((pi)0:5) + 1; s i2fdiag(S)g; m2[0;1]: (10)The singular values are thus restricted to the range [1m;1+m]and the underlying parameterspiare updated freely via stochastic gradient descent. Note that this parameterization strategy alsohas implications on the step sizes that gradient descent based optimization will take when updatingthe singular values – they tend to be smaller compared to models with no margin constraining theirvalues. Specifically, a singular value’s progression toward a margin is slowed the closer it is to themargin. The sigmoidal parameterization can also impart another effect on the step size along thespectrum which needs to be accounted for. Considering 10, the gradient backpropagation of somelossLtoward parameters piis found asdLdpi=dsidpidLdsi= 2md(pi)dpidLdsi: (11)From (11), it can be seen that the magnitude of the update step for piis scaled by the marginhyperparameter m. This means for example that for margins less than one, the effective learningrate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learningrate along the spectrum to be independent of the margin by renormalizing it by 2m.This margin formulation both guarantees singular values lie within a well defined range and slowsdeviation from orthogonality. Alternatively, one could enforce the orthogonality of UandVandimpose a regularization term corresponding to a mean one Gaussian prior on these singular values.This encourages the weight matrix Wto be norm preserving with a controllable strength equivalentto the variance of the Gaussian. We also explore this approach further below.3 E XPERIMENTSIn this section, we explore hard and soft orthogonality constraints on factorized weight matricesfor recurrent neural network hidden to hidden transitions. With hard orthogonality constraints onUandV, we investigate the effect of widening the spectral margin or bounds on convergenceand performance. Loosening these bounds allows increasingly larger margins within which thetransition matrix Wcan deviate from orthogonality. We confirm that orthogonal initialization isuseful as noted in Henaff et al. (2016), and we show that although strict orthogonality guaranteesstable gradient norm, loosening orthogonality constraints can increase the rate of gradient descentconvergence. We begin our analyses on tasks that are designed to stress memory: a sequence copyingtask and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on realdata that require models to capture long-range dependencies: digit classification based on sequentialand permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basiclanguage modeling task using the Penn Treebank dataset (Marcus et al., 1993).The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic bench-marks with pathologically hard long distance dependencies that require long-term memory in mod-els. The copy task consists of an input sequence that must be remembered by the network, followedby a series of blank inputs terminated by a delimiter that denotes the point at which the network mustbegin to output a copy of the initial sequence. We use an input sequence of T+ 20 elements thatbegins with a sub-sequence of 10 elements to copy, each containing a symbol ai2fa1;:::;apgoutofp=8possible symbols. 
This sub-sequence is followed by T1elements of the blank categorya0which is terminated at step Tby a delimiter symbol ap+1and 10 more elements of the blankcategory. The network must learn to remember the initial 10 element sequence for Ttime steps andoutput it after receiving the delimiter symbol.The goal of the adding task is to add two numbers together after a long delay. Each number israndomly picked at a unique position in a sequence of length T. The sequence is composed ofTvalues sampled from a uniform distribution in the range [0;1), with each value paired with anindicator value that identifies the value as one of the two numbers to remember (marked 1) or as avalue to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first inthe range [0;T21]and the second in the range [T2;T1], where 0 marks the first element. Thenetwork must learn to identify and remember the two numbers and output their sum.4Under review as a conference paper at ICLR 2017The sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that canbe traversed sequentially by a recurrent neural network. The goal is to classify the digit based onthe sequential input of pixels. The simple variant of this task is with a simple flattening of the imagematrices; the harder variant of this task includes a random permutation of the pixels in the inputvector that is determined once for an experiment. The latter formulation introduces longer distancedependencies between pixels that must be interpreted by the classification model.The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of En-glish sentences, commonly used for benchmarking language models. We employ a sequential char-acter prediction task: given a sentence, a recurrent neural network must predict the next character ateach step, from left to right. We use input sequences of variable length, with each sequence contain-ing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase),numbers, common punctuation, and an unknown character placeholder. In our experiments on twosubsets of the data: in the first, we first use 23% of the data with strings with up to 75 characters andin the second we include over 99% of the dataset, picking strings with up to 300 characters.3.1 L OOSENING HARD ORTHOGONALITY CONSTRAINTSIn this section, we experimentally explore the effect of loosening hard orthogonality constraintsthrough loosening the spectral margin defined above for the hidden to hidden transition matrix.In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesicgradient descent. We used minibatches of size 50 and for generated data (the copy and addingtasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clippingat magnitude 100 (unless stated otherwise) in all of our RNN experiments although it may not berequired and we consistently applied a small weight decay of 0.0001. Unless otherwise specified,we trained all simple recurrent neural networks with the hidden to hidden matrix factorization asin (8) using geodesic gradient descent on the bases (learning rate 106) and RMSprop on the otherparameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients of 100magnitude. The neural network code was built on the Theano framework (Theano DevelopmentTeam, 2016). 
When parameterizing a matrix in factorized form, we apply the weight decay on thecomposite matrix rather than on the factors in order to be consistent across experiments. For MNISTand PTB, test set metrics were computed based on the parameterization that gave the best validationset accuracy.3.1.1 C ONVERGENCE ON SYNTHETIC MEMORY TASKSFor different sequence lengths Tof the copy and adding tasks, we trained a factorized RNN with 128hidden units and various spectral margins m. For the copy task, we used Elman networks withouta transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of anon-linearity on the copy task in the Appendix.As shown in Figure 1 we see an increase in the rate of convergence as we increase the spectralmargin. This observation generally holds across the tested sequence lengths ( T= 200 ,T= 500 ,T= 1000 ,T= 10000 ); however, large spectral margins hinder convergence on extremely longsequence lengths. At sequence length T= 10000 , parameterizations with spectral margins largerthan 0.001 converge slower than when using a margin of 0.001. In addition, the experiment withouta margin failed to converge on the longest sequence length. This follows the expected pattern wherestepping away from the Stiefel manifold may help with gradient descent optimization but looseningorthogonality constraints can reduce the stability of signal propagation through the network.For the adding task, we trained a factorized RNN on T= 1000 length sequences, using a ReLUactivation function on the hidden to hidden transition matrix. The mean squared error (MSE) isshown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m= 0,m= 1,m= 10 ,m= 100 , and no margin, we find that the models with the purely orthogonal(m= 0) and the unconstrained (no margin) transition matrices failed to begin converging beyondbaseline MSE within 2000 epochs.5Under review as a conference paper at ICLR 20170 20 40 60 80 100number of epochs0.00.20.40.60.81.0accuracy020406080100120140160number of epochs0.00.20.40.60.81.0accuracy0 50 100 150 200number of epochs0.00.20.40.60.81.0accuracy0 50 100 150 200 250 300number of epochs0.00.20.40.60.81.0accuracym=0m=0.001m=0.01m=0.1m=1no marginFigure 1: Accuracy curves on the copy task for sequence lengths of (from left to right) T=200,T=500, T=1000, T=10000 given different spectral margins. Convergence speed increases with mar-gin size; however, large margin sizes are ineffective at longer sequence lengths (T=10000, right).margin initialization accuracy0 orthogonal 77.180.001 orthogonal 79.260.01 orthogonal 85.470.1 orthogonal 94.101 orthogonal 93.84none orthogonal 93.24none Glorot normal 66.71none identity 53.53LSTM 97.30Table 1: Ordered sequential MNIST classifica-tion with different margin sizes and an LSTM.margin initialization accuracy0 orthogonal 83.560.001 orthogonal 84.590.01 orthogonal 89.630.1 orthogonal 91.441 orthogonal 90.83none orthogonal 90.51none Glorot normal 79.33none identity 42.72LSTM 92.62Table 2: Permuted sequential MNIST classifica-tion with different margin sizes and an LSTM.3.1.2 P ERFORMANCE ON REAL DATAHaving confirmed that an orthogonality constraint can negatively impact convergence rate, we seekto investigate the effect on model performance for tasks on real data. We show the results of experi-ments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. 
The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1).

We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate $10^{-6}$) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity and clipping gradients at magnitude 30.

Table 3: Character prediction on PTB sentences of up to 75 characters, using different margins.

margin   initialization   bpc    accuracy
0        orthogonal       2.16   55.31
0.01     orthogonal       2.16   55.33
0.1      orthogonal       2.12   55.37
1        orthogonal       2.06   57.07
100      orthogonal       2.04   57.51
none     orthogonal       2.06   57.38
none     Glorot normal    2.08   57.37
none     identity         2.25   53.83

Table 4: Character prediction on PTB sentences of up to 300 characters, using different margins.

margin   initialization   bpc    accuracy
0        orthogonal       2.20   54.88
0.01     orthogonal       2.20   54.83
0.1      orthogonal       2.24   54.10
1        orthogonal       2.36   51.12
100      orthogonal       2.36   51.20
none     orthogonal       2.34   51.30
none     Glorot normal    2.34   51.04
none     identity         2.68   45.35

Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden-to-hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task, which presents longer-distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model's representational ability. Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST.
It is not surprising that orthogonality is useful for the MNIST tasks, since they depend on long-distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short-distance signal propagation is possible. Thus it is possible that the RNN first learns very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer-distance dependencies.

3.1.3 SPECTRAL AND GRADIENT EVOLUTION

It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.

Figure 2: The norm of the gradient of the loss from the last time step with respect to the hidden units at a given time step for a length-220 RNN over 1000 update iterations for different margins. Iterations are along the abscissa and time steps are denoted along the ordinate. The first column margins are: 0, 0.001, 0.01. The second column margins are: 0.1, 1, no margin. Gradient norms are normalized across the time dimension.

Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden-to-hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long-term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum).

We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3.
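Producing such summaries requires only the singular values of the transition matrix at each logging step, not the full decomposition. A minimal sketch of the kind of check involved (our own illustration, not code from the paper):

```python
import numpy as np

def spectrum_summary(W):
    """Summarize how far a transition matrix has drifted from orthogonality."""
    s = np.linalg.svd(W, compute_uv=False)  # singular values only
    return {"min": s.min(), "max": s.max(), "mean": s.mean(), "std": s.std()}
```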
Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments a tanh transition nonlinearity was used, which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well in the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.

Figure 3: Singular value evolution on the permuted sequential MNIST task for factorized RNNs with different margin sizes. Margins are, from left to right: top row: 0.001, 0.01, 0.1; bottom row: 1, no margin, no margin. The singular value distributions are summarized with the mean (green line, center) and standard deviation (green shading about mean), minimum (red, bottom) and maximum (blue, top) values. All models are initialized with orthogonal hidden-to-hidden transition matrices except for the model on the bottom right, where Glorot normal initialization is used.

3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS

Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden-to-hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix $W$ to be orthogonal, of the form $\|W^\top W - I\|_2^2$. This is similar to the orthogonality penalty introduced by Henaff et al. (2016).
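Both this penalty and the mean-one Gaussian prior on the singular values examined below reduce to simple differentiable terms added to the task loss. A minimal NumPy sketch of the two (the function and argument names are ours):

```python
import numpy as np

def orthogonality_penalty(W, strength):
    """strength * ||W^T W - I||_2^2, the soft orthogonality penalty on W."""
    d = W.T @ W - np.eye(W.shape[1])
    return strength * np.sum(d ** 2)

def singular_value_prior(s, strength):
    """Penalty form of a mean-one Gaussian prior on the singular values s
    (the vector diag(S) of the factorization), up to an additive constant."""
    return strength * np.sum((s - 1.0) ** 2)
```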
In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the $T = 200$ copy task and a factorized RNN with orthogonal bases on the $T = 500$ copy task. For the regular RNN, we had to reduce the learning rate to $10^{-5}$. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed.

Figure 4: Accuracy curves on the copy task for different strengths of soft orthogonality constraints. A soft orthogonality constraint is applied to the transition matrix $W$ for a regular RNN on $T = 200$ (left) and the same is applied on a factorized RNN on $T = 500$ (left center); curves correspond to strengths from 0.001 to 100. Another constraint in the form of a mean-one Gaussian prior on the singular values is applied to a factorized RNN on $T = 200$ (right center); the same is applied to a factorized RNN with a sigmoidal parameterization of the spectrum, using a large margin of 1 (right); curves correspond to strengths from 0.0001 to 100. Loosening orthogonality speeds convergence.

The second approach we explore replaces the sigmoidal margin parameterization with a mean-one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length-200 copy task, using geoSGD (learning rate $10^{-6}$) to keep $U$ and $V$ orthogonal, and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a $10^{-5}$ learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training.

4 CONCLUSIONS

We have explored a number of methods for controlling the expansivity of gradients during backpropagation-based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization.

ACKNOWLEDGMENTS

We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research.
ryRAK-8Vg
Interesting investigation into orthogonal parametrizations and initializations for RNNs
7: Good paper, accept
This paper investigates the impact of orthogonal weight matrices on learning dynamics in RNNs. The paper proposes a variety of interesting optimization formulations that enforce orthogonality in the recurrent weight matrix to varying degrees. The experimental results demonstrate several conclusions: enforcing exact orthogonality does not help learning, while enforcing soft orthogonality or initializing to orthogonal weights can substantially improve learning. While some of the optimization methods proposed currently require matrix inversion and are therefore slow in wall clock time, orthogonal initialization and some of the soft orthogonality constraints are relatively inexpensive and may find their way into practical use. The experiments are generally done to a high standard and yield a variety of useful insights, and the writing is clear. The experimental results are based on using a fixed learning rate for the different regularization strengths. Learning speed might be highly dependent on this, and different strengths may admit different maximal stable learning rates. It would be instructive to optimize the learning rate for each margin separately (maybe on one of the shorter sequence lengths) to see how soft orthogonality impacts the stability of the learning process. Fig. 5, for instance, shows that a sigmoid improves stability, but perhaps slightly reducing the learning rate for the non-sigmoid Gaussian prior RNN would make the learning well-behaved again for weightings less than 1. Fig. 4 shows singular values converging around 1.05 rather than 1. Does initializing to orthogonal matrices multiplied by 1.05 confer any noticeable advantage over standard orthogonal matrices? Especially on the T=10K copy task? “Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal suggesting that evolution away from orthogonality is not a serious problem on this task.” This is consistent with the analysis given in Saxe et al. 2013, where for deep linear nets, if a singular value is initialized to 1 but dies away during training, this is because it must be zero to implement the desired input-output map. More broadly, an open question has been whether orthogonality is useful as an initialization, as proposed by Saxe et al., where its role is mainly as a preconditioner which makes optimization proceed quickly but doesn't fundamentally change the optimization problem; or whether it is useful as a regularizer, as proposed by Arjovsky et al. 2015 and Henaff et al. 2015, that is, as an additional constraint in the optimization problem (minimize loss subject to weights being orthogonal). These experiments seem to show that mere initialization to orthogonal weights is enough to reap an optimization speed advantage, and that too much regularization begins to hurt performance; i.e., substantially changing the optimization problem is undesirable. This point is also apparent in Fig. 2: in terms of the training loss on MNIST (Fig. 2), no margin performs almost indistinguishably from a margin of 1 or 0.1. However, in terms of accuracy, a margin of 0.1 is best. This shows that large or nonexistent margins (i.e., orthogonal initializations) enable fast optimization of the training loss, but among models that attain similar training loss, the more nearly orthogonal weights perform better. This starts to separate out the optimization speed advantage conferred by orthogonality from the regularization advantage it confers.
It may be useful to more explicitly discuss the initialization vs regularization dimension in the text. Overall, this paper contributes a variety of techniques and intuitions which are likely to be useful in training RNNs.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HkuVu3ige
ICLR.cc/2017/conference
2017
On orthogonality and learning recurrent networks with long term dependencies
["Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal"]
It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.
["Deep learning"]
ABSTRACT

It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.

1 INTRODUCTION

The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an $L_2$ or $L_1$ weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass, and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly.

Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain, causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction, which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible $n \times n$ unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation.

The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent.
To produce a full-capacity parameterization for unitary matrices, they use some insights from Tagare (2011), combining the use of canonical inner products and Cayley transformations. Their experimental work indicates that full-capacity unitary RNN models can solve the copy memory problem, whereas both LSTM networks and restricted-capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length ($T = 2000$).

In contrast, here we explore the optimization of real-valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it.

1.1 VANISHING AND EXPLODING GRADIENTS

The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network. A neural network with $n$ hidden layers has pre-activations

$a_i(h_{i-1}) = W_i h_{i-1} + b_i, \quad i \in \{2, \dots, n\}$ (1)

For notational convenience, we combine parameters $W_i$ and $b_i$ to form an affine matrix $\theta$. We can see that for some loss function $L$ at layer $n$, the derivative with respect to parameters $\theta_i$ is:

$\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{n+1}}{\partial \theta_i} \frac{\partial L}{\partial a_{n+1}}$ (2)

The partial derivatives for the pre-activations can be decomposed as follows:

$\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \frac{\partial h_i}{\partial a_i} \frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i} D_i W_{i+1} \;\Rightarrow\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1},$ (3)

where $D_i$ is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer $i+1$ with respect to the pre-activation inputs. Typically, $D$ is diagonal. Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products:

$\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \prod_{j=i}^{n} (D_j W_{j+1}) \frac{\partial L}{\partial a_{n+1}}$ (4)

In (Pascanu et al., 2013), it is shown that the 2-norm of $\frac{\partial a_{i+1}}{\partial a_i}$ is bounded by the product of the norms of the non-linearity's Jacobian and transition matrix at time $t$ (layer $i$), as follows:

$\left\| \frac{\partial a_{t+1}}{\partial a_t} \right\| \le \|D_t\| \|W_t\| \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \quad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R},$ (5)

where $\lambda_{D_t}$ and $\lambda_{W_t}$ are the largest singular values of the non-linearity's Jacobian $D_t$ and the transition matrix $W_t$. In RNNs, $W_t$ is shared across time and can be simply denoted as $W$.

Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation $W$ and the gain of the Jacobian $D$. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where $W$ is shared across time steps and a non-unitary gain in $W$ is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have $\eta_t \le 1$ at each time $t$ to enable the possibility of vanishing gradients, typically for some large number of time steps $T$. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. The parameterization may be conditioned by placing appropriate constraints on $W$.
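The bound in Equation 5 is easy to verify numerically: the spectral norm of the product $\prod_j (D_j W_{j+1})$ grows or shrinks geometrically with the gain of $W$. A small NumPy sketch under the simplifying assumption of a constant, scaled-identity Jacobian $D$ (our own illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 64, 100

def jacobian_product_norm(W, d_gain=1.0):
    """Spectral norm of prod_t (D W) for a constant Jacobian D = d_gain * I."""
    J = np.eye(n)
    for _ in range(T):
        J = d_gain * (W @ J)
    return np.linalg.norm(J, 2)

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal: spectral norm 1
print(jacobian_product_norm(Q))                    # stays 1 (norm preserved)
print(jacobian_product_norm(1.05 * Q))             # explodes like 1.05**T
print(jacobian_product_norm(0.95 * Q))             # vanishes like 0.95**T
```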
It is worth keeping in mind that the Jacobian $D$ is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas $W$ can vary from being contractive to norm-preserving to expansive, and applies the same gain on the forward signal as on the back-propagated gradient signal.

2 OUR APPROACH

Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of $W$. The maximum gain of a matrix $W$ is given by the spectral norm, which is given by

$\|W\|_2 = \max_x \left[ \frac{\|Wx\|}{\|x\|} \right].$ (6)

By keeping our weight matrix $W$ close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form:

$\sum_i \|W_i^\top W_i - I\|_2.$ (7)

However, it is possible to formulate a more direct parameterization or factorization for $W$ which permits hard bounds on the amount of expansion and contraction induced by $W$. This can be achieved by simply parameterizing $W$ according to its singular value decomposition, which consists of the composition of orthogonal basis matrices $U$ and $V$ with a diagonal spectral matrix $S$ containing the singular values, which are real and positive by definition. We have

$W = U S V^\top.$ (8)

Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value.

We can keep the bases $U$ and $V$ orthogonal via geodesic gradient descent along the set of weights that satisfy $U^\top U = I$ and $V^\top V = I$ respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values.

During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix $M$, i.e. where $M = U$, $M = V$ or $M = W$ if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix $M$ and its Jacobian $G$ with respect to the objective function, an update is performed as follows:

$A = G M^\top - M G^\top,$
$M_{\mathrm{new}} = \left(I + \tfrac{\eta}{2} A\right)^{-1} \left(I - \tfrac{\eta}{2} A\right) M,$ (9)

where $A$ is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform, and $\eta$ is the learning rate.

While the update rule in (9) allows us to maintain an orthogonal hidden-to-hidden transition matrix $W$ if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix $W$ in factorized form, as a singular value decomposition with orthogonal bases $U$ and $V$ updated by geodesic gradient descent using the Cayley transform approach above.
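A minimal NumPy sketch of one step of this Cayley-transform update, in the spirit of the full-capacity update of Wisdom et al. (2016) (the function name is ours):

```python
import numpy as np

def cayley_update(M, G, lr):
    """One geodesic step keeping M (semi-)orthogonal.
    M: current orthogonal parameter matrix; G: dL/dM; lr: learning rate."""
    n = M.shape[0]
    A = G @ M.T - M @ G.T                      # skew-symmetric by construction
    I = np.eye(n)
    return np.linalg.solve(I + (lr / 2.0) * A, (I - (lr / 2.0) * A) @ M)
```

A linear solve is used instead of an explicit inverse; since $A$ is skew-symmetric, its eigenvalues are purely imaginary and $(I + \tfrac{\eta}{2} A)$ is always invertible.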
If $W$ is an orthogonal matrix, the singular values in the diagonal matrix $S$ are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation. Specifically, we define a margin $m$ around 1 within which the singular values must lie. This is achieved with the parameterization

$s_i = 2m\left(\sigma(p_i) - 0.5\right) + 1, \quad s_i \in \{\mathrm{diag}(S)\}, \; m \in [0, 1].$ (10)

The singular values are thus restricted to the range $[1 - m, 1 + m]$ and the underlying parameters $p_i$ are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values: they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value's progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering (10), the gradient backpropagation of some loss $L$ toward parameters $p_i$ is found as

$\frac{dL}{dp_i} = \frac{ds_i}{dp_i} \frac{dL}{ds_i} = 2m \frac{d\sigma(p_i)}{dp_i} \frac{dL}{ds_i}.$ (11)

From (11), it can be seen that the magnitude of the update step for $p_i$ is scaled by the margin hyperparameter $m$. This means, for example, that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by $2m$.

This margin formulation both guarantees that singular values lie within a well-defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of $U$ and $V$ and impose a regularization term corresponding to a mean-one Gaussian prior on these singular values. This encourages the weight matrix $W$ to be norm-preserving with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below.
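Assembling the factorization with the margin parameterization of Equation 10 is straightforward; a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_transition(U, V, p, m):
    """W = U S V^T with singular values confined to [1 - m, 1 + m].
    U, V: orthogonal bases; p: free spectrum parameters; m: spectral margin."""
    s = 2.0 * m * (sigmoid(p) - 0.5) + 1.0     # Equation 10
    return (U * s) @ V.T                       # equivalent to U @ diag(s) @ V.T
```

Note that the gradient of $s_i$ with respect to $p_i$ carries the factor $2m\,\sigma'(p_i)$ of Equation 11, which is why the learning rate along the spectrum is renormalized by $2m$ in the experiments.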
3 EXPERIMENTS

In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden-to-hidden transitions. With hard orthogonality constraints on $U$ and $V$, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix $W$ can deviate from orthogonality. We confirm that orthogonal initialization is useful as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993).

The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long-distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence. We use an input sequence of $T + 20$ elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol $a_i \in \{a_1, \dots, a_p\}$ out of $p = 8$ possible symbols. This sub-sequence is followed by $T - 1$ elements of the blank category $a_0$, which is terminated at step $T$ by a delimiter symbol $a_{p+1}$, and 10 more elements of the blank category. The network must learn to remember the initial 10-element sequence for $T$ time steps and output it after receiving the delimiter symbol.

The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length $T$. The sequence is composed of $T$ values sampled from a uniform distribution in the range $[0, 1)$, with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range $[0, T/2 - 1]$ and the second in the range $[T/2, T - 1]$, where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum.

In the sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task uses a plain flattening of the image matrices; the harder variant includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer-distance dependencies between pixels that must be interpreted by the classification model.

The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. We experiment on two subsets of the data: the first uses 23% of the data, restricted to strings of up to 75 characters, and the second includes over 99% of the dataset, picking strings with up to 300 characters.

3.1 LOOSENING HARD ORTHOGONALITY CONSTRAINTS

In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden-to-hidden transition matrix. In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50 and, for generated data (the copy and adding tasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments, although it may not be required, and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden-to-hidden matrix factorization as in (8), using geodesic gradient descent on the bases (learning rate $10^{-6}$) and RMSprop on the other parameters (learning rate 0.0001), with a tanh transition nonlinearity and gradient clipping at magnitude 100. The neural network code was built on the Theano framework (Theano Development Team, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy.

3.1.1 CONVERGENCE ON SYNTHETIC MEMORY TASKS

For different sequence lengths $T$ of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins $m$. For the copy task, we used Elman networks without a transition non-linearity, as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix.

As shown in Figure 1, we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths ($T = 200$, $T = 500$, $T = 1000$, $T = 10000$); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length $T = 10000$, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization, but loosening orthogonality constraints can reduce the stability of signal propagation through the network.

For the adding task, we trained a factorized RNN on $T = 1000$ length sequences, using a ReLU activation function on the hidden-to-hidden transition matrix. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins $m = 0$, $m = 1$, $m = 10$, $m = 100$, and no margin, we find that the models with the purely orthogonal ($m = 0$) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs.

Figure 1: Accuracy curves on the copy task for sequence lengths of (from left to right) T=200, T=500, T=1000, T=10000 given different spectral margins (m=0, m=0.001, m=0.01, m=0.1, m=1, no margin). Convergence speed increases with margin size; however, large margin sizes are ineffective at longer sequence lengths (T=10000, right).

Table 1: Ordered sequential MNIST classification with different margin sizes and an LSTM.

margin   initialization   accuracy
0        orthogonal       77.18
0.001    orthogonal       79.26
0.01     orthogonal       85.47
0.1      orthogonal       94.10
1        orthogonal       93.84
none     orthogonal       93.24
none     Glorot normal    66.71
none     identity         53.53
LSTM     -                97.30

Table 2: Permuted sequential MNIST classification with different margin sizes and an LSTM.

margin   initialization   accuracy
0        orthogonal       83.56
0.001    orthogonal       84.59
0.01     orthogonal       89.63
0.1      orthogonal       91.44
1        orthogonal       90.83
none     orthogonal       90.51
none     Glorot normal    79.33
none     identity         42.72
LSTM     -                92.62

3.1.2 PERFORMANCE ON REAL DATA

Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1).

We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate $10^{-6}$) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity and clipping gradients at magnitude 30.

Table 3: Character prediction on PTB sentences of up to 75 characters, using different margins.

margin   initialization   bpc    accuracy
0        orthogonal       2.16   55.31
0.01     orthogonal       2.16   55.33
0.1      orthogonal       2.12   55.37
1        orthogonal       2.06   57.07
100      orthogonal       2.04   57.51
none     orthogonal       2.06   57.38
none     Glorot normal    2.08   57.37
none     identity         2.25   53.83

Table 4: Character prediction on PTB sentences of up to 300 characters, using different margins.

margin   initialization   bpc    accuracy
0        orthogonal       2.20   54.88
0.01     orthogonal       2.20   54.83
0.1      orthogonal       2.24   54.10
1        orthogonal       2.36   51.12
100      orthogonal       2.36   51.20
none     orthogonal       2.34   51.30
none     Glorot normal    2.34   51.04
none     identity         2.68   45.35

Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden-to-hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task, which presents longer-distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model's representational ability. Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST.

It is not surprising that orthogonality is useful for the MNIST tasks, since they depend on long-distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short-distance signal propagation is possible. Thus it is possible that the RNN first learns very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer-distance dependencies.

3.1.3 SPECTRAL AND GRADIENT EVOLUTION

It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.

Figure 2: The norm of the gradient of the loss from the last time step with respect to the hidden units at a given time step for a length-220 RNN over 1000 update iterations for different margins. Iterations are along the abscissa and time steps are denoted along the ordinate. The first column margins are: 0, 0.001, 0.01. The second column margins are: 0.1, 1, no margin. Gradient norms are normalized across the time dimension.

Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden-to-hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long-term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum).

We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments a tanh transition nonlinearity was used, which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well in the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.

Figure 3: Singular value evolution on the permuted sequential MNIST task for factorized RNNs with different margin sizes. Margins are, from left to right: top row: 0.001, 0.01, 0.1; bottom row: 1, no margin, no margin. The singular value distributions are summarized with the mean (green line, center) and standard deviation (green shading about mean), minimum (red, bottom) and maximum (blue, top) values. All models are initialized with orthogonal hidden-to-hidden transition matrices except for the model on the bottom right, where Glorot normal initialization is used.

3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS

Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden-to-hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix $W$ to be orthogonal, of the form $\|W^\top W - I\|_2^2$. This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the $T = 200$ copy task and a factorized RNN with orthogonal bases on the $T = 500$ copy task. For the regular RNN, we had to reduce the learning rate to $10^{-5}$. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed.

Figure 4: Accuracy curves on the copy task for different strengths of soft orthogonality constraints. A soft orthogonality constraint is applied to the transition matrix $W$ for a regular RNN on $T = 200$ (left) and the same is applied on a factorized RNN on $T = 500$ (left center); curves correspond to strengths from 0.001 to 100. Another constraint in the form of a mean-one Gaussian prior on the singular values is applied to a factorized RNN on $T = 200$ (right center); the same is applied to a factorized RNN with a sigmoidal parameterization of the spectrum, using a large margin of 1 (right); curves correspond to strengths from 0.0001 to 100. Loosening orthogonality speeds convergence.

The second approach we explore replaces the sigmoidal margin parameterization with a mean-one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length-200 copy task, using geoSGD (learning rate $10^{-6}$) to keep $U$ and $V$ orthogonal, and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a $10^{-5}$ learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training.

4 CONCLUSIONS

We have explored a number of methods for controlling the expansivity of gradients during backpropagation-based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization.

ACKNOWLEDGMENTS

We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research.
ByCXAcHVl
Interesting question and proposed approach, with significance restricted by limited experimental settings.
5: Marginally below acceptance threshold
The paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks. While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality. As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution. The paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD-factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices, each requiring an expensive update step). I am unaware of this approach being investigated previously. The experimental side, however, is somewhat lacking. The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST. These are reasonable choices for an initial evaluation, but are both toy problems and don't shed much light on the practical aspects of the proposed approaches. An evaluation in a more realistic setting would be valuable (e.g., a language modeling task). Furthermore, while investigating pure RNNs makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed. It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there's any benefit when embedded within an LSTM, although this seems doubtful). Overall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets. As such I think it could provide a valuable contribution. However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures).
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyxQzBceg
ICLR.cc/2017/conference
2017
Deep Variational Information Bottleneck
["Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy"]
We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method “Deep Variational Information Bottleneck”, or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
["Theory", "Computer vision", "Deep learning", "Supervised Learning"]
ABSTRACT

We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.

1 INTRODUCTION

We adopt an information theoretic view of deep networks. We regard the internal representation of some intermediate layer as a stochastic encoding $Z$ of the input source $X$, defined by a parametric encoder $p(z|x;\theta)$.[1] Our goal is to learn an encoding that is maximally informative about our target $Y$, measured by the mutual information between our encoding and the target $I(Z,Y;\theta)$,[2] where

$$I(Z,Y;\theta) = \int dx\,dy\; p(z,y|\theta) \log \frac{p(z,y|\theta)}{p(z|\theta)\,p(y|\theta)}. \quad (1)$$

Given the data processing inequality, and the invariance of the mutual information to reparameterizations, if this was our only objective we could always ensure a maximally informative representation by taking the identity encoding of our data ($Z = X$), but this is not a useful representation of our data. Instead we would like to find the best representation we can obtain subject to a constraint on its complexity. A natural and useful constraint to apply is on the mutual information between our encoding and the original data, $I(X,Z) \le I_c$, where $I_c$ is the information constraint. This suggests the objective:

$$\max_\theta I(Z,Y;\theta) \;\; \text{s.t.} \;\; I(X,Z;\theta) \le I_c. \quad (2)$$

Equivalently, with the introduction of a Lagrange multiplier $\beta$, we can maximize the objective function

$$R_{IB}(\theta) = I(Z,Y;\theta) - \beta I(Z,X;\theta). \quad (3)$$

Here our goal is to learn an encoding $Z$ that is maximally expressive about $Y$ while being maximally compressive about $X$, where $\beta \ge 0$ controls the tradeoff.[3] This approach is known as the information bottleneck (IB), and was first proposed in Tishby et al. (1999). Intuitively, the first term in $R_{IB}$ encourages $Z$ to be predictive of $Y$; the second term encourages $Z$ to "forget" $X$. Essentially it forces $Z$ to act like a minimal sufficient statistic of $X$ for predicting $Y$.

The IB principle is appealing, since it defines what we mean by a good representation, in terms of the fundamental tradeoff between having a concise representation and one with good predictive power (Tishby & Zaslavsky, 2015a). The main drawback of the IB principle is that computing mutual information is, in general, computationally challenging. There are two notable exceptions: the first is when $X$, $Y$ and $Z$ are all discrete, as in Tishby et al. (1999); this can be used to cluster discrete data, such as words. The second case is when $X$, $Y$ and $Z$ are all jointly Gaussian (Chechik et al., 2005). However, these assumptions both severely constrain the class of learnable models.

In this paper, we propose to use variational inference to construct a lower bound on the IB objective in Equation 3. We call the resulting method VIB (variational information bottleneck). By using the reparameterization trick (Kingma & Welling, 2014), we can use Monte Carlo sampling to get an unbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradient descent. This allows us to use deep neural networks to parameterize our distributions, and thus to handle high-dimensional, continuous data, such as images, avoiding the previous restrictions to the discrete or Gaussian cases.

[1] In this work, $X, Y, Z$ are random variables, $x, y, z$ are instances of random variables, and $F(\cdot;\theta)$ and $f(\cdot;\theta)$ are functionals or functions parameterized by $\theta$.
[2] Note that in the present discussion, $Y$ is the ground truth label, which is independent of our parameters, so $p(y|\theta) = p(y)$.
[3] Note that, in our notation, large $\beta$ results in a highly compressed representation. In some works, the IB principle is formulated as the minimization of $I(Z,X) - \beta I(Z,Y)$, in which case large $\beta$ corresponds to high mutual information between $Z$ and $Y$, and hence low compression.
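Before moving to the variational construction, a toy numpy sketch may help make Equation 3 concrete: it evaluates both mutual information terms and the resulting $R_{IB}$ for small, made-up discrete joint distributions (the distributions and $\beta = 0.1$ below are purely hypothetical).

import numpy as np

def mutual_information(p_ab):
    # I(A;B) in nats for a joint distribution given as a 2-D array.
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))

# Hypothetical toy joints: p(x, z) induced by an encoder, and the implied p(y, z).
p_xz = np.array([[0.30, 0.05],
                 [0.05, 0.30],
                 [0.10, 0.20]])   # rows index x, columns index z
p_yz = np.array([[0.35, 0.10],
                 [0.10, 0.45]])   # rows index y, columns index z
beta = 0.1
r_ib = mutual_information(p_yz) - beta * mutual_information(p_xz)
print(f"I(Z,Y) = {mutual_information(p_yz):.3f} nats, R_IB = {r_ib:.3f}")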
We also show, by a series of experiments, that stochastic neural networks, fit using our VIB method, are robust to overfitting, since VIB finds a representation $Z$ which ignores as many details of the input $X$ as possible. In addition, they are more robust to adversarial inputs than deterministic models which are fit using (penalized) maximum likelihood estimation. Intuitively this is because each input image gets mapped to a distribution rather than a unique $Z$, so it is more difficult to pass small, idiosyncratic perturbations through the latent bottleneck.

2 RELATED WORK

The idea of using information theoretic objectives for deep neural networks was pointed out in Tishby & Zaslavsky (2015b). However, they did not include any experimental results, since their approach for optimizing the IB objective relied on the iterative Blahut-Arimoto algorithm, which is infeasible to apply to deep neural networks.

Variational inference is a natural way to approximate the problem. Variational bounds on mutual information have previously been explored in Agakov (2004), though not in conjunction with the information bottleneck objective. Mohamed & Rezende (2015) also explore variational bounds on mutual information, and apply them to deep neural networks, but in the context of reinforcement learning. We recently discovered Chalk et al. (2016), who independently developed the same variational lower bound on the IB objective as us. However, they apply it to sparse coding problems, and use the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks, which are computationally more efficient. In addition, we are able to handle large datasets by using stochastic gradient descent, whereas they use batch variational EM.

In the supervised learning literature, our work is related to the recently proposed confidence penalty (entropy regularization) method of Pereyra et al. (2016). In this work, they fit a deterministic network by optimizing an objective that combines the usual cross entropy loss with an extra term which penalizes models for having low entropy predictive distributions. In more detail, their cost function has the form

$$J_{CP} = \frac{1}{N}\sum_{n=1}^{N} \left[ H(p(y|y_n), p(y|x_n)) - \beta H(p(y|x_n)) \right] \quad (4)$$

where $H(p,q) = -\sum_y p(y)\log q(y)$ is the cross entropy, $H(p) = H(p,p)$ is the entropy, $p(y|y_n) = \delta_{y_n}(y)$ is a one-hot encoding of the label $y_n$, and $N$ is the number of training examples. (Note that setting $\beta = 0$ corresponds to the usual maximum likelihood estimate.) In Pereyra et al. (2016) they show that CP performs better than the simpler technique of label smoothing, in which we replace the zeros in the one-hot encoding of the labels by $\epsilon > 0$, and then renormalize so that the distribution still sums to one. We will compare our VIB method to both the confidence penalty method and label smoothing in Section 4.1.
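For comparison with the VIB objective developed below, the confidence penalty of Equation 4 is straightforward to write down; a minimal PyTorch sketch (the value of beta is a hypothetical setting, not one from Pereyra et al. (2016)):

import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    # Cross entropy H(one-hot, p(y|x)) minus beta times the entropy H(p(y|x)),
    # matching the form of Equation 4 averaged over the batch.
    ce = F.cross_entropy(logits, targets)
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()
    return ce - beta * entropy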
In the unsupervised learning literature, our work is closely related to the work in Kingma & Welling (2014) on variational autoencoders. In fact, their method is a special case of an unsupervised version of the VIB, but with the $\beta$ parameter fixed at 1.0, as we explain in Appendix B. The VAE objective, but with different values of $\beta$, was also explored in Higgins et al. (2016), but from a different perspective.

The method of Wang et al. (2016b) proposes a latent variable generative model of both $x$ and $y$; their variational lower bound is closely related to ours, with the following differences. First, we do not have a likelihood term for $x$, since we are in the discriminative setting. Second, they fix $\beta = 1$, since they do not consider compression.

Finally, the variational fair autoencoder of Louizos et al. (2016) shares with our paper the idea of ignoring parts of the input. However, in their approach, the user must specify which aspects of the input (the so-called "sensitive" parts) to ignore, whereas in our method, we can discover irrelevant parts of the input automatically.

3 METHOD

Following standard practice in the IB literature, we assume that the joint distribution $p(X,Y,Z)$ factors as follows:

$$p(X,Y,Z) = p(Z|X,Y)\,p(Y|X)\,p(X) = p(Z|X)\,p(Y|X)\,p(X) \quad (5)$$

i.e., we assume $p(Z|X,Y) = p(Z|X)$, corresponding to the Markov chain $Y \leftrightarrow X \leftrightarrow Z$. This restriction means that our representation $Z$ cannot depend directly on the labels $Y$. (This opens the door to unsupervised representation learning, which we will discuss in Appendix B.) Besides the structure in the joint data distribution $p(X,Y)$, the only content at this point is our model for the stochastic encoder $p(Z|X)$; all other distributions are fully determined by these and the Markov chain constraint.

Recall that the IB objective has the form $I(Z,Y) - \beta I(Z,X)$. We will examine each of these expressions in turn. Let us start with $I(Z,Y)$. Writing it out in full, this becomes

$$I(Z,Y) = \int dy\,dz\; p(y,z) \log \frac{p(y,z)}{p(y)\,p(z)} = \int dy\,dz\; p(y,z) \log \frac{p(y|z)}{p(y)}, \quad (6)$$

where $p(y|z)$ is fully defined by our encoder and Markov chain as follows:

$$p(y|z) = \int dx\; p(x,y|z) = \int dx\; p(y|x)\,p(x|z) = \int dx\; \frac{p(y|x)\,p(z|x)\,p(x)}{p(z)}. \quad (7)$$

Since this is intractable in our case, let $q(y|z)$ be a variational approximation to $p(y|z)$. This is our decoder, which we will take to be another neural network with its own set of parameters. Using the fact that the Kullback-Leibler divergence is always positive, we have

$$KL[p(Y|Z) \,\|\, q(Y|Z)] \ge 0 \implies \int dy\; p(y|z) \log p(y|z) \ge \int dy\; p(y|z) \log q(y|z), \quad (8)$$

and hence

$$I(Z,Y) \ge \int dy\,dz\; p(y,z) \log \frac{q(y|z)}{p(y)} \quad (9)$$
$$= \int dy\,dz\; p(y,z) \log q(y|z) - \int dy\; p(y) \log p(y) \quad (10)$$
$$= \int dy\,dz\; p(y,z) \log q(y|z) + H(Y). \quad (11)$$

Notice that the entropy of our labels $H(Y)$ is independent of our optimization procedure and so can be ignored.

Focusing on the first term in Equation 11, we can rewrite $p(y,z)$ as $p(y,z) = \int dx\; p(x,y,z) = \int dx\; p(x)\,p(y|x)\,p(z|x)$ (leveraging our Markov assumption), which gives us a new lower bound on the first term of our objective:

$$I(Z,Y) \ge \int dx\,dy\,dz\; p(x)\,p(y|x)\,p(z|x) \log q(y|z). \quad (12)$$

This only requires samples from both our joint data distribution as well as samples from our stochastic encoder, while it requires we have access to a tractable variational approximation in $q(y|z)$.
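The bound in Equations 9-11 is easy to verify numerically on a discrete toy problem: plugging any $q(y|z)$ into the right-hand side of Equation 11 stays below $I(Z,Y)$, with equality at $q(y|z) = p(y|z)$. A small numpy sketch (the random joint distribution is arbitrary and purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
p_yz = rng.dirichlet(np.ones(6)).reshape(2, 3)   # joint p(y, z): 2 labels, 3 codes
p_y, p_z = p_yz.sum(axis=1), p_yz.sum(axis=0)

def bound(q_y_given_z):
    # Right-hand side of Equation 11: E_{p(y,z)}[log q(y|z)] + H(Y).
    h_y = -np.sum(p_y * np.log(p_y))
    return float(np.sum(p_yz * np.log(q_y_given_z)) + h_y)

true_q = p_yz / p_z                               # p(y|z): the bound is tight here
rand_q = rng.dirichlet(np.ones(2), size=3).T      # an arbitrary decoder q(y|z)
print(bound(true_q), ">=", bound(rand_q))         # first value equals I(Z,Y)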
We now consider the term $\beta I(Z,X)$:

$$I(Z,X) = \int dz\,dx\; p(x,z) \log \frac{p(z|x)}{p(z)} = \int dz\,dx\; p(x,z) \log p(z|x) - \int dz\; p(z) \log p(z). \quad (13)$$

In general, while it is fully defined, computing the marginal distribution of $Z$, $p(z) = \int dx\; p(z|x)\,p(x)$, might be difficult. So let $r(z)$ be a variational approximation to this marginal. Since $KL[p(Z) \,\|\, r(Z)] \ge 0$ implies $\int dz\; p(z) \log p(z) \ge \int dz\; p(z) \log r(z)$, we have the following upper bound:

$$I(Z,X) \le \int dx\,dz\; p(x)\,p(z|x) \log \frac{p(z|x)}{r(z)}. \quad (14)$$

Combining both of these bounds we have that

$$I(Z,Y) - \beta I(Z,X) \ge \int dx\,dy\,dz\; p(x)\,p(y|x)\,p(z|x) \log q(y|z) - \beta \int dx\,dz\; p(x)\,p(z|x) \log \frac{p(z|x)}{r(z)} = L. \quad (15)$$

We now discuss how to compute the lower bound $L$ in practice. We can approximate $p(x,y) = p(x)\,p(y|x)$ using the empirical data distribution $p(x,y) = \frac{1}{N}\sum_{n=1}^{N} \delta_{x_n}(x)\,\delta_{y_n}(y)$, and hence we can write

$$L \approx \frac{1}{N}\sum_{n=1}^{N} \int dz \left[ p(z|x_n) \log q(y_n|z) - \beta\, p(z|x_n) \log \frac{p(z|x_n)}{r(z)} \right]. \quad (16)$$

Suppose we use an encoder of the form $p(z|x) = N(z \,|\, f_e^{\mu}(x), f_e^{\Sigma}(x))$, where $f_e$ is an MLP which outputs both the $K$-dimensional mean $\mu$ of $z$ as well as the $K \times K$ covariance matrix $\Sigma$. Then we can use the reparameterization trick (Kingma & Welling, 2014) to write $p(z|x)\,dz = p(\epsilon)\,d\epsilon$, where $z = f(x, \epsilon)$ is a deterministic function of $x$ and the Gaussian random variable $\epsilon$. This formulation has the important advantage that the noise term is independent of the parameters of the model, so it is easy to take gradients.

Assuming our choice of $p(z|x)$ and $r(z)$ allows computation of an analytic Kullback-Leibler divergence, we can put everything together to get the following objective function, which we try to minimize:

$$J_{IB} = \frac{1}{N}\sum_{n=1}^{N} \left\{ \mathbb{E}_{\epsilon \sim p(\epsilon)}\left[ -\log q(y_n \,|\, f(x_n, \epsilon)) \right] + \beta\, KL\left[ p(Z|x_n) \,\|\, r(Z) \right] \right\}. \quad (17)$$

As in Kingma & Welling (2014), this formulation allows us to directly backpropagate through a single sample of our stochastic code and ensure that our gradient is an unbiased estimate of the true expected gradient.[4]

4 EXPERIMENTAL RESULTS

In this section, we present various experimental results, comparing the behavior of standard deterministic networks to stochastic neural networks trained by optimizing the VIB objective.

4.1 BEHAVIOR ON MNIST

We start with experiments on unmodified MNIST (i.e. no data augmentation). In order to pick a model with some "headroom" to improve, we decided to use the same architecture as in the Pereyra et al. (2016) paper, namely an MLP with fully connected layers of the form 784 - 1024 - 1024 - 10, and ReLU activations. (Since we are not exploiting spatial information, this corresponds to the "permutation invariant" version of MNIST.) The performance of this baseline is 1.38% error. Pereyra et al. (2016) were able to improve this to 1.17% using their regularization technique. We were able to improve this to 1.13% using our technique, as we explain below.

In our method, the stochastic encoder has the form $p(z|x) = N(z \,|\, f_e^{\mu}(x), f_e^{\Sigma}(x))$, where $f_e$ is an MLP of the form 784 - 1024 - 1024 - 2K, where $K$ is the size of the bottleneck. The first $K$ outputs from $f_e$ encode $\mu$, the remaining $K$ outputs encode $\Sigma$ (after a softplus transform).

[4] Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we could similarly reparameterize through a sample of the divergence (Kingma & Welling, 2014; Blundell et al., 2015).

Model                                      error
Baseline                                   1.38%
Dropout                                    1.34%
Dropout (Pereyra et al., 2016)             1.40%
Confidence Penalty                         1.36%
Confidence Penalty (Pereyra et al., 2016)  1.17%
Label Smoothing                            1.40%
Label Smoothing (Pereyra et al., 2016)     1.23%
VIB ($\beta = 10^{-3}$)                    1.13%

Table 1: Test set misclassification rate on permutation-invariant MNIST using $K = 256$. We compare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in Pereyra et al. (2016) are due to slightly different hyperparameters.
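A minimal PyTorch sketch of the training objective in Equation 17, with the encoder shape just described (784-1024-1024-2K, softplus on $\sigma$), a linear decoder, and the fixed spherical Gaussian $r(z) = N(0, I)$; the value of beta and all other details here are illustrative, not the authors' TensorFlow code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    def __init__(self, in_dim=784, hidden=1024, k=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * k))            # first k outputs: mu, last k: sigma
        self.decoder = nn.Linear(k, n_classes)   # logistic-regression decoder q(y|z)

    def forward(self, x):
        mu, sigma = self.encoder(x).chunk(2, dim=-1)
        sigma = F.softplus(sigma)                 # sigma > 0, as in the paper
        z = mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return self.decoder(z), mu, sigma

def vib_loss(logits, y, mu, sigma, beta=1e-3):
    ce = F.cross_entropy(logits, y)   # single-sample estimate of -E[log q(y|z)]
    # Analytic KL(N(mu, sigma^2) || N(0, I)), summed over the K dimensions.
    kl = 0.5 * (mu.pow(2) + sigma.pow(2) - 2 * sigma.log() - 1).sum(-1).mean()
    return ce + beta * kl

# usage (hypothetical): logits, mu, sigma = model(x); loss = vib_loss(logits, y, mu, sigma)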
The decoder is a simple logistic regression model of the form $q(y|z) = S(y \,|\, f_d(z))$, where $S(a)_c = \exp(a_c) / \sum_{c'=1}^{C} \exp(a_{c'})$ is the softmax function, and $f_d(z) = Wz + b$ maps the $K$-dimensional latent code to the logits of the $C = 10$ classes. (In later sections, we consider more complex decoders, but here we wanted to show the benefits of VIB in a simple setting.)

Finally, we treat $r(z)$ as a fixed $K$-dimensional spherical Gaussian, $r(z) = N(z \,|\, 0, I)$.

We compare our method to the baseline MLP. We also consider the following deterministic limit of our model, when $\beta = 0$. In this case, we obtain the following objective function:

$$J_{IB0} = \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}_{z \sim N(f_e^{\mu}(x_n),\, f_e^{\Sigma}(x_n))}\left[ -\log S(y_n \,|\, f_d(z)) \right] \quad (18)$$

When $\beta \to 0$, we observe the VIB optimization process tends to make $f_e^{\Sigma}(x) \to 0$, so the network becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use $z = f_e^{\mu}(x)$ as the hidden encoding, and drop the Gaussian layer.

4.1.1 HIGHER DIMENSIONAL EMBEDDING

To demonstrate that our VIB method can achieve competitive classification results, we compared against a deterministic MLP trained with various forms of regularization. We use a $K = 256$ dimensional bottleneck and a diagonal Gaussian for $p(z|x)$. The networks were trained using TensorFlow for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001. Full hyperparameter details can be found in Appendix A.

The results are shown in Table 1. We see that we can slightly outperform other forms of regularization that have been proposed in the literature while using the same network for each. Of course, the performance varies depending on $\beta$. These results are not state of the art, nor is it the main focus of our work to suggest that VIB is the best regularization method by itself, which would require much more experimentation. However, using the same architecture for each experiment and comparing to VIB as the only source of regularization suggests VIB works as a decent regularizer in and of itself. Figure 1(a) plots the train and test error vs $\beta$, averaged over 5 trials (with error bars), for the case where we use a single Monte Carlo sample of $z$ when predicting, and also for the case where we average over 12 posterior samples (i.e., we use $p(y|x) = \frac{1}{S}\sum_{s=1}^{S} q(y|z_s)$ for $z_s \sim p(z|x)$, where $S = 12$). In our own investigations, a dozen samples seemed to be sufficient to capture any additional benefit the stochastic evaluations had to offer in this experiment.[5]

We see several interesting properties in Figure 1(a). First, we notice that the error rate shoots up once $\beta$ rises above the critical value of $\beta \approx 10^{-2}$. This corresponds to a setting where the mutual information between $X$ and $Z$ is less than $\log_2(10)$ bits, so the model can no longer represent the fact that there are 10 different classes. Second, we notice that, for small values of $\beta$, the test error is higher than the training error, which indicates that we are overfitting. This is because the network learns to be more deterministic, forcing $\Sigma \to 0$, thus reducing the benefits of regularization.

[5] A dozen samples wasn't chosen for any particular reason, except the old adage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay, 2003). They proved sufficient in this case.
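The 12-sample averaged prediction $p(y|x) = \frac{1}{S}\sum_{s=1}^{S} q(y|z_s)$ used for the "avg eval" curves could be sketched as follows, reusing the hypothetical VIB module from the previous snippet (each forward pass draws a fresh $z \sim p(z|x)$):

import torch

@torch.no_grad()
def predict_avg(model, x, n_samples=12):
    # Average the decoder's class probabilities over posterior samples of z.
    probs = 0.0
    for _ in range(n_samples):
        logits, _, _ = model(x)
        probs = probs + logits.softmax(dim=-1)
    return probs / n_samples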
Third, we notice that for intermediate values of $\beta$, Monte Carlo averaging helps. Interestingly, the region with the best performance roughly corresponds to where the added benefit from stochastic averaging goes away, suggesting an avenue by which one could try to optimize $\beta$ using purely statistics on the training set without a validation set. We have not extensively studied this possibility yet.

In Figure 1(c), we plot the IB curve, i.e., we plot $I(Z,Y)$ vs $I(Z,X)$ as we vary $\beta$. As we allow more information from the input through to the bottleneck (by lowering $\beta$), we increase the mutual information between our embedding and the label on the training set, but not necessarily on the test set, as is evident from the plot.

In Figure 1(d) we plot the second term in our objective, the upper bound on the mutual information between the images $X$ and our stochastic encoding $Z$, which in our case is simply the relative entropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is a logarithmic one. This demonstrates that our best results (when $\beta$ is between $10^{-3}$ and $10^{-2}$) occur where the mutual information between the stochastic encoding and the images is on the order of 10 to 100 bits.

Figure 1: Results of VIB model on MNIST. (a) Error rate vs $\beta$ for $K = 256$ on train and test set. "1 shot eval" means a single posterior sample of $z$, "avg eval" means 12 Monte Carlo samples. The spike in the error rate at $\beta \sim 10^{-2}$ corresponds to a model that is too highly regularized. Plotted values are the average over 5 independent training runs at each $\beta$. Error bars show the standard deviation in the results. (b) Same as (a), but for $K = 2$. Performance is much worse, since we pass through a very narrow bottleneck. (c) $I(Z,Y)$ vs $I(Z,X)$ as we vary $\beta$ for $K = 256$. We see that increasing $I(Z,X)$ helps training set performance, but can result in overfitting. (d) $I(Z,X)$ vs $\beta$ for $K = 256$. We see that for a good value of $\beta$, such as $10^{-2}$, we only need to store about 10 bits of information about the input.

4.1.2 TWO DIMENSIONAL EMBEDDING

To better understand the behavior of our method, we refit our model to MNIST using a $K = 2$ dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean and the Cholesky decomposition of the covariance matrix.) Figure 1(b) shows that, not surprisingly, the classification performance is worse (note the differently scaled axes), but the overall trends are the same as in the $K = 256$ dimensional case. The IB curve (not shown) also has a similar shape to before, except now the gap between training and testing is even larger.

Figure 2 provides a visualization of what the network is doing. We plot the posteriors $p(z|x)$ as a 2d Gaussian ellipse (representing the 95% confidence region) for 1000 images from the test set. Colors correspond to the true class labels.
In the background of each plot is the entropy of the variational classifier $q(y|z)$ evaluated at that point.

Figure 2: Visualizing embeddings of 1000 test images in two dimensions. We plot the 95% confidence interval of the Gaussian embedding $p(z|x) = N(\mu, \Sigma)$ as an ellipse. The images are colored according to their true class label. The background greyscale image denotes the entropy of the variational classifier evaluated at each two dimensional location. As $\beta$ becomes larger, we forget more about the input and the embeddings start to overlap to such a degree that the classes become indistinguishable. We also report the test error using a single sample, err$_1$, and using 12 Monte Carlo samples, err$_{mc}$. For "good" values of $\beta$, a single sample suffices. (a) $\beta = 10^{-3}$, err$_{mc}$ = 3.18%, err$_1$ = 3.24%. (b) $\beta = 10^{-1}$, err$_{mc}$ = 3.44%, err$_1$ = 4.32%. (c) $\beta = 10^{0}$, err$_{mc}$ = 33.82%, err$_1$ = 62.81%.

We see several interesting properties. First, as $\beta$ increases (so we pass less information through), the embedding covariances increase in relation to the distance between samples, and the classes start to overlap. Second, once $\beta$ passes a critical value, the encoding "collapses", and essentially all the class information is lost. Third, there is a fair amount of uncertainty in the class predictions ($q(y|z)$) in the areas between the class embeddings. Fourth, for intermediate values of $\beta$ (say $10^{-1}$ in Figure 2(b)), predictive performance is still good, even though there is a lot of uncertainty about where any individual image will map to in comparison to other images in the same class. This means it would be difficult for an outside agent to infer which particular instance the model is representing, a property which we will explore more in the following sections.

4.2 BEHAVIOR ON ADVERSARIAL EXAMPLES

Szegedy et al. (2013) was the first work to show that deep neural networks (and other kinds of classifiers) can be easily "fooled" into making mistakes by changing their inputs by imperceptibly small amounts. In this section, we will show how training with the VIB objective makes models significantly more robust to such adversarial examples.

4.2.1 TYPES OF ADVERSARIES

Since the initial work by Szegedy et al. (2013) and Goodfellow et al. (2014), many different adversaries have been proposed. Most attacks fall into three broad categories: optimization-based attacks (Szegedy et al., 2013; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2016; Papernot et al., 2015; Robinson & Graham, 2015; Sabour et al., 2016), which directly run an optimizer such as L-BFGS or ADAM (Kingma & Ba, 2015) on image pixels to find a minimal perturbation that changes the model's classification; single-step gradient-based attacks (Goodfellow et al., 2014; Kurakin et al., 2016; Huang et al., 2015), which choose a gradient direction of the image pixels at some loss and then take a single step in that direction; and iterative gradient-based attacks (Kurakin et al., 2016), which take multiple small steps along the gradient direction of the image pixels at some loss, recomputing the gradient direction after each step.[6]

Many adversaries can be formalized as either untargeted or targeted variants. An untargeted adversary can be defined as $A(X, M) \to X'$, where $A(\cdot)$ is the adversarial function, $X$ is the input image, $X'$ is the adversarial example, and $M$ is the target model. $A$ is considered successful if $M(X) \neq M(X')$.
Recently, Moosavi-Dezfooli et al. (2016) showed how to create a "universal" adversarial perturbation $\delta$ that can be added to any image $X$ in order to make $M(X + \delta) \neq M(X)$ for a particular target model.

A targeted adversary can be defined as $A(X, M, l) \to X'$, where $l$ is an additional target label, and $A$ is only considered successful if $M(X') = l$.[7] Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just "nudge" the input across the nearest decision boundary, but instead must force it into a desired decision region.

In this work, we focus on the Fast Gradient Sign (FGS) method proposed in Goodfellow et al. (2014) and the $L_2$ optimization method proposed in Carlini & Wagner (2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow et al. (2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with $\epsilon = 0.25$, where $\epsilon$ is the magnitude of the perturbation at each pixel. The $L_2$ optimization method has been shown to generate adversarial examples with smaller perturbations than any other method published to date, which were capable of fooling the target network 100% of the time. We consider both targeted attacks and untargeted attacks for the $L_2$ optimization method.[8]

4.2.2 ADVERSARIAL ROBUSTNESS

There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set, called adversarial examples.

It is also important to have a measure of the magnitude of the adversarial perturbation. Since adversaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure, it is common to compute the size of the perturbation using $L_0$, $L_1$, $L_2$, and $L_\infty$ norms (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Sabour et al., 2016). In particular, the $L_0$ norm measures the number of perturbed pixels, the $L_2$ norm measures the Euclidean distance between $X$ and $X'$, and the $L_\infty$ norm measures the largest single change to any pixel.

4.2.3 EXPERIMENTAL SETUP

We used the same model architectures as in Section 4.1, using a $K = 256$ bottleneck. The architectures included a deterministic (base) model trained by MLE; a deterministic model trained with dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VIB for various values of $\beta$.

For the VIB models, we use 12 posterior samples of $Z$ to compute the class label distribution $p(y|x)$. This helps ensure that the adversaries can get a consistent gradient when constructing the perturbation, and that they can get a consistent evaluation when checking if the perturbation was successful (i.e., it reduces the chance that the adversary "gets lucky" in its perturbation due to an untypical sample). We also ran the VIB models in "mean mode", where the $\sigma$s are forced to be 0. This had no noticeable impact on the results, so all reported results are for stochastic evaluation with 12 samples.

[6] There are also other adversaries that don't fall as cleanly into those categories, such as "fooling images" from Nguyen et al. (2014), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating adversaries by stochastic search for images near the decision boundary of multiple networks from Baluja et al. (2015).
[7] Sabour et al. (2016) proposes a variant of the targeted attack, $A(X_S, M, X_T, k) \to X'_S$, where $X_S$ is the source image, $X_T$ is a target image, and $k$ is a target layer in the model $M$. $A$ produces $X'_S$ by minimizing the difference in activations of $M$ at layer $k$ between $X_T$ and $X'_S$. The end result of this attack for a classification network is still that $M(X'_S)$ yields a target label implicitly specified by $X_T$ in a successful attack.
[8] Carlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and maximum $C$ value (see their paper for details).
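For reference, the single-step FGS baseline described in Section 4.2.1 amounts to one signed gradient step on the pixels; a standard PyTorch sketch of the untargeted variant, assuming a model that maps image batches to logits (this is the textbook method from Goodfellow et al. (2014), not the authors' exact attack code):

import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, epsilon=0.25):
    # Fast Gradient Sign: x' = clip(x + epsilon * sign(grad_x loss)).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()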
4.2.4 MNIST RESULTS AND DISCUSSION

We selected the first 10 zeros in the MNIST test set, and use the $L_2$ optimization adversary of Carlini & Wagner (2016) to try to perturb those zeros into ones.[9] Some sample results are shown in Figure 3. We see that the deterministic models are easily fooled by making small perturbations, but for the VIB models with reasonably large $\beta$, the adversary often fails to find an attack (indicated by the green borders) within the permitted number of iterations. Furthermore, when an attack is successful, it needs to be much larger for the VIB models. To quantify this, Figure 4 plots the magnitude of the perturbation (relative to that of the deterministic and dropout models) needed for a successful attack as a function of $\beta$. As $\beta$ increases, the $L_0$ norm of the perturbation decreases, but both $L_2$ and $L_\infty$ norms increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation.

Figure 5 plots the accuracy on FGS adversarial examples of the first 1000 images from the MNIST test set as a function of $\beta$. Each point in the plot corresponds to 3 separate executions of three different models trained with the same value of $\beta$. All models tested achieve over 98.4% accuracy on the unperturbed MNIST test set, so there is no appreciable measurement distortion due to underlying model accuracy.

Figure 6 plots the accuracy on $L_2$ optimization adversarial examples of the first 1000 images from the MNIST test set as a function of $\beta$. The same sets of three models per $\beta$ were tested three times, as with the FGS adversarial examples.

We generated both untargeted and targeted adversarial examples for Figure 6. For targeting, we generate a random target label different from the source label in order to avoid biasing the results with unevenly explored source/target pairs. We see that for a reasonably broad range of $\beta$ values, the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of 0% (the $L_2$ optimization attack is very effective on traditional model architectures).

Figure 6 also reveals a surprising level of adversarial robustness even when $\beta \to 0$. This can be explained by the theoretical framework of Fawzi et al. (2016). Their work proves that quadratic classifiers (e.g., $x^T A x$, symmetric $A$) have a greater capacity for adversarial robustness than linear classifiers. As we show in Appendix C, our Gaussian/softmax encoder/decoder is approximately quadratic for all $\beta < 1$.

[9] We chose this pair of labels since intuitively zeros and ones are the digits that are least similar in terms of human perception, so if the adversary can change a zero into a one without much human-noticeable perturbation, it is unlikely that the model has learned a representation similar to what humans learn.
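The perturbation statistics reported in this section ($L_0$, $L_2$, $L_\infty$) can be computed directly from an image pair; a minimal sketch:

import torch

def perturbation_norms(x, x_adv):
    # L0: number of changed pixels; L2: Euclidean distance; Linf: largest change.
    delta = (x_adv - x).flatten()
    return {"L0": int((delta != 0).sum()),
            "L2": float(delta.norm(p=2)),
            "Linf": float(delta.abs().max())}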
4.2.5 IMAGENET RESULTS AND DISCUSSION

VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate if VIB offers similar advantages for ImageNet, a more challenging natural image classification task. Recall that ImageNet has approximately 1M images spanning 1K classes. We preprocess images such that they are 299x299 pixels.

Architecture

We make use of publicly available, pretrained checkpoints[10] of Inception Resnet V2 (Szegedy et al., 2016) on ImageNet (Deng et al., 2009). The checkpoint obtains 80.4% classification accuracy on the ImageNet validation set. Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer. This new image representation has 1536 dimensions. The higher layers of the network continue to classify this representation with 80.4% accuracy; conditioned on this extraction, the classification model is simply logistic regression. To further speed training, we whitened the 1536-dimensional representation.

[10] Available at the Tensorflow Models repository in the Slim directory: https://github.com/tensorflow/models/tree/master/slim

Figure 3: The adversary is trying to force each 0 to be classified as a 1. Successful attacks have a red background. Unsuccessful attacks have a green background. In the case that the label is changed to an incorrect label different from the target label (i.e., the classifier outputs something other than 0 or 1), the background is purple. The first column is the original image. The second column is adversarial examples targeting our deterministic baseline model. The third column is adversarial examples targeting our dropout model. The remaining columns are adversarial examples targeting our VIB models for different $\beta$ (column headers: Orig, Det., Dropout, $\beta = 0$, $10^{-10}$, $10^{-8}$, $10^{-6}$, $10^{-4}$, $10^{-3}$, $10^{-2}$).

Figure 4: (a) Relative magnitude of the adversarial perturbation, measured using $L_0$, $L_2$, and $L_\infty$ norms, for the images in Figure 3 as a function of $\beta$. (We normalize all values by the corresponding norm of the perturbation against the base model.) As $\beta$ increases, $L_0$ decreases, but both $L_2$ and $L_\infty$ increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation. (b) Same as (a), but with the dropout model as the baseline. Dropout is more robust to the adversarial perturbations than the base deterministic model, but still performs much worse than the VIB model as $\beta$ increases.
Figure 5: Classification accuracy of VIB classifiers, divided by accuracy of baseline classifiers, on FGS-generated adversarial examples as a function of $\beta$. Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when $\beta = 0$ (not shown), the VIB model's performance is almost identical to when $\beta = 10^{-8}$. (a) FGS accuracy normalized by the base deterministic model performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when $\epsilon = 0.5$ to about 5% when $\epsilon = 0.35$. (b) Same as (a), but with the dropout model as the baseline. The dropout model is more robust than the base model, but less robust than VIB, particularly for stronger adversaries (i.e., larger values of $\epsilon$). The dropout model's accuracy on the adversarial examples ranges from about 5% when $\epsilon = 0.5$ to about 16% when $\epsilon = 0.35$. As in the other results, relative performance is more dramatic as $\beta$ increases, which seems to indicate that the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.

Figure 6: Classification accuracy (from 0 to 1) on $L_2$ adversarial examples (of all classes) as a function of $\beta$. The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, $\beta = 10^{-11}$ has performance indistinguishable from $\beta = 0$. The deterministic model and dropout model both have a classification accuracy of 0% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary reported in Carlini & Wagner (2016) on a convolutional network trained on MNIST.

Figure 7: The results of our ImageNet targeted $L_2$ optimization attack. In all cases we target a new label of 222 ("soccer ball"). Figure (a) shows the 30 images from the first 40 images in the ImageNet validation set that the VIB network classifies correctly. The class label is shown in green on each image. The predicted label and targeted label are shown in red. Figure (b) shows adversarial examples of the same images generated by attacking our VIB network with $\beta = 0.01$. While all of the attacks change the classification of the image, in 13 out of 30 examples the attack fails to hit the intended target class ("soccer ball"). Pink crosses denote cases where the attack failed to force the model to misclassify the image as a soccer ball. Figure (c) shows the same result but for our deterministic baseline operating on the whitened precomputed features. The attack always succeeds. Figure (d) is the same but for the original full Inception ResNet V2 network without modification. The attack always succeeds.
There are slight variations in the set of adversarial examples shown for each network because we limited the adversarial search to correctly classified images. In the case of the deterministic baseline and original Inception ResNet V2 network, the perturbations are hardly noticeable in the perturbed images, but in many instances, the perturbations for the VIB network can be perceived.

Figure 8: Shown are the absolute differences between the original and final perturbed images for all three networks. The left block shows the perturbations created while targeting the VIB network. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to emphasize the difference in the magnitude of the perturbations. The VIB network required much larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13 of those cases.

Under this transformation, the experiment regime is identical to the permutation invariant MNIST task. We therefore used a similar model architecture. Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is characterized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier, itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used in MNIST; more details are in Appendix A.

Classification

We see the same favorable VIB classification performance in ImageNet as in MNIST. By varying $\beta$, the estimated mutual information between encoding and image ($I(Z,X)$) varies as well. At large values of $\beta$ accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a $\beta = 0$ regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal hyperparameters.

Overall the best accuracy we achieved was using $\beta = 0.01$. Under this setting we saw an accuracy of 80.12%, nearly the same as the state-of-the-art unmodified network, but with a substantially smaller information footprint, only $I(X,Z) \approx 45$ bits. This is a surprisingly small amount of information; $\beta = 0$ implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network but with a 1024-unit fully connected linear layer instead of the stochastic embedding and without the VIB loss, similarly only achieved 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameter settings or inadequate training.

Considering a continuum of $\beta$ and a deterministic baseline, the best classification accuracy was achieved with a $\beta = 0.01 \in (0, 1)$. In other words, VIB offered an accuracy benefit yet used a mere 45 bits of information from each image.
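The precompute-and-cache recipe described under Architecture (run the pretrained network once, store the penultimate-layer features, then train the small stochastic classifier on top) might look like the following stand-in. The paper uses a TF-Slim Inception ResNet V2 checkpoint with 1536-dimensional features; the torchvision Inception v3 below (2048-dimensional) is only an illustrative substitute following the same recipe.

import torch
import torchvision.models as models

net = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
net.fc = torch.nn.Identity()       # expose the penultimate (2048-d) representation
net.eval()

@torch.no_grad()
def extract_features(batch):       # batch: N x 3 x 299 x 299, normalized images
    return net(batch)              # N x 2048 features, computed once and cached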
Adversarial Robustness

We next show that the VIB-trained network improves resistance to adversarial attack. We focus on the Carlini targeted $L_2$ attack (see Section 4.2.1). We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the original pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted $L_2$ optimization attack in both magnitude of perturbation and frequency of successful attack.

Figure 7 shows some example images which were all misclassified as "soccer balls" by the deterministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in being mislabeled as the target label.[11] We find that the VIB model can resist about 43.3% of the attacks, but the deterministic models always fail (i.e., always misclassify into the targeted label). Figure 8 shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure 7. We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table 2.

Metric             Determ   IRv2    VIB(0.01)
Successful target  1.0      1.0     0.567
$L_2$              6.45     14.43   43.27
$L_\infty$         0.18     0.44    0.92

Table 2: Quantitative results showing how the different Inception Resnet V2-based architectures (described in Section 4.2.5) respond to targeted $L_2$ adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception Resnet V2 architecture, and VIB(0.01) is the VIB architecture with $\beta = 0.01$. Successful target is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball). Lower is better. $L_2$ and $L_\infty$ are the average $L$ distances between the original images and the adversarial examples. Larger values mean the adversary had to make a larger perturbation to change the class.

5 FUTURE DIRECTIONS

There are many possible directions for future work, including: putting the VIB objective at multiple or every layer of a network; testing on real images; using richer parametric marginal approximations, rather than assuming $r(z) = N(0, I)$; exploring the connections to differential privacy (see e.g., Wang et al. (2016a); Cuff & Yu (2016)); and investigating open universe classification problems (see e.g., Bendale & Boult (2015)). In addition, we would like to explore applications to sequence prediction, where $X$ denotes the past of the sequence and $Y$ the future, while $Z$ is the current representation of the network. This form of the information bottleneck is known as predictive information (Bialek et al., 2001; Palmer et al., 2015).
r1dPtY0me
A paper with interesting methods, but the presentation is a bit confusing
6: Marginally above acceptance threshold
Thank you for an interesting read. I personally like the information bottleneck principle and am very happy to see its application to deep neural networks. To my knowledge, this is the first paper that applies IB to train deep networks (the original papers only presented the concept), but see below for the note on the independent-work claim. The derivation of the variational lower bound is very clear, even for those who are not very familiar with variational inference. Also the explanation of the IB principle is clear. Experimental results seem to be very promising. I found the presentation of the model a bit confusing. In variational inference/information maximisation, p usually denotes the model and q represents the "inference engine". This means the choice of inference method is independent of the modelling procedure. However the presented VIB assumed p(x, y) as the **underlying data distribution** (and approximated by the empirical distribution), thus here the model is actually q(y|z)p(z|x). Then the authors presented p(y|x) as the **predictive distribution** in page 8, paragraph 2 of section 4.2.3. Predictive in what sense? I guess you meant p(y|x) = \int q(y|z) p(z|x) dz in this case, but this makes the two definitions contradict each other! The authors have made an interesting connection to the variational auto-encoder and warm-up training (by tuning beta). However, even when the loss function formula is the same as the variational lower bound used in the VAE (in this case beta = 1), the underlying model is different! For example, r(z) in VIB is the variational approximation to p(z) (which means r(z) is not a component in the model), while in the VAE it is the prior distribution, which is actually defined in the modelling procedure. Similarly p(z|x) in VIB is included in the model, while in the VAE it is the approximate posterior and can be independently chosen (e.g. you can use p(x|z) as a deep NN but p(z|x) as a deep NN or a Gaussian process). In summary, I think the presentation of the modelling procedure is unclear. I hope these points will be made clearer in revision, since the current presentation makes me uncomfortable as a Bayesian person. In the VAE part, it's better to clearly mention the difference between VIB and VAE, and provide some intuitions if the VIB interpretation is preferred. Typos: Eq. 9-11: did you mean q(y|z) instead of q(z|y)? Fig 2 "as beta becomes smaller": did you mean "larger"? **claim for independent work** The authors claimed that the manuscript presented work independent of Chalk et al. 2016, which has been online since May 2016. It seems to me that deep learning research is nowadays so competitive that many people publish the same idea at the same time. So I would trust this claim and commend the authors' honesty, but in case this is not true, I would not recommend the manuscript for acceptance.
3: The reviewer is fairly confident that the evaluation is correct
HyxQzBceg
ICLR.cc/2017/conference
2017
Deep Variational Information Bottleneck
["Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy"]
We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method “Deep Variational Information Bottleneck”, or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
["Theory", "Computer vision", "Deep learning", "Supervised Learning"]
ABSTRACTWe present a variational approximation to the information bottleneck of Tishbyet al. (1999). This variational approach allows us to parameterize the informa-tion bottleneck model using a neural network and leverage the reparameterizationtrick for efficient training. We call this method “Deep Variational InformationBottleneck”, or Deep VIB. We show that models trained with the VIB objectiveoutperform those that are trained with other forms of regularization, in terms ofgeneralization performance and robustness to adversarial attack.1 I NTRODUCTIONWe adopt an information theoretic view of deep networks. We regard the internal representation ofsome intermediate layer as a stochastic encoding Zof the input source X, defined by a parametricencoderp(zjx;).1Our goal is to learn an encoding that is maximally informative about our targetY, measured by the mutual information between our encoding and the target I(Z;Y;), whereI(Z;Y;) =Zdxdyp (z;yj) logp(z;yj)p(zj)p(yj):2(1)Given the data processing inequality, and the invariance of the mutual information to reparameteriza-tions, if this was our only objective we could always ensure a maximally informative representationby taking the identity encoding of our data (Z=X), but this is not a useful representation of ourdata. Instead we would like to find the best representation we can obtain subject to a constraint onits complexity. A natural and useful constraint to apply is on the mutual information between ourencoding and the original data, I(X;Z)Ic, whereIcis the information constraint. This suggeststhe objective:maxI(Z;Y;)s.t.I(X;Z;)Ic: (2)Equivalently, with the introduction of a Lagrange multiplier , we can maximize the objective func-tionRIB() =I(Z;Y;)I(Z;X;): (3)Here our goal is to learn an encoding Zthat is maximally expressive about Ywhile being maximallycompressive about X, where0controls the tradeoff.3This approach is known as the informa-tion bottleneck (IB), and was first proposed in Tishby et al. (1999). Intuitively, the first term in RIBencouragesZto be predictive of Y; the second term encourages Zto “forget”X. Essentially itforcesZto act like a minimal sufficient statistic of Xfor predicting Y.The IB principle is appealing, since it defines what we mean by a good representation, in terms of thefundamental tradeoff between having a concise representation and one with good predictive power(Tishby & Zaslavsky, 2015a). The main drawback of the IB principle is that computing mutualinformation is, in general, computationally challenging. There are two notable exceptions: the first1In this work, X;Y;Z are random variables, x;y;z andx;y;zare instances of random variables, andF(;)andf(;)are functionals or functions parameterized by .2Note that in the present discussion, Yis the ground truth label which is independent of our parameters sop(yj) =p(y).3Note that, in our notation, large results in a highly compressed representation. In some works, the IBprinciple is formulated as the minimization of I(Z;X )I(Z;Y ), in which case large corresponds to highmutual information between ZandY, and hence low compression.1Published as a conference paper at ICLR 2017is whenX,YandZare all discrete, as in Tishby et al. (1999); this can be used to cluster discretedata, such as words. The second case is when X,YandZare all jointly Gaussian (Chechik et al.,2005). However, these assumptions both severely constrain the class of learnable models.In this paper, we propose to use variational inference to construct a lower bound on the IB objectivein Equation 3. 
We call the resulting method VIB (variational information bottleneck). By using thereparameterization trick (Kingma & Welling, 2014), we can use Monte Carlo sampling to get anunbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradientdescent. This allows us to use deep neural networks to parameterize our distributions, and thus tohandle high-dimensional, continuous data, such as images, avoiding the previous restrictions to thediscrete or Gaussian cases.We also show, by a series of experiments, that stochastic neural networks, fit using our VIB method,are robust to overfitting, since VIB finds a representation Zwhich ignores as many details of theinputXas possible. In addition, they are more robust to adversarial inputs than deterministic modelswhich are fit using (penalized) maximum likelihood estimation. Intuitively this is because each inputimage gets mapped to a distribution rather than a unique Z, so it is more difficult to pass small,idiosyncratic perturbations through the latent bottleneck.2 R ELATED WORKThe idea of using information theoretic objectives for deep neural networks was pointed out inTishby & Zaslavsky (2015b). However, they did not include any experimental results, since theirapproach for optimizing the IB objective relied on the iterative Blahut Arimoto algorithm, which isinfeasible to apply to deep neural networks.Variational inference is a natural way to approximate the problem. Variational bounds on mutualinformation have previously been explored in Agakov (2004), though not in conjunction with theinformation bottleneck objective. Mohamed & Rezende (2015) also explore variational bounds onmutual information, and apply them to deep neural networks, but in the context of reinforcementlearning. We recently discovered Chalk et al. (2016), who independently developed the same varia-tional lower bound on the IB objective as us. However, they apply it to sparse coding problems, anduse the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks,which are computationally more efficient. In addition, we are able to handle large datasets by usingstochastic gradient descent, whereas they use batch variational EM.In the supervised learning literature, our work is related to the recently proposed confidence penalty(entropy regularization) method of (Pereyra et al., 2016). In this work, they fit a deterministicnetwork by optimizing an objective that combines the usual cross entropy loss with an extra termwhich penalizes models for having low entropy predictive distributions. In more detail, their costfunction has the formJCP=1NNXn=1[H(p(yjyn);p(yjxn))H(p(yjxn))] (4)whereH(p;q) =Pyp(y) logq(y)is the cross entropy, H(p) =H(p;p)is the entropy,p(yjyn) =yn(y)is a one-hot encoding of the label yn, andNis the number of training exam-ples. (Note that setting = 0corresponds to the usual maximum likelihood estimate.) In (Pereyraet al., 2016) they show that CP performs better than the simpler technique of label smoothing, inwhich we replace the zeros in the one-hot encoding of the labels by >0, and then renormalizeso that the distribution still sums to one. We will compare our VIB method to both the confidencepenalty method and label smoothing in Section 4.1.In the unsupervised learning literature, our work is closely related to the work in Kingma & Welling(2014) on variational autoencoders. 
In fact, their method is a special case of an unsupervised versionof the VIB, but with the parameter fixed at 1.0, as we explain in Appendix B. The V AE objective,but with different values of , was also explored in Higgins et al. (2016), but from a differentperspective.The method of Wang et al. (2016b) proposes a latent variable generative model of both xandy;their variational lower bound is closely related to ours, with the following differences. First, we do2Published as a conference paper at ICLR 2017not have a likelihood term for x, since we are in the discriminative setting. Second, they fix = 1,since they do not consider compression.Finally, the variational fair autoencoder of Louizos et al. (2016) shares with our paper the idea ofignoring parts of the input. However, in their approach, the user must specify which aspects of theinput (the so-called “sensitive” parts) to ignore, whereas in our method, we can discover irrelevantparts of the input automatically.3 M ETHODFollowing standard practice in the IB literature, we assume that the joint distribution p(X;Y;Z )factors as follows:p(X;Y;Z ) =p(ZjX;Y )p(YjX)p(X) =p(ZjX)p(YjX)p(X) (5)i.e., we assume p(ZjX;Y ) =p(ZjX), corresponding to the Markov chain Y$X$Z. Thisrestriction means that our representation Zcannot depend directly on the labels Y. (This opensthe door to unsupervised representation learning, which we will discuss in Appendix B.) Besidesthe structure in the joint data distribution p(X;Y ), the only content at this point is our model forthe stochastic encoder p(ZjX), all other distributions are fully determined by these and the Markovchain constraint.Recall that the IB objective has the form I(Z;Y)I(Z;X). We will examine each of theseexpressions in turn. Let us start with I(Z;Y). Writing it out in full, this becomesI(Z;Y) =Zdydzp (y;z) logp(y;z)p(y)p(z)=Zdydzp (y;z) logp(yjz)p(y): (6)wherep(yjz)is fully defined by our encoder and Markov Chain as follows:p(yjz) =Zdxp(x;yjz) =Zdxp(yjx)p(xjz) =Zdxp(yjx)p(zjx)p(x)p(z): (7)Since this is intractable in our case, let q(yjz)be a variational approximation to p(yjz). This is ourdecoder, which we will take to be another neural network with its own set of parameters. Using thefact that the Kullback Leibler divergence is always positive, we haveKL[p(YjZ);q(YjZ)]0 =)Zdyp(yjz) logp(yjz)Zdyp(yjz) logq(yjz); (8)and henceI(Z;Y)Zdydzp (y;z) logq(yjz)p(y)(9)=Zdydzp (y;z) logq(yjz)Zdyp(y) logp(y) (10)=Zdydzp (y;z) logq(yjz) +H(Y): (11)Notice that the entropy of our labels H(Y)is independent of our optimization procedure and so canbe ignored.Focusing on the first term in Equation 11, we can rewrite p(y;z)asp(y;z) =Rdxp(x;y;z ) =Rdxp(x)p(yjx)p(zjx)(leveraging our Markov assumption), which gives us a new lower bound onthe first term of our objective:I(Z;Y)Zdxdydzp (x)p(yjx)p(zjx) logq(yjz): (12)This only requires samples from both our joint data distribution as well as samples from our stochas-tic encoder, while it requires we have access to a tractable variational approximation in q(yjz).We now consider the term I(Z;X):I(Z;X) =Zdzdxp (x;z) logp(zjx)p(z)=Zdzdxp (x;z) logp(zjx)Zdzp(z) logp(z):(13)3Published as a conference paper at ICLR 2017In general, while it is fully defined, computing the marginal distribution of Z,p(z) = Rdxp(zjx)p(x), might be difficult. 
So let r(z) be a variational approximation to this marginal. Since KL[p(Z), r(Z)] \geq 0 \implies \int dz \, p(z) \log p(z) \geq \int dz \, p(z) \log r(z), we have the following upper bound:

I(Z; X) \leq \int dx \, dz \, p(x) p(z|x) \log \frac{p(z|x)}{r(z)}.  (14)

Combining both of these bounds we have that

I(Z; Y) - \beta I(Z; X) \geq \int dx \, dy \, dz \, p(x) p(y|x) p(z|x) \log q(y|z) - \beta \int dx \, dz \, p(x) p(z|x) \log \frac{p(z|x)}{r(z)} = L.  (15)

We now discuss how to compute the lower bound L in practice. We can approximate p(x, y) = p(x) p(y|x) using the empirical data distribution p(x, y) = \frac{1}{N} \sum_{n=1}^{N} \delta_{x_n}(x) \delta_{y_n}(y), and hence we can write

L \approx \frac{1}{N} \sum_{n=1}^{N} \int dz \, \left[ p(z|x_n) \log q(y_n|z) - \beta \, p(z|x_n) \log \frac{p(z|x_n)}{r(z)} \right].  (16)

Suppose we use an encoder of the form p(z|x) = N(z | f_e^{\mu}(x), f_e^{\Sigma}(x)), where f_e is an MLP which outputs both the K-dimensional mean μ of z as well as the K × K covariance matrix Σ. Then we can use the reparameterization trick (Kingma & Welling, 2014) to write p(z|x) dz = p(ε) dε, where z = f(x, ε) is a deterministic function of x and the Gaussian random variable ε. This formulation has the important advantage that the noise term is independent of the parameters of the model, so it is easy to take gradients.

Assuming our choice of p(z|x) and r(z) allows computation of an analytic Kullback-Leibler divergence, we can put everything together to get the following objective function, which we try to minimize:

J_{IB} = \frac{1}{N} \sum_{n=1}^{N} E_{ε \sim p(ε)} \left[ -\log q(y_n | f(x_n, ε)) \right] + \beta \, KL[p(Z|x_n), r(Z)].  (17)

As in Kingma & Welling (2014), this formulation allows us to directly backpropagate through a single sample of our stochastic code and ensure that our gradient is an unbiased estimate of the true expected gradient.[4]

[4] Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we could similarly reparameterize through a sample of the divergence (Kingma & Welling, 2014; Blundell et al., 2015).

4 EXPERIMENTAL RESULTS

In this section, we present various experimental results, comparing the behavior of standard deterministic networks to stochastic neural networks trained by optimizing the VIB objective.

4.1 BEHAVIOR ON MNIST

We start with experiments on unmodified MNIST (i.e. no data augmentation). In order to pick a model with some "headroom" to improve, we decided to use the same architecture as in the Pereyra et al. (2016) paper, namely an MLP with fully connected layers of the form 784 - 1024 - 1024 - 10, and ReLU activations. (Since we are not exploiting spatial information, this corresponds to the "permutation invariant" version of MNIST.) The performance of this baseline is 1.38% error. Pereyra et al. (2016) were able to improve this to 1.17% using their regularization technique. We were able to improve this to 1.13% using our technique, as we explain below.

In our method, the stochastic encoder has the form p(z|x) = N(z | f_e^{\mu}(x), f_e^{\Sigma}(x)), where f_e is an MLP of the form 784 - 1024 - 1024 - 2K, where K is the size of the bottleneck. The first K outputs from f_e encode μ, the remaining K outputs encode σ (after a softplus transform).
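The following is a minimal sketch of this setup for the diagonal-Gaussian case used in the MNIST experiments (Equation 17 with a single posterior sample per step); the class name, the PyTorch framing, and the one-sample choice are illustrative assumptions, and the linear decoder anticipates the logistic-regression decoder described below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBNet(nn.Module):
    def __init__(self, K=256, num_classes=10):
        super().__init__()
        # Encoder MLP 784 - 1024 - 1024 - 2K; input is flattened MNIST.
        self.encoder = nn.Sequential(
            nn.Linear(784, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * K))
        self.decoder = nn.Linear(K, num_classes)  # logistic regression q(y|z)
        self.K = K

    def forward(self, x):
        stats = self.encoder(x)
        # First K outputs are the mean, the rest give sigma after softplus.
        mu, sigma = stats[:, :self.K], F.softplus(stats[:, self.K:])
        z = mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return self.decoder(z), mu, sigma

def vib_loss(logits, labels, mu, sigma, beta):
    # -E[log q(y|z)] from a single posterior sample, plus beta times the
    # analytic KL between N(mu, diag(sigma^2)) and the prior r(z) = N(0, I).
    nll = F.cross_entropy(logits, labels)
    kl = 0.5 * (mu.pow(2) + sigma.pow(2) - 2 * sigma.log() - 1).sum(1).mean()
    return nll + beta * kl
```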
| Model | error |
| --- | --- |
| Baseline | 1.38% |
| Dropout | 1.34% |
| Dropout (Pereyra et al., 2016) | 1.40% |
| Confidence Penalty | 1.36% |
| Confidence Penalty (Pereyra et al., 2016) | 1.17% |
| Label Smoothing | 1.40% |
| Label Smoothing (Pereyra et al., 2016) | 1.23% |
| VIB (β = 10^{-3}) | 1.13% |

Table 1: Test set misclassification rate on permutation-invariant MNIST using K = 256. We compare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in Pereyra et al. (2016) are due to slightly different hyperparameters.

The decoder is a simple logistic regression model of the form q(y|z) = S(y | f_d(z)), where S(a) = [\exp(a_c) / \sum_{c'=1}^{C} \exp(a_{c'})] is the softmax function, and f_d(z) = W z + b maps the K-dimensional latent code to the logits of the C = 10 classes. (In later sections, we consider more complex decoders, but here we wanted to show the benefits of VIB in a simple setting.)

Finally, we treat r(z) as a fixed K-dimensional spherical Gaussian, r(z) = N(z | 0, I).

We compare our method to the baseline MLP. We also consider the following deterministic limit of our model, when β = 0. In this case, we obtain the following objective function:

J_{IB0} = -\frac{1}{N} \sum_{n=1}^{N} E_{z \sim N(f_e^{\mu}(x_n), f_e^{\Sigma}(x_n))} \left[ \log S(y_n | f_d(z)) \right]  (18)

When β → 0, we observe the VIB optimization process tends to make f_e^{\Sigma}(x) → 0, so the network becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use z = f_e^{\mu}(x) as the hidden encoding, and drop the Gaussian layer.

4.1.1 HIGHER DIMENSIONAL EMBEDDING

To demonstrate that our VIB method can achieve competitive classification results, we compared against a deterministic MLP trained with various forms of regularization. We use a K = 256 dimensional bottleneck and a diagonal Gaussian for p(z|x). The networks were trained using TensorFlow for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001. Full hyperparameter details can be found in Appendix A.

The results are shown in Table 1. We see that we can slightly outperform other forms of regularization that have been proposed in the literature while using the same network for each. Of course, the performance varies depending on β. These results are not state of the art, nor is it the main focus of our work to suggest that VIB is the best regularization method by itself, which would require much more experimentation. However, using the same architecture for each experiment and comparing to VIB as the only source of regularization suggests VIB works as a decent regularizer in and of itself. Figure 1(a) plots the train and test error vs β, averaged over 5 trials (with error bars) for the case where we use a single Monte Carlo sample of z when predicting, and also for the case where we average over 12 posterior samples (i.e., we use p(y|x) = \frac{1}{S} \sum_{s=1}^{S} q(y|z_s) for z_s \sim p(z|x), where S = 12). In our own investigations, a dozen samples seemed to be sufficient to capture any additional benefit the stochastic evaluations had to offer in this experiment.[5]

[5] A dozen samples wasn't chosen for any particular reason, except the old adage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay, 2003). They proved sufficient in this case.
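A minimal sketch of the 12-sample "avg eval" prediction rule, reusing the VIBNet sketch above (the helper name and the default sample count are our own):

```python
import torch
import torch.nn.functional as F

def predict_avg(model, x, num_samples=12):
    # Approximates p(y|x) = (1/S) sum_s q(y|z_s), z_s ~ p(z|x), by averaging
    # softmax outputs over S posterior samples; each forward pass resamples z.
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x)[0], dim=-1)
                             for _ in range(num_samples)])
    return probs.mean(dim=0)
```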
We see several interesting properties in Figure 1(a). First, we notice that the error rate shoots up once β rises above the critical value of β ≈ 10^{-2}. This corresponds to a setting where the mutual information between X and Z is less than log_2(10) bits, so the model can no longer represent the fact that there are 10 different classes. Second, we notice that, for small values of β, the test error is higher than the training error, which indicates that we are overfitting. This is because the network learns to be more deterministic, forcing σ ≈ 0, thus reducing the benefits of regularization. Third, we notice that for intermediate values of β, Monte Carlo averaging helps. Interestingly, the region with the best performance roughly corresponds to where the added benefit from stochastic averaging goes away, suggesting an avenue by which one could try to optimize β using purely statistics on the training set without a validation set. We have not extensively studied this possibility yet.

In Figure 1(c), we plot the IB curve, i.e., we plot I(Z; Y) vs I(Z; X) as we vary β. As we allow more information from the input through to the bottleneck (by lowering β), we increase the mutual information between our embedding and the label on the training set, but not necessarily on the test set, as is evident from the plot.

In Figure 1(d) we plot the second term in our objective, the upper bound on the mutual information between the images X and our stochastic encoding Z, which in our case is simply the relative entropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is a logarithmic one. This demonstrates that our best results (when β is between 10^{-3} and 10^{-2}) occur where the mutual information between the stochastic encoding and the images is on the order of 10 to 100 bits.

[Figure 1 here: four panels. (a), (b) error vs β with train/test curves for 1-shot and averaged evaluation; (c) I(Z; Y) vs I(Z; X); (d) I(Z; X) vs β.]

Figure 1: Results of VIB model on MNIST. (a) Error rate vs β for K = 256 on train and test set. "1 shot eval" means a single posterior sample of z, "avg eval" means 12 Monte Carlo samples. The spike in the error rate at β ≈ 10^{-2} corresponds to a model that is too highly regularized. Plotted values are the average over 5 independent training runs at each β. Error bars show the standard deviation in the results. (b) Same as (a), but for K = 2. Performance is much worse, since we pass through a very narrow bottleneck. (c) I(Z; Y) vs I(Z; X) as we vary β for K = 256. We see that increasing I(Z; X) helps training set performance, but can result in overfitting. (d) I(Z; X) vs β for K = 256. We see that for a good value of β, such as 10^{-2}, we only need to store about 10 bits of information about the input.

4.1.2 TWO DIMENSIONAL EMBEDDING

To better understand the behavior of our method, we refit our model to MNIST using a K = 2 dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean and the Cholesky decomposition of the covariance matrix.) Figure 1(b) shows that, not surprisingly, the classification performance is worse (note the differently scaled axes), but the overall trends are the same as in the K = 256 dimensional case. The IB curve (not shown) also has a similar shape to before, except now the gap between training and testing is even larger.
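One standard way to realize the full-covariance parameterization just mentioned is to have the network predict a lower-triangular Cholesky factor; the sketch below shows one such construction (the softplus on the diagonal and the function name are our assumptions, not details from the paper).

```python
import torch
import torch.nn.functional as F

def full_cov_gaussian(net_out, K=2):
    # net_out has K entries for the mean plus K*(K+1)/2 entries for a
    # lower-triangular factor L, so that Sigma = L L^T is positive definite.
    mu = net_out[:, :K]
    L = net_out.new_zeros(net_out.shape[0], K, K)
    rows, cols = torch.tril_indices(K, K)
    L[:, rows, cols] = net_out[:, K:]
    diag = torch.arange(K)
    # A positive diagonal makes L a valid Cholesky factor.
    L[:, diag, diag] = F.softplus(L[:, diag, diag])
    return torch.distributions.MultivariateNormal(mu, scale_tril=L)
```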
Figure 2 provides a visualization of what the network is doing. We plot the posteriors p(z|x) as a 2d Gaussian ellipse (representing the 95% confidence region) for 1000 images from the test set. Colors correspond to the true class labels. In the background of each plot is the entropy of the variational classifier q(y|z) evaluated at that point.

[Figure 2 here: three scatter plots of 2d embeddings. (a) β = 10^{-3}, err_mc = 3.18%, err_1 = 3.24%. (b) β = 10^{-1}, err_mc = 3.44%, err_1 = 4.32%. (c) β = 10^{0}, err_mc = 33.82%, err_1 = 62.81%.]

Figure 2: Visualizing embeddings of 1000 test images in two dimensions. We plot the 95% confidence interval of the Gaussian embedding p(z|x) = N(μ, Σ) as an ellipse. The images are colored according to their true class label. The background greyscale image denotes the entropy of the variational classifier evaluated at each two dimensional location. As β becomes larger, we forget more about the input and the embeddings start to overlap to such a degree that the classes become indistinguishable. We also report the test error using a single sample, err_1, and using 12 Monte Carlo samples, err_mc. For "good" values of β, a single sample suffices.

We see several interesting properties. First, as β increases (so we pass less information through), the embedding covariances increase in relation to the distance between samples, and the classes start to overlap. Second, once β passes a critical value, the encoding "collapses", and essentially all the class information is lost. Third, there is a fair amount of uncertainty in the class predictions (q(y|z)) in the areas between the class embeddings. Fourth, for intermediate values of β (say 10^{-1} in Figure 2(b)), predictive performance is still good, even though there is a lot of uncertainty about where any individual image will map to in comparison to other images in the same class. This means it would be difficult for an outside agent to infer which particular instance the model is representing, a property which we will explore more in the following sections.

4.2 BEHAVIOR ON ADVERSARIAL EXAMPLES

Szegedy et al. (2013) was the first work to show that deep neural networks (and other kinds of classifiers) can be easily "fooled" into making mistakes by changing their inputs by imperceptibly small amounts. In this section, we will show how training with the VIB objective makes models significantly more robust to such adversarial examples.

4.2.1 TYPES OF ADVERSARIES

Since the initial work by Szegedy et al. (2013) and Goodfellow et al. (2014), many different adversaries have been proposed. Most attacks fall into three broad categories: optimization-based attacks (Szegedy et al., 2013; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2016; Papernot et al., 2015; Robinson & Graham, 2015; Sabour et al., 2016), which directly run an optimizer such as L-BFGS or Adam (Kingma & Ba, 2015) on image pixels to find a minimal perturbation that changes the model's classification; single-step gradient-based attacks (Goodfellow et al., 2014; Kurakin et al., 2016; Huang et al., 2015), which choose a gradient direction of the image pixels at some loss and then take a single step in that direction; and iterative gradient-based attacks (Kurakin et al., 2016), which take multiple small steps along the gradient direction of the image pixels at some loss, recomputing the gradient direction after each step.[6]

[6] There are also other adversaries that don't fall as cleanly into those categories, such as "fooling images" from Nguyen et al. (2014), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating adversaries by stochastic search for images near the decision boundary of multiple networks from Baluja et al. (2015).

Many adversaries can be formalized as either untargeted or targeted variants. An untargeted adversary can be defined as A(X, M) → X′, where A(·) is the adversarial function, X is the input image, X′ is the adversarial example, and M is the target model. A is considered successful if M(X) ≠ M(X′).
Recently, Moosavi-Dezfooli et al. (2016) showed how to create a "universal" adversarial perturbation δ that can be added to any image X in order to make M(X + δ) ≠ M(X) for a particular target model.

A targeted adversary can be defined as A(X, M, l) → X′, where l is an additional target label, and A is only considered successful if M(X′) = l.[7] Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just "nudge" the input across the nearest decision boundary, but instead must force it into a desired decision region.

[7] Sabour et al. (2016) proposes a variant of the targeted attack, A(X_S, M, X_T, k) → X′_S, where X_S is the source image, X_T is a target image, and k is a target layer in the model M. A produces X′_S by minimizing the difference in activations of M at layer k between X_T and X′_S. The end result of this attack for a classification network is still that M(X′_S) yields a target label implicitly specified by X_T in a successful attack.

In this work, we focus on the Fast Gradient Sign (FGS) method proposed in Goodfellow et al. (2014) and the L2 optimization method proposed in Carlini & Wagner (2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow et al. (2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with ε = 0.25, where ε is the magnitude of the perturbation at each pixel. The L2 optimization method has been shown to generate adversarial examples with smaller perturbations than any other method published to date, which were capable of fooling the target network 100% of the time. We consider both targeted attacks and untargeted attacks for the L2 optimization method.[8]

[8] Carlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and maximum C value (see their paper for details).

4.2.2 ADVERSARIAL ROBUSTNESS

There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set, called adversarial examples.

It is also important to have a measure of the magnitude of the adversarial perturbation. Since adversaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure, it is common to compute the size of the perturbation using L0, L1, L2, and L∞ norms (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Sabour et al., 2016). In particular, the L0 norm measures the number of perturbed pixels, the L2 norm measures the Euclidean distance between X and X′, and the L∞ norm measures the largest single change to any pixel.

4.2.3 EXPERIMENTAL SETUP

We used the same model architectures as in Section 4.1, using a K = 256 bottleneck. The architectures included a deterministic (base) model trained by MLE; a deterministic model trained with dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VIB for various values of β.

For the VIB models, we use 12 posterior samples of Z to compute the class label distribution p(y|x). This helps ensure that the adversaries can get a consistent gradient when constructing the perturbation, and that they can get a consistent evaluation when checking if the perturbation was successful (i.e., it reduces the chance that the adversary "gets lucky" in its perturbation due to an atypical sample). We also ran the VIB models in "mean mode", where the σs are forced to be 0. This had no noticeable impact on the results, so all reported results are for stochastic evaluation with 12 samples.
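A minimal sketch of the untargeted FGS step described above (the clamp of pixels to [0, 1] and the function name are our assumptions):

```python
import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, epsilon):
    # Fast Gradient Sign: one step of size epsilon in the direction of the
    # sign of the input gradient of the loss (Goodfellow et al., 2014).
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```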
4.2.4 MNIST RESULTS AND DISCUSSION

We selected the first 10 zeros in the MNIST test set, and use the L2 optimization adversary of Carlini & Wagner (2016) to try to perturb those zeros into ones.[9] Some sample results are shown in Figure 3. We see that the deterministic models are easily fooled by making small perturbations, but for the VIB models with reasonably large β, the adversary often fails to find an attack (indicated by the green borders) within the permitted number of iterations. Furthermore, when an attack is successful, it needs to be much larger for the VIB models. To quantify this, Figure 4 plots the magnitude of the perturbation (relative to that of the deterministic and dropout models) needed for a successful attack as a function of β. As β increases, the L0 norm of the perturbation decreases, but both L2 and L∞ norms increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation.

[9] We chose this pair of labels since intuitively zeros and ones are the digits that are least similar in terms of human perception, so if the adversary can change a zero into a one without much human-noticeable perturbation, it is unlikely that the model has learned a representation similar to what humans learn.

Figure 5 plots the accuracy on FGS adversarial examples of the first 1000 images from the MNIST test set as a function of β. Each point in the plot corresponds to 3 separate executions of three different models trained with the same value of β. All models tested achieve over 98.4% accuracy on the unperturbed MNIST test set, so there is no appreciable measurement distortion due to underlying model accuracy.

Figure 6 plots the accuracy on L2 optimization adversarial examples of the first 1000 images from the MNIST test set as a function of β. The same sets of three models per β were tested three times, as with the FGS adversarial examples.

We generated both untargeted and targeted adversarial examples for Figure 6. For targeting, we generate a random target label different from the source label in order to avoid biasing the results with unevenly explored source/target pairs. We see that for a reasonably broad range of β values, the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of 0% (the L2 optimization attack is very effective on traditional model architectures).

Figure 6 also reveals a surprising level of adversarial robustness even when β → 0. This can be explained by the theoretical framework of Fawzi et al. (2016). Their work proves that quadratic classifiers (e.g., x^T A x, symmetric A) have a greater capacity for adversarial robustness than linear classifiers. As we show in Appendix C, our Gaussian/softmax encoder/decoder is approximately quadratic for all β < 1.
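For reference, a minimal sketch of the three perturbation-size measures used throughout this section (the nonzero tolerance is our assumption):

```python
import torch

def perturbation_norms(x, x_adv, tol=1e-8):
    # L0 = number of changed pixels, L2 = Euclidean distance between X and
    # X', Linf = largest single change to any pixel, as defined in 4.2.2.
    delta = (x_adv - x).flatten()
    l0 = (delta.abs() > tol).sum().item()
    l2 = delta.norm(p=2).item()
    linf = delta.abs().max().item()
    return l0, l2, linf
```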
4.2.5 IMAGENET RESULTS AND DISCUSSION

VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate if VIB offers similar advantages for ImageNet, a more challenging natural image classification task. Recall that ImageNet has approximately 1M images spanning 1K classes. We preprocess images such that they are 299x299 pixels.

Architecture

We make use of publicly available, pretrained checkpoints[10] of Inception Resnet V2 (Szegedy et al., 2016) on ImageNet (Deng et al., 2009). The checkpoint obtains 80.4% classification accuracy on the ImageNet validation set. Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer. This new image representation has 1536 dimensions. The higher layers of the network continue to classify this representation with 80.4% accuracy; conditioned on this extraction, the classification model is simply logistic regression. To further speed training, we whitened the 1536 dimensional representation.

[10] Available at the Tensorflow Models repository in the Slim directory: https://github.com/tensorflow/models/tree/master/slim

[Figure 3 here: a grid of digit images; columns are Orig, Det, Dropout, and VIB with β = 0, 10^{-10}, 10^{-8}, 10^{-6}, 10^{-4}, 10^{-3}, 10^{-2}.]

Figure 3: The adversary is trying to force each 0 to be classified as a 1. Successful attacks have a red background. Unsuccessful attacks have a green background. In the case that the label is changed to an incorrect label different from the target label (i.e., the classifier outputs something other than 0 or 1), the background is purple. The first column is the original image. The second column is adversarial examples targeting our deterministic baseline model. The third column is adversarial examples targeting our dropout model. The remaining columns are adversarial examples targeting our VIB models for different β.

[Figure 4 here: two panels plotting relative L0, L2, and L∞ perturbation magnitudes vs β for the targeted L2 optimization attack (0 → 1).]

Figure 4: (a) Relative magnitude of the adversarial perturbation, measured using L0, L2, and L∞ norms, for the images in Figure 3 as a function of β. (We normalize all values by the corresponding norm of the perturbation against the base model.) As β increases, L0 decreases, but both L2 and L∞ increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation. (b) Same as (a), but with the dropout model as the baseline. Dropout is more robust to the adversarial perturbations than the base deterministic model, but still performs much worse than the VIB model as β increases.
[Figure 5 here: two panels plotting relative accuracy on FGS adversarial examples vs β, for ε ∈ {0.350, 0.400, 0.450, 0.500}.]

Figure 5: Classification accuracy of VIB classifiers, divided by accuracy of baseline classifiers, on FGS-generated adversarial examples as a function of β. Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when β = 0 (not shown), the VIB model's performance is almost identical to when β = 10^{-8}. (a) FGS accuracy normalized by the base deterministic model performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when ε = 0.5 to about 5% when ε = 0.35. (b) Same as (a), but with the dropout model as the baseline. The dropout model is more robust than the base model, but less robust than VIB, particularly for stronger adversaries (i.e., larger values of ε). The dropout model's accuracy on the adversarial examples ranges from about 5% when ε = 0.5 to about 16% when ε = 0.35. As in the other results, relative performance is more dramatic as β increases, which seems to indicate that the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.

[Figure 6 here: accuracy on L2 adversarial examples vs β, with one targeted and one untargeted curve.]

Figure 6: Classification accuracy (from 0 to 1) on L2 adversarial examples (of all classes) as a function of β. The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, β = 10^{-11} has performance indistinguishable from β = 0. The deterministic model and dropout model both have a classification accuracy of 0% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary reported in Carlini & Wagner (2016) on a convolutional network trained on MNIST.

[Figure 7 here: four panels (a)-(d) of ImageNet images.]

Figure 7: The results of our ImageNet targeted L2 optimization attack. In all cases we target a new label of 222 ("soccer ball"). Figure (a) shows the 30 images from the first 40 images in the ImageNet validation set that the VIB network classifies correctly. The class label is shown in green on each image. The predicted label and targeted label are shown in red. Figure (b) shows adversarial examples of the same images generated by attacking our VIB network with β = 0.01. While all of the attacks change the classification of the image, in 13 out of 30 examples the attack fails to hit the intended target class ("soccer ball"). Pink crosses denote cases where the attack failed to force the model to misclassify the image as a soccer ball. Figure (c) shows the same result but for our deterministic baseline operating on the whitened precomputed features. The attack always succeeds. Figure (d) is the same but for the original full Inception ResNet V2 network without modification. The attack always succeeds. There are slight variations in the set of adversarial examples shown for each network because we limited the adversarial search to correctly classified images. In the case of the deterministic baseline and original Inception ResNet V2 network, the perturbations are hardly noticeable in the perturbed images, but in many instances, the perturbations for the VIB network can be perceived.
[Figure 8 here: three blocks of absolute-difference images, one per network.]

Figure 8: Shown are the absolute differences between the original and final perturbed images for all three networks. The left block shows the perturbations created while targeting the VIB network. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to emphasize the difference in the magnitude of the perturbations. The VIB network required much larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13 of those cases.

Under this transformation, the experiment regime is identical to the permutation invariant MNIST task. We therefore used a similar model architecture. Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is characterized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier, itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used in MNIST; more details are in Appendix A.

Classification

We see the same favorable VIB classification performance in ImageNet as in MNIST. By varying β, the estimated mutual information between encoding and image (I(Z; X)) varies as well. At large values of β accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a β = 0 regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal hyperparameters.

Overall the best accuracy we achieved was using β = 0.01. Under this setting we saw an accuracy of 80.12%, nearly the same as the state-of-the-art unmodified network, but with a substantially smaller information footprint, only I(X; Z) ≈ 45 bits. This is a surprisingly small amount of information; β = 0 implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network, but without the VIB loss and with a 1024-unit fully connected linear layer instead of the stochastic embedding, similarly only achieved 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameter settings or inadequate training.

Considering a continuum of β and a deterministic baseline, the best classification accuracy was achieved with β = 0.01 ∈ (0, 1). In other words, VIB offered an accuracy benefit yet used a mere 45 bits of information from each image.
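As a note on units (our own clarification): the I(Z; X) figures quoted here come from the KL term of Equation 17, which upper-bounds I(Z; X) in nats (Equation 14), so reporting bits just divides by ln 2. A minimal sketch, with the input name `kl_per_example` assumed:

```python
import math

def info_bound_bits(kl_per_example):
    # The average KL[p(z|x_n) || r(z)] over the dataset upper-bounds
    # I(Z;X) in nats; divide by ln 2 to express it in bits.
    return (sum(kl_per_example) / len(kl_per_example)) / math.log(2)
```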
Adversarial Robustness

We next show that the VIB-trained network improves resistance to adversarial attack. We focus on the Carlini targeted L2 attack (see Section 4.2.1). We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the original pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted L2 optimization attack in both magnitude of perturbation and frequency of successful attack.

Figure 7 shows some example images which were all misclassified as "soccer balls" by the deterministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in being mislabeled as the target label.[11] We find that the VIB model can resist about 43.3% of the attacks, but the deterministic models always fail (i.e., always misclassify into the targeted label). Figure 8 shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure 7. We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table 2.

| Metric | Determ | IRv2 | VIB(0.01) |
| --- | --- | --- | --- |
| Successful target | 1.0 | 1.0 | 0.567 |
| L2 | 6.45 | 14.43 | 43.27 |
| L∞ | 0.18 | 0.44 | 0.92 |

Table 2: Quantitative results showing how the different Inception Resnet V2-based architectures (described in Section 4.2.5) respond to targeted L2 adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception Resnet V2 architecture, and VIB(0.01) is the VIB architecture with β = 0.01. Successful target is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball). Lower is better. L2 and L∞ are the average L distances between the original images and the adversarial examples. Larger values mean the adversary had to make a larger perturbation to change the class.

5 FUTURE DIRECTIONS

There are many possible directions for future work, including: putting the VIB objective at multiple or every layer of a network; testing on real images; using richer parametric marginal approximations, rather than assuming r(z) = N(0, I); exploring the connections to differential privacy (see e.g., Wang et al. (2016a); Cuff & Yu (2016)); and investigating open universe classification problems (see e.g., Bendale & Boult (2015)). In addition, we would like to explore applications to sequence prediction, where X denotes the past of the sequence and Y the future, while Z is the current representation of the network. This form of the information bottleneck is known as predictive information (Bialek et al., 2001; Palmer et al., 2015).
rJ_3n_ZNx
Great idea, lacking empirical section
6: Marginally above acceptance threshold
Summary: The paper “Deep Variational Information Bottleneck” explores the optimization of neural networks for variational approximations of the information bottleneck (IB; Tishby et al., 1999). On the example of MNIST, the authors show that this may be used for regularization or to improve robustness against adversarial attacks. Review: The IB is potentially very useful for important applications (regularization, adversarial robustness, and privacy are mentioned in the paper). Combining the IB with recent advances in deep learning to make it more widely applicable is an excellent idea. But given that the theoretical contribution is a fairly straight-forward application of well-known ideas, I would have liked to see a stronger experimental section. Since the proposed approach allows us to scale IB, a better demonstration of this would have been on a larger problem than MNIST. It is also not clear whether the proposed approach will still work well to regularize more interesting networks with many layers. Why is dropout not included in the quantitative comparison of robustness to adversarial examples (Figure 4)? How was the number of samples (12) chosen? What are the error bars in Figure 1 (a)? On page 7 the authors claim “the posterior covariance becomes larger” as beta “decreases” (increases?). Is this really the case? It’s hard to judge based on Figure 1, since the figures are differently scaled. It might be worth comparing to variational fair autoencoders (Louizos et al., 2016), which also try to learn representations minimizing the information shared with an aspect of the input. The paper is well written and easy to follow.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyxQzBceg
ICLR.cc/2017/conference
2017
Deep Variational Information Bottleneck
["Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy"]
We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method “Deep Variational Information Bottleneck”, or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
["Theory", "Computer vision", "Deep learning", "Supervised Learning"]
ABSTRACT

We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.

1 INTRODUCTION

We adopt an information theoretic view of deep networks. We regard the internal representation of some intermediate layer as a stochastic encoding Z of the input source X, defined by a parametric encoder p(z|x; θ).[1] Our goal is to learn an encoding that is maximally informative about our target Y, measured by the mutual information between our encoding and the target, I(Z, Y; θ), where

I(Z, Y; θ) = \int dz \, dy \, p(z, y|θ) \log \frac{p(z, y|θ)}{p(z|θ) p(y|θ)}.[2]  (1)

Given the data processing inequality, and the invariance of the mutual information to reparameterizations, if this was our only objective we could always ensure a maximally informative representation by taking the identity encoding of our data (Z = X), but this is not a useful representation of our data. Instead we would like to find the best representation we can obtain subject to a constraint on its complexity. A natural and useful constraint to apply is on the mutual information between our encoding and the original data, I(X, Z) ≤ I_c, where I_c is the information constraint. This suggests the objective:

\max_θ I(Z, Y; θ) \quad \text{s.t.} \quad I(X, Z; θ) \leq I_c.  (2)

Equivalently, with the introduction of a Lagrange multiplier β, we can maximize the objective function

R_{IB}(θ) = I(Z, Y; θ) - β I(Z, X; θ).  (3)

Here our goal is to learn an encoding Z that is maximally expressive about Y while being maximally compressive about X, where β ≥ 0 controls the tradeoff.[3] This approach is known as the information bottleneck (IB), and was first proposed in Tishby et al. (1999). Intuitively, the first term in R_IB encourages Z to be predictive of Y; the second term encourages Z to "forget" X. Essentially it forces Z to act like a minimal sufficient statistic of X for predicting Y.

[1] In this work, X, Y, Z are random variables; x, y, z (and their boldface versions) are instances of random variables; and F(·; θ) and f(·; θ) are functionals or functions parameterized by θ.
[2] Note that in the present discussion, Y is the ground truth label, which is independent of our parameters θ, so p(y|θ) = p(y).
[3] Note that, in our notation, large β results in a highly compressed representation. In some works, the IB principle is formulated as the minimization of I(Z, X) - β I(Z, Y), in which case large β corresponds to high mutual information between Z and Y, and hence low compression.

The IB principle is appealing, since it defines what we mean by a good representation, in terms of the fundamental tradeoff between having a concise representation and one with good predictive power (Tishby & Zaslavsky, 2015a). The main drawback of the IB principle is that computing mutual information is, in general, computationally challenging. There are two notable exceptions: the first is when X, Y and Z are all discrete, as in Tishby et al. (1999); this can be used to cluster discrete data, such as words. The second case is when X, Y and Z are all jointly Gaussian (Chechik et al., 2005). However, these assumptions both severely constrain the class of learnable models.

In this paper, we propose to use variational inference to construct a lower bound on the IB objective in Equation 3.
We call the resulting method VIB (variational information bottleneck). By using thereparameterization trick (Kingma & Welling, 2014), we can use Monte Carlo sampling to get anunbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradientdescent. This allows us to use deep neural networks to parameterize our distributions, and thus tohandle high-dimensional, continuous data, such as images, avoiding the previous restrictions to thediscrete or Gaussian cases.We also show, by a series of experiments, that stochastic neural networks, fit using our VIB method,are robust to overfitting, since VIB finds a representation Zwhich ignores as many details of theinputXas possible. In addition, they are more robust to adversarial inputs than deterministic modelswhich are fit using (penalized) maximum likelihood estimation. Intuitively this is because each inputimage gets mapped to a distribution rather than a unique Z, so it is more difficult to pass small,idiosyncratic perturbations through the latent bottleneck.2 R ELATED WORKThe idea of using information theoretic objectives for deep neural networks was pointed out inTishby & Zaslavsky (2015b). However, they did not include any experimental results, since theirapproach for optimizing the IB objective relied on the iterative Blahut Arimoto algorithm, which isinfeasible to apply to deep neural networks.Variational inference is a natural way to approximate the problem. Variational bounds on mutualinformation have previously been explored in Agakov (2004), though not in conjunction with theinformation bottleneck objective. Mohamed & Rezende (2015) also explore variational bounds onmutual information, and apply them to deep neural networks, but in the context of reinforcementlearning. We recently discovered Chalk et al. (2016), who independently developed the same varia-tional lower bound on the IB objective as us. However, they apply it to sparse coding problems, anduse the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks,which are computationally more efficient. In addition, we are able to handle large datasets by usingstochastic gradient descent, whereas they use batch variational EM.In the supervised learning literature, our work is related to the recently proposed confidence penalty(entropy regularization) method of (Pereyra et al., 2016). In this work, they fit a deterministicnetwork by optimizing an objective that combines the usual cross entropy loss with an extra termwhich penalizes models for having low entropy predictive distributions. In more detail, their costfunction has the formJCP=1NNXn=1[H(p(yjyn);p(yjxn))H(p(yjxn))] (4)whereH(p;q) =Pyp(y) logq(y)is the cross entropy, H(p) =H(p;p)is the entropy,p(yjyn) =yn(y)is a one-hot encoding of the label yn, andNis the number of training exam-ples. (Note that setting = 0corresponds to the usual maximum likelihood estimate.) In (Pereyraet al., 2016) they show that CP performs better than the simpler technique of label smoothing, inwhich we replace the zeros in the one-hot encoding of the labels by >0, and then renormalizeso that the distribution still sums to one. We will compare our VIB method to both the confidencepenalty method and label smoothing in Section 4.1.In the unsupervised learning literature, our work is closely related to the work in Kingma & Welling(2014) on variational autoencoders. 
In fact, their method is a special case of an unsupervised versionof the VIB, but with the parameter fixed at 1.0, as we explain in Appendix B. The V AE objective,but with different values of , was also explored in Higgins et al. (2016), but from a differentperspective.The method of Wang et al. (2016b) proposes a latent variable generative model of both xandy;their variational lower bound is closely related to ours, with the following differences. First, we do2Published as a conference paper at ICLR 2017not have a likelihood term for x, since we are in the discriminative setting. Second, they fix = 1,since they do not consider compression.Finally, the variational fair autoencoder of Louizos et al. (2016) shares with our paper the idea ofignoring parts of the input. However, in their approach, the user must specify which aspects of theinput (the so-called “sensitive” parts) to ignore, whereas in our method, we can discover irrelevantparts of the input automatically.3 M ETHODFollowing standard practice in the IB literature, we assume that the joint distribution p(X;Y;Z )factors as follows:p(X;Y;Z ) =p(ZjX;Y )p(YjX)p(X) =p(ZjX)p(YjX)p(X) (5)i.e., we assume p(ZjX;Y ) =p(ZjX), corresponding to the Markov chain Y$X$Z. Thisrestriction means that our representation Zcannot depend directly on the labels Y. (This opensthe door to unsupervised representation learning, which we will discuss in Appendix B.) Besidesthe structure in the joint data distribution p(X;Y ), the only content at this point is our model forthe stochastic encoder p(ZjX), all other distributions are fully determined by these and the Markovchain constraint.Recall that the IB objective has the form I(Z;Y)I(Z;X). We will examine each of theseexpressions in turn. Let us start with I(Z;Y). Writing it out in full, this becomesI(Z;Y) =Zdydzp (y;z) logp(y;z)p(y)p(z)=Zdydzp (y;z) logp(yjz)p(y): (6)wherep(yjz)is fully defined by our encoder and Markov Chain as follows:p(yjz) =Zdxp(x;yjz) =Zdxp(yjx)p(xjz) =Zdxp(yjx)p(zjx)p(x)p(z): (7)Since this is intractable in our case, let q(yjz)be a variational approximation to p(yjz). This is ourdecoder, which we will take to be another neural network with its own set of parameters. Using thefact that the Kullback Leibler divergence is always positive, we haveKL[p(YjZ);q(YjZ)]0 =)Zdyp(yjz) logp(yjz)Zdyp(yjz) logq(yjz); (8)and henceI(Z;Y)Zdydzp (y;z) logq(yjz)p(y)(9)=Zdydzp (y;z) logq(yjz)Zdyp(y) logp(y) (10)=Zdydzp (y;z) logq(yjz) +H(Y): (11)Notice that the entropy of our labels H(Y)is independent of our optimization procedure and so canbe ignored.Focusing on the first term in Equation 11, we can rewrite p(y;z)asp(y;z) =Rdxp(x;y;z ) =Rdxp(x)p(yjx)p(zjx)(leveraging our Markov assumption), which gives us a new lower bound onthe first term of our objective:I(Z;Y)Zdxdydzp (x)p(yjx)p(zjx) logq(yjz): (12)This only requires samples from both our joint data distribution as well as samples from our stochas-tic encoder, while it requires we have access to a tractable variational approximation in q(yjz).We now consider the term I(Z;X):I(Z;X) =Zdzdxp (x;z) logp(zjx)p(z)=Zdzdxp (x;z) logp(zjx)Zdzp(z) logp(z):(13)3Published as a conference paper at ICLR 2017In general, while it is fully defined, computing the marginal distribution of Z,p(z) = Rdxp(zjx)p(x), might be difficult. 
So let r(z)be a variational approximation to this marginal.Since KL[p(Z);r(Z)]0 =)Rdzp(z) logp(z)Rdzp(z) logr(z), we have the followingupper bound:I(Z;X)Zdxdzp (x)p(zjx) logp(zjx)r(z): (14)Combining both of these bounds we have thatI(Z;Y)I(Z;X)Zdxdydzp (x)p(yjx)p(zjx) logq(yjz)Zdxdzp (x)p(zjx) logp(zjx)r(z)=L: (15)We now discuss how to compute the lower bound Lin practice. We can approximate p(x;y) =p(x)p(yjx)using the empirical data distribution p(x;y) =1NPNn=1xn(x)yn(y), and hence wecan writeL1NNXn=1Zdzp(zjxn) logq(ynjz)p(zjxn) logp(zjxn)r(z): (16)Suppose we use an encoder of the form p(zjx) =N(zjfe(x);fe(x)), wherefeis an MLP whichoutputs both the K-dimensional mean ofzas well as the KKcovariance matrix . Then wecan use the reparameterization trick (Kingma & Welling, 2014) to write p(zjx)dz=p()d, wherez=f(x;)is a deterministic function of xand the Gaussian random variable . This formulationhas the important advantage that the noise term is independent of the parameters of the model, so itis easy to take gradients.Assuming our choice of p(zjx)andr(z)allows computation of an analytic Kullback-Leibler di-vergence, we can put everything together to get the following objective function, which we try tominimize:JIB=1NNXn=1Ep()[logq(ynjf(xn;))] +KL [p(Zjxn);r(Z)]: (17)As in Kingma & Welling (2014), this formulation allows us to directly backpropagate through asingle sample of our stochastic code and ensure that our gradient is an unbiased estimate of the trueexpected gradient.44 E XPERIMENTAL RESULTSIn this section, we present various experimental results, comparing the behavior of standard deter-ministic networks to stochastic neural networks trained by optimizing the VIB objective.4.1 B EHAVIOR ON MNISTWe start with experiments on unmodified MNIST (i.e. no data augmentation). In order to pick amodel with some “headroom” to improve, we decided to use the same architecture as in the (Pereyraet al., 2016) paper, namely an MLP with fully connected layers of the form 784 - 1024 - 1024- 10, and ReLu activations. (Since we are not exploiting spatial information, this correpsonds tothe “permutation invariant” version of MNIST.) The performance of this baseline is 1.38% error.(Pereyra et al., 2016) were able to improve this to 1.17% using their regularization technique. Wewere able to improve this to 1.13% using our technique, as we explain below.In our method, the stochastic encoder has the form p(zjx) =N(zjfe(x);fe(x)), wherefeis anMLP of the form 784102410242K, whereKis the size of the bottleneck. The first Koutputs from feencode, the remaining Koutputs encode (after a softplus transform).4Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we couldsimilarly reparameterize through a sample of the divergence (Kingma & Welling, 2014; Blundell et al., 2015).4Published as a conference paper at ICLR 2017Model errorBaseline 1.38%Dropout 1.34%Dropout (Pereyra et al., 2016) 1.40%Confidence Penalty 1.36%Confidence Penalty (Pereyra et al., 2016) 1.17%Label Smoothing 1.40%Label Smoothing (Pereyra et al., 2016) 1.23%VIB (= 103)1.13%Table 1: Test set misclassification rate on permutation-invariant MNIST using K= 256 . 
We com-pare our method (VIB) to an equivalent deterministic model using various forms of regularization.The discrepancy between our results for confidence penalty and label smoothing and the numbersreported in (Pereyra et al., 2016) are due to slightly different hyperparameters.The decoder is a simple logistic regression model of the form q(yjz) =S(yjfd(z)), whereS(a) =[exp(ac)=PCc0=1exp(ac0)]is the softmax function, and fd(z) =Wz+bmaps theKdimensionallatent code to the logits of the C= 10 classes. (In later sections, we consider more complexdecoders, but here we wanted to show the benefits of VIB in a simple setting.)Finally, we treat r(z)as a fixedK-dimensional spherical Gaussian, r(z) =N(zj0;I).We compare our method to the baseline MLP. We calso consider the following deterministic limitof our model, when = 0. In this case, we obtain the following objective function:JIB0=1NNXn=1EzN(fe(xn);fe(xn))[logS(ynjfd(z)] (18)When!0, we observe the VIB optimization process tends to make fe(x)!0, so the networkbecomes nearly deterministic. In our experiments we also train an explicitly deterministic modelthat has the same form as the stochastic model, except that we just use z=fe(x)as the hiddenencoding, and drop the Gaussian layer.4.1.1 H IGHER DIMENSIONAL EMBEDDINGTo demonstrate that our VIB method can achieve competitive classification results, we comparedagainst a deterministic MLP trained with various forms of regularization. We use a K= 256dimensional bottleneck and a diagonal Gaussian for p(zjx). The networks were trained using Ten-sorFlow for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of0.0001. Full hyperparameter details can be found in Appendix A.The results are shown in Table 1. We see that we can slightly outperform other forms of regulariza-tion that have been proposed in the literature while using the same network for each. Of course, theperformance varies depending on . These results are not state of the art, nor is our main focus ofour work to suggest that VIB is the best regularization method by itself, which would require muchmore experimentation. However, using the same architecture for each experiment and comparingto VIB as the only source of regularization suggests VIB works as a decent regularizer in and ofitself. Figure 1(a) plots the train and test error vs , averaged over 5 trials (with error bars) for thecase where we use a single Monte Carlo sample of zwhen predicting, and also for the case wherewe average over 12 posterior samples (i.e., we use p(yjx) =1SPSs=1q(yjzs)forzsp(zjx),whereS= 12 ). In our own investigations, a dozen samples seemed to be sufficient to capture anyadditional benefit the stochastic evaluations had to offer in this experiment5.We see several interesting properties in Figure 1(a). First, we notice that the error rate shoots uponcerises above the critical value of 102. This corresponds to a setting where the mutualinformation between XandZis less than log2(10) bits, so the model can no longer represent thefact that there are 10 different classes. Second, we notice that, for small values of , the test error5A dozen samples wasn’t chosen for any particular reason, except the old addage that a dozen samples aresufficient, as mirrored in David MacKay’s book (MacKay, 2003). They proved sufficient in this case.5Published as a conference paper at ICLR 2017is higher than the training error, which indicates that we are overfitting. 
This is because the networklearns to be more deterministic, forcing 0, thus reducing the benefits of regularization. Third,we notice that for intermediate values of , Monte Carlo averaging helps. Interestingly, the regionwith the best performance roughly corresponds to where the added benefit from stochastic averaginggoes away, suggesting an avenue by which one could try to optimize using purely statistics on thetraining set without a validation set. We have not extensively studied this possibility yet.In Figure 1(c), we plot the IB curve, i.e., we plot I(Z;Y)vsI(Z;X)as we vary. As we allowmore information from the input through to the bottleneck (by lowering ), we increase the mutualinformation between our embedding and the label on the training set, but not necessarily on the testset, as is evident from the plot.In Figure 1(d) we plot the second term in our objective, the upper bound on the mutual informationbetween the images Xand our stochastic encoding Z, which in our case is simply the relativeentropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is alogarithmic one. This demonstrates that our best results (when is between 103and102) occurwhere the mutual information between the stochastic encoding and the images is on the order of 10to 100 bits.10−910−810−710−610−510−410−310−210−1100101b0.0000.0050.0100.0150.020errortest 1 shot evaltest avg evaltrain 1 shot evaltrain avg eval10−910−810−710−610−510−410−310−210−1100101b0.000.010.020.030.040.05errortest 1 shot evaltest avg evaltrain 1 shot evaltrain avg eval(a) (b)101102103104I(Z,X)2.82.93.03.13.23.3I(Z,Y)traintest10−910−810−710−610−510−410−310−210−1100101b10−310−210−1100101102103I(Z,X)traintest(c) (d)Figure 1: Results of VIB model on MNIST. (a) Error rate vs forK= 256 on train and test set.“1 shot eval” means a single posterior sample of z, “avg eval” means 12 Monte Carlo samples. Thespike in the error rate at 102corresponds to a model that is too highly regularized. Plottedvalues are the average over 5 independent training runs at each . Error bars show the standarddeviation in the results. (b) Same as (a), but for K= 2. Performance is much worse, since we passthrough a very narrow bottleneck. (c) I(Z;Y)vsI(Z;X)as we varyforK= 256 . We see thatincreasingI(Z;X)helps training set performance, but can result in overfitting. (d) I(Z;X)vsforK= 256 . We see that for a good value of , such as 102, we only need to store about 10 bitsof information about the input.4.1.2 T WO DIMENSIONAL EMBEDDINGTo better understand the behavior of our method, we refit our model to MNIST using a K= 2dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean andthe Cholesky decomposition of the covariance matrix.) Figure 1(b) shows that, not surprisingly, theclassification performance is worse (note the different scaled axes), but the overall trends are the6Published as a conference paper at ICLR 2017same as in the K= 256 dimensional case. The IB curve (not shown) also has a similar shape tobefore, except now the gap between training and testing is even larger.Figure 2 provides a visualization of what the network is doing. We plot the posteriors p(zjx)as a 2dGaussian ellipse (representing the 95% confidence region) for 1000 images from the test set. Colorscorrespond to the true class labels. 
In the background of each plot is the entropy of the variationalclassifierq(yjz)evaluated at that point.−15−10−5 0 5 10 15−15−10−5051015(a)= 103, errmc= 3:18% ,err1= 3:24%−4−2 0 2 4−4−2024(b)= 101, errmc= 3:44% ,err1= 4:32%−3−2−1 0 1 2 3−3−2−10123(c)= 100, errmc= 33:82% ,err1= 62:81% .Figure 2: Visualizing embeddings of 1000 test images in two dimensions. We plot the 95% confi-dence interval of the Gaussian embedding p(zjx) =N(;)as an ellipse. The images are coloredaccording to their true class label. The background greyscale image denotes the entropy of the vari-ational classifier evaluated at each two dimensional location. As becomes larger, we forget moreabout the input and the embeddings start to overlap to such a degree that the classes become indis-tinguishable. We also report the test error using a single sample, err 1, and using 12 Monte Carlosamples, err mc. For “good” values of , a single sample suffices.We see several interesting properties. First, as increases (so we pass less information through),the embedding covariances increase in relation to the distance between samples, and the classesstart to overlap. Second, once passes a critical value, the encoding “collapses”, and essentiallyall the class information is lost. Third, there is a fair amount of uncertainty in the class preditions(q(yjz)) in the areas between the class embeddings. Fourth, for intermediate values of (say101in Figure 2(b)), predictive performance is still good, even though there is a lot of uncertainty aboutwhere any individual image will map to in comparison to other images in the same class. This meansit would be difficult for an outside agent to infer which particular instance the model is representing,a property which we will explore more in the following sections.4.2 B EHAVIOR ON ADVERSARIAL EXAMPLESSzegedy et al. (2013) was the first work to show that deep neural networks (and other kinds ofclassifiers) can be easily “fooled” into making mistakes by changing their inputs by imperceptiblysmall amounts. In this section, we will show how training with the VIB objective makes modelssignificantly more robust to such adversarial examples.4.2.1 T YPES OF ADVERSARIESSince the initial work by Szegedy et al. (2013) and Goodfellow et al. (2014), many different adver-saries have been proposed. Most attacks fall into three broad categories: optimization-based attacks(Szegedy et al., 2013; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2016; Papernot et al., 2015;Robinson & Graham, 2015; Sabour et al., 2016), which directly run an optimizer such as L-BFGSor ADAM (Kingma & Ba, 2015) on image pixels to find a minimal perturbation that changes themodel’s classification; single-step gradient-based attacks (Goodfellow et al., 2014; Kurakin et al.,2016; Huang et al., 2015), which choose a gradient direction of the image pixels at some loss andthen take a single step in that direction; and iterative gradient-based attacks (Kurakin et al., 2016),7Published as a conference paper at ICLR 2017which take multiple small steps along the gradient direction of the image pixels at some loss, recom-puting the gradient direction after each step.6Many adversaries can be formalized as either untargeted or targeted variants. An untargeted ad-versary can be defined as A(X;M )!X0, whereA(:)is the adversarial function, Xis the inputimage,X0is the adversarial example, and Mis the target model. Ais considered successful ifM(X)6=M(X0). Recently, Moosavi-Dezfooli et al. 
(2016) showed how to create a "universal" adversarial perturbation δ that can be added to any image X in order to make M(X + δ) ≠ M(X) for a particular target model.

A targeted adversary can be defined as A(X, M, l) → X′, where l is an additional target label, and A is only considered successful if M(X′) = l.⁷ Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just "nudge" the input across the nearest decision boundary, but instead must force it into a desired decision region.

In this work, we focus on the Fast Gradient Sign (FGS) method proposed in Goodfellow et al. (2014) and the L2 optimization method proposed in Carlini & Wagner (2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow et al. (2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with ε = 0.25, where ε is the magnitude of the perturbation at each pixel. The L2 optimization method has been shown to generate adversarial examples with smaller perturbations than any other method published to date, which were capable of fooling the target network 100% of the time. We consider both targeted attacks and untargeted attacks for the L2 optimization method.⁸

4.2.2 ADVERSARIAL ROBUSTNESS

There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set, called adversarial examples.

It is also important to have a measure of the magnitude of the adversarial perturbation. Since adversaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure, it is common to compute the size of the perturbation using L0, L1, L2, and L∞ norms (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Sabour et al., 2016). In particular, the L0 norm measures the number of perturbed pixels, the L2 norm measures the Euclidean distance between X and X′, and the L∞ norm measures the largest single change to any pixel.

4.2.3 EXPERIMENTAL SETUP

We used the same model architectures as in Section 4.1, using a K = 256 bottleneck. The architectures included a deterministic (base) model trained by MLE; a deterministic model trained with dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VIB for various values of β.

For the VIB models, we use 12 posterior samples of Z to compute the class label distribution p(y|x). This helps ensure that the adversaries can get a consistent gradient when constructing the perturbation, and that they can get a consistent evaluation when checking if the perturbation was successful (i.e., it reduces the chance that the adversary "gets lucky" in its perturbation due to an atypical sample).
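For concreteness, here is a minimal sketch of the untargeted FGS baseline described above; the function name and the default ε are our own illustration, not the paper's code.

# A minimal sketch (ours) of the Fast Gradient Sign attack of
# Goodfellow et al. (2014): one signed gradient step on the pixels.
import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, epsilon=0.25):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by +/- epsilon to increase the loss, then clip
    # back to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()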
⁶ There are also other adversaries that don't fall as cleanly into those categories, such as "fooling images" from Nguyen et al. (2014), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating adversaries by stochastic search for images near the decision boundary of multiple networks from Baluja et al. (2015).

⁷ Sabour et al. (2016) proposes a variant of the targeted attack, A(X_S, M, X_T, k) → X′_S, where X_S is the source image, X_T is a target image, and k is a target layer in the model M. A produces X′_S by minimizing the difference in activations of M at layer k between X_T and X′_S. The end result of this attack for a classification network is still that M(X′_S) yields a target label implicitly specified by X_T in a successful attack.

⁸ Carlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and maximum C value (see their paper for details).

We also ran the VIB models in "mean mode", where the σs are forced to be 0. This had no noticeable impact on the results, so all reported results are for stochastic evaluation with 12 samples.

4.2.4 MNIST RESULTS AND DISCUSSION

We selected the first 10 zeros in the MNIST test set, and use the L2 optimization adversary of Carlini & Wagner (2016) to try to perturb those zeros into ones.⁹ Some sample results are shown in Figure 3. We see that the deterministic models are easily fooled by making small perturbations, but for the VIB models with reasonably large β, the adversary often fails to find an attack (indicated by the green borders) within the permitted number of iterations. Furthermore, when an attack is successful, it needs to be much larger for the VIB models. To quantify this, Figure 4 plots the magnitude of the perturbation (relative to that of the deterministic and dropout models) needed for a successful attack as a function of β. As β increases, the L0 norm of the perturbation decreases, but both the L2 and L∞ norms increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation.

Figure 5 plots the accuracy on FGS adversarial examples of the first 1000 images from the MNIST test set as a function of β. Each point in the plot corresponds to 3 separate executions of three different models trained with the same value of β. All models tested achieve over 98.4% accuracy on the unperturbed MNIST test set, so there is no appreciable measurement distortion due to underlying model accuracy.

Figure 6 plots the accuracy on L2 optimization adversarial examples of the first 1000 images from the MNIST test set as a function of β. The same sets of three models per β were tested three times, as with the FGS adversarial examples.

We generated both untargeted and targeted adversarial examples for Figure 6. For targeting, we generate a random target label different from the source label in order to avoid biasing the results with unevenly explored source/target pairs. We see that for a reasonably broad range of β values, the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of 0% (the L2 optimization attack is very effective on traditional model architectures).

Figure 6 also reveals a surprising level of adversarial robustness even when β → 0. This can be explained by the theoretical framework of Fawzi et al. (2016). Their work proves that quadratic classifiers (e.g., xᵀAx with symmetric A) have a greater capacity for adversarial robustness than linear classifiers.
As we show in Appendix C, our Gaussian/softmax encoder/decoder is approximately quadratic for all β < 1.

4.2.5 IMAGENET RESULTS AND DISCUSSION

VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate whether VIB offers similar advantages for ImageNet, a more challenging natural image classification task. Recall that ImageNet has approximately 1M images spanning 1K classes. We preprocess images such that they are 299×299 pixels.

Architecture

We make use of publicly available, pretrained checkpoints¹⁰ of Inception Resnet V2 (Szegedy et al., 2016) on ImageNet (Deng et al., 2009). The checkpoint obtains 80.4% classification accuracy on the ImageNet validation set. Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer. This new image representation has 1536 dimensions. The higher layers of the network continue to classify this representation with 80.4% accuracy; conditioned on this extraction, the classification model is simply logistic regression.

⁹ We chose this pair of labels since intuitively zeros and ones are the digits that are least similar in terms of human perception, so if the adversary can change a zero into a one without much human-noticeable perturbation, it is unlikely that the model has learned a representation similar to what humans learn.

¹⁰ Available at the Tensorflow Models repository in the Slim directory: https://github.com/tensorflow/models/tree/master/slim

[Figure 3 columns: Orig; Det.; Dropout; β = 0, 10⁻¹⁰, 10⁻⁸, 10⁻⁶, 10⁻⁴, 10⁻³, 10⁻².]

Figure 3: The adversary is trying to force each 0 to be classified as a 1. Successful attacks have a red background. Unsuccessful attacks have a green background. In the case that the label is changed to an incorrect label different from the target label (i.e., the classifier outputs something other than 0 or 1), the background is purple. The first column is the original image. The second column is adversarial examples targeting our deterministic baseline model. The third column is adversarial examples targeting our dropout model. The remaining columns are adversarial examples targeting our VIB models for different β.

Figure 4: (a) Relative magnitude of the adversarial perturbation, measured using L0, L2, and L∞ norms, for the images in Figure 3 as a function of β. (We normalize all values by the corresponding norm of the perturbation against the base model.) As β increases, L0 decreases, but both L2 and L∞ increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation. (b) Same as (a), but with the dropout model as the baseline. Dropout is more robust to the adversarial perturbations than the base deterministic model, but still performs much worse than the VIB model as β increases.
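The three perturbation measures reported in Figure 4 are simple to state exactly; a small NumPy sketch (ours, with illustrative names):

# The perturbation sizes used above: L0 counts changed pixels, L2 is
# the Euclidean distance, and L-infinity is the largest single change.
import numpy as np

def perturbation_norms(x, x_adv):
    delta = (x_adv - x).ravel()
    return {
        "L0":   int(np.count_nonzero(delta)),
        "L2":   float(np.linalg.norm(delta)),
        "Linf": float(np.max(np.abs(delta))),
    }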
Figure 5: Classification accuracy of VIB classifiers, divided by accuracy of baseline classifiers, on FGS-generated adversarial examples as a function of β. Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when β = 0 (not shown), the VIB model's performance is almost identical to when β = 10⁻⁸. (a) FGS accuracy normalized by the base deterministic model performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when ε = 0.5 to about 5% when ε = 0.35. (b) Same as (a), but with the dropout model as the baseline. The dropout model is more robust than the base model, but less robust than VIB, particularly for stronger adversaries (i.e., larger values of ε). The dropout model's accuracy on the adversarial examples ranges from about 5% when ε = 0.5 to about 16% when ε = 0.35. As in the other results, relative performance is more dramatic as β increases, which seems to indicate that the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.

Figure 6: Classification accuracy (from 0 to 1) on L2 adversarial examples (of all classes) as a function of β. The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, β = 10⁻¹¹ has performance indistinguishable from β = 0. The deterministic model and dropout model both have a classification accuracy of 0% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary reported in Carlini & Wagner (2016) on a convolutional network trained on MNIST.

Figure 7: The results of our ImageNet targeted L2 optimization attack. In all cases we target a new label of 222 ("soccer ball"). Figure (a) shows the 30 images from the first 40 images in the ImageNet validation set that the VIB network classifies correctly. The class label is shown in green on each image. The predicted label and targeted label are shown in red. Figure (b) shows adversarial examples of the same images generated by attacking our VIB network with β = 0.01. While all of the attacks change the classification of the image, in 13 out of 30 examples the attack fails to hit the intended target class ("soccer ball"). Pink crosses denote cases where the attack failed to force the model to misclassify the image as a soccer ball. Figure (c) shows the same result but for our deterministic baseline operating on the whitened precomputed features. The attack always succeeds. Figure (d) is the same but for the original full Inception ResNet V2 network without modification. The attack always succeeds.
There are slight variations in the set of adversarial examples shown for each network because we limited the adversarial search to correctly classified images. In the case of the deterministic baseline and original Inception ResNet V2 network, the perturbations are hardly noticeable in the perturbed images, but in many instances, the perturbations for the VIB network can be perceived.

Figure 8: Shown are the absolute differences between the original and final perturbed images for all three networks. The left block shows the perturbations created while targeting the VIB network. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to emphasize the difference in the magnitude of the perturbations. The VIB network required much larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13 of those cases.

To further speed training, we whitened the 1536 dimensional representation.

Under this transformation, the experiment regime is identical to the permutation invariant MNIST task. We therefore used a similar model architecture. Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is characterized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier, itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used in MNIST; more details are in Appendix A.

Classification

We see the same favorable VIB classification performance in ImageNet as in MNIST. By varying β, the estimated mutual information between encoding and image (I(Z;X)) varies as well. At large values of β accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a β = 0 regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal hyperparameters.

Overall the best accuracy we achieved was using β = 0.01. Under this setting we saw an accuracy of 80.12%, nearly the same as the state-of-the-art unmodified network, but with a substantially smaller information footprint, only I(X;Z) ≈ 45 bits. This is a surprisingly small amount of information; β = 0 implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network, but without the VIB loss and with a 1024-unit fully connected linear layer instead of the stochastic embedding, similarly only achieved 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameter settings or inadequate training.

Considering a continuum of β and a deterministic baseline, the best classification accuracy was achieved with β = 0.01 ∈ (0, 1). In other words, VIB offered an accuracy benefit while using a mere 45 bits of information from each image.
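The head just described is small enough to sketch directly. The following PyTorch module is our own reconstruction from the stated layer sizes; the ReLU activations and the exponential parameterization of the standard deviations are assumptions, not details given in the text.

# A sketch (our reconstruction, not released code) of the VIB head on
# top of the 1536-d whitened Inception features: two 1024-unit FC
# layers, a 1024-d spherical Gaussian encoding, and a logistic
# regression classifier.
import torch
import torch.nn as nn

class VIBHead(nn.Module):
    def __init__(self, in_dim=1536, k=1024, n_classes=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),   # activation is assumed
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.mu = nn.Linear(1024, k)
        self.log_sigma = nn.Linear(1024, k)        # per-dimension std devs
        self.classifier = nn.Linear(k, n_classes)  # logistic regression

    def forward(self, x):
        h = self.backbone(x)
        mu, sigma = self.mu(h), self.log_sigma(h).exp()
        z = mu + sigma * torch.randn_like(sigma)   # reparameterization
        return self.classifier(z), mu, sigma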
Adversarial Robustness

We next show that the VIB-trained network improves resistance to adversarial attack. We focus on the Carlini targeted L2 attack (see Section 4.2.1). We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the original pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted L2 optimization attack in both magnitude of perturbation and frequency of successful attack.

Figure 7 shows some example images which were all misclassified as "soccer balls" by the deterministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in being mislabeled as the target label.¹¹ We find that the VIB model can resist about 43.3% of the attacks, but the deterministic models always fail (i.e., always misclassify into the targeted label). Figure 8 shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure 7. We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table 2.

Metric              Determ   IRv2    VIB(0.01)
Successful target   1.0      1.0     0.567
L2                  6.45     14.43   43.27
L∞                  0.18     0.44    0.92

Table 2: Quantitative results showing how the different Inception Resnet V2-based architectures (described in Section 4.2.5) respond to targeted L2 adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception Resnet V2 architecture, and VIB(0.01) is the VIB architecture with β = 0.01. Successful target is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball). Lower is better. L2 and L∞ are the average L distances between the original images and the adversarial examples. Larger values mean the adversary had to make a larger perturbation to change the class.

5 FUTURE DIRECTIONS

There are many possible directions for future work, including: putting the VIB objective at multiple or every layer of a network; testing on real images; using richer parametric marginal approximations, rather than assuming r(z) = N(0, I); exploring the connections to differential privacy (see e.g., Wang et al. (2016a); Cuff & Yu (2016)); and investigating open universe classification problems (see e.g., Bendale & Boult (2015)). In addition, we would like to explore applications to sequence prediction, where X denotes the past of the sequence and Y the future, while Z is the current representation of the network. This form of the information bottleneck is known as predictive information (Bialek et al., 2001; Palmer et al., 2015).
By6srMMVx
Review
7: Good paper, accept
Update: raised the score, because I think the arguments about adversarial examples are compelling. I think that the paper convincingly proves that this method acts as a decent regularizer, but I'm not convinced that it's a competitive regularizer. For example, I don't believe that there is sufficient evidence that it gives a better regularizer than dropout/normalization/etc. I also think that it will be much harder to tune than these other methods (discussed in my rebuttal reply). ---- Summary: If I understand correctly, this paper proposes to take the "bottleneck" term from variational autoencoders which pulls the latent variable towards a noise prior (like N(0,1)) and apply it in a supervised learning context where the reconstruction term log(p(x|z)) is replaced with the usual supervised cross-entropy objective. The argument is that this is an effective regularizer and increases robustness to adversarial attacks. Pros: -The presentation is quite good and the paper is easy to follow. -The idea is reasonable and the relationship to previous work is well described. -The robustness to adversarial examples experiment seems convincing, though I'm not an expert in this area. Is there any way to compare to an external quantitative baseline on robustness to adversarial examples? This would help a lot, since I'm not sure how the method here compares with other regularizers in terms of combatting adversarial examples. For example, if one uses a very high dropout rate, does this confer a comparable robustness to adversarial examples (perhaps at the expense of accuracy)? Cons: -MNIST accuracy results don't seem very strong, unless I'm missing something. The Maxout paper from ICML 2013 listed many permutation invariant MNIST results with error rates below 1%. So the 1.13% error rate listed here doesn't necessarily prove that the method is a competitive regularizer. I also suspect that tuning this method to make it work well is harder than other regularizers like dropout. -There are many distinct architectural choices with this method, particularly in how many hidden layers come before and after z. For example, the output could directly follow z, or there could be several layers between z and the output. As far as I can tell the paper says that p(y | z) is a simple logistic regression (i.e. one weight matrix followed by softmax), but it's not obvious why this choice was made. Did it work best empirically? Other: -I wonder what would happen if you "trained against" the discovered adversarial examples while also using the method from this paper. Would it learn to have a higher variance p(z | x) when presented with an adversarial example?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SyVVJ85lg
ICLR.cc/2017/conference
2017
Paleo: A Performance Model for Deep Neural Networks
["Hang Qi", "Evan R. Sparks", "Ameet Talwalkar"]
Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called Paleo. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, Paleo can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that Paleo is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.
["Deep learning"]
ABSTRACT

Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.

1 INTRODUCTION

Deep learning has been successfully applied in many areas including natural language processing and computer vision. The scale of modern datasets and the millions to billions of parameters in these deep networks pose new challenges when designing computational systems that leverage parallel and distributed computing. Indeed, several important open questions remain:

How fast can we train or evaluate a model on a user's given hardware?
For a given architecture, how can a user best leverage parallel and distributed computation?
How can we design a new neural network architecture that can be trained and evaluated efficiently under common hardware setups?

In response to these fundamental questions, various software packages and systems have been painstakingly developed, e.g. DistBelief (Dean et al., 2012), TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), SparkNet (Moritz et al., 2015), FireCaffe (Iandola et al., 2016). Moreover, expensive benchmarking efforts, e.g., Chintala et al. (2016), have performed brute-force profiling of some of these deep learning systems on a handful of network architectures.

In this work we aim to tackle these questions by taking an analytical approach to modeling the performance of arbitrary learning systems. Our work hinges on the observation that a neural network architecture is a declarative specification of the forward and backward propagation steps required for training and deploying the network. However, given this specification, there is a rich design space of algorithms, hardware choices, and communications strategies to most efficiently execute these specifications.
We build a novel performance model called PALEO¹ that maps this declarative specification to arbitrary points in this design space to estimate the execution time of training and deploying deep neural networks.² PALEO applies broadly to a wide variety of neural network architectures and to arbitrary learning systems within this design space, and thus can serve as a valuable tool for practitioners and developers to answer the questions mentioned above.

¹ Open-sourced at https://github.com/TalwalkarLab/paleo .

2 BACKGROUND AND RELATED WORK

Training deep neural networks can be very time and resource consuming, and it is not uncommon for the training of a model to take days across tens or hundreds of machines. Several high-level strategies have been proposed to accelerate this process, and these strategies collectively define the design space considered by PALEO.

Hardware acceleration approaches are designed to accelerate the computation of the forward and backward passes and often make use of specialized hardware, such as GPUs (Coates et al., 2013), or more recently custom hardware designed specifically for deep learning (Jouppi, 2016). PALEO accepts constants associated with hardware as input (e.g., peak FLOPS, network bandwidth) and automatically adapts to changes in this input.

Software acceleration via specialized libraries, e.g., cuda-convnet (Krizhevsky, 2014a) and cuDNN (Chetlur et al., 2014), and highly-optimized algorithms for commonly used primitives, e.g., Chetlur et al. (2014) and Lavin (2016), can also be used to accelerate deep model training. PALEO dynamically picks among the best available implementations for each layer at execution time.

Parallelization is a natural approach to consider, and can involve training a neural network with many computational devices (e.g. CPUs, GPUs) on a single machine, or across a network. There are two major parallelization strategies when it comes to training deep neural network models at scale: data parallelism and model parallelism. In classical data parallel systems, each worker stores an identical copy of the model and computes gradients only on a shard of the training examples, and these gradients are aggregated to update the model. In contrast, model parallel systems shard the model itself across the workers, while the training data may be stored on each worker or sharded across the workers. PALEO models both data and model parallel settings.

Communication schemes have also been explored to accelerate incremental model updates across distributed workers. Three of the most common schemes are (Iandola et al., 2016; Zhao & Canny, 2013): (i) the OneToAll scheme, which has a 2KT communication time, as a master node must communicate with all K workers individually, where T is the time for communicating the data through one link in the network; (ii) the Tree AllReduce scheme, which takes 2 log2(K) T for weights to be aggregated and broadcast to all workers following a tree topology; and (iii) the Butterfly AllReduce scheme, in which all workers receive the aggregated weights in log2(K) T using a butterfly network.
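These three per-update cost models reduce to a few lines. The sketch below is our own illustration (not part of the PALEO code), with t the time to push the model across one link and k the number of workers:

# Our own sketch of the three communication-time models quoted above.
import math

def comm_time(scheme, k, t):
    if scheme == "OneToAll":
        return 2 * k * t             # master exchanges with each worker
    if scheme == "TreeAllReduce":
        return 2 * math.log2(k) * t  # aggregate up, broadcast down a tree
    if scheme == "ButterflyAllReduce":
        return math.log2(k) * t      # all workers aggregate in one pass
    raise ValueError(f"unknown scheme: {scheme}")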
We restrict the focus of PALEO to distributed communication schemes that return equivalent results to serial executions, and we thus do not consider the recently introduced butterfly mixing scheme of Zhao & Canny (2013), or non-deterministic asynchronous parameter servers.

3 PALEO

We now present PALEO, a model for the lean consumption of resources during the training of DNNs. PALEO decomposes the total execution time into computation time and communication time; both are estimated for each pass of a neural network's evaluation given user-specified choices within the design space of algorithms, hardware, and communications strategies. Figure 1 illustrates the overall idea. The computation time is calculated from factors including the size of the computation inputs imposed by the network architecture, the complexity of the algorithms and operations involved in the network layers, and the performance of the hardware to be used. The communication time is estimated based on the computational dependencies imposed by the network, the communication bandwidth of the hardware, and the assumed parallelization schemes. Once the network architecture and design space choices are fixed, all of the key factors in PALEO can be derived, and we can estimate execution time without actually implementing the entire network and/or an underlying software package.

² Training a neural network involves both forward and backward propagation, whereas deploying a trained network on a new data point involves only forward propagation. Thus, estimating the execution time of model training encompasses both model training and deployment, and is the focus of this work.

Figure 1: Overview of the PALEO modeling approach. PALEO decomposes execution time into computation time and communication time, which can be derived from various factors implicitly specified by network architectures and hardware configurations.

3.1 COMPUTATION MODELING

We first describe the computation model on a single machine. The computation in a neural network can be expressed as a directed graph N = ⟨{u^(i)}_{i=1..n}, {(u^(i), u^(j))}⟩, where each node u^(i) is associated with an operation f^(i) on a device d^(i); each directed edge (u^(i), u^(j)) represents the dependency that operation f^(j) cannot be executed until f^(i) is finished. We use Pa(u^(j)) to represent the set of immediate parent nodes of u^(j). We model each layer in the neural network as a node, and the connections between layers as edges. In the following text, we omit the superscript index when there is no ambiguity.

3.1.1 COMPUTATION TIME FOR A SINGLE LAYER

To model the runtime of a layer u, we consider the operation f and decompose the execution time of this operation into three terms (as shown in Figure 2a): the time to fetch the input produced by its parent layers, R(Pa(u)); the time to perform the computation of f on the designated device d, i.e., C(f, d); and the time to write the outputs to the local memory, W(f, d).
Assuming a sequential execution, the runtime for a node u can be written as a simple summation:

T(u) = R(Pa(u)) + C(f, d) + W(f, d).    (1)

Among the three terms, the computation time C(f, d) is calculated as the FLOP (floating-point operation) count of the operation divided by the computation speed (FLOPS; floating-point operations per second) of the device: C(f, d) = FLOPs(f) / speed(d). The IO times R(Pa(u)) and W(u) are calculated as the size of the memory footprints involved in the computation divided by the IO bandwidth of the device. When inputs must be fetched from other devices, e.g. in the case of model parallelism, this IO bandwidth refers to the communication bandwidth between two devices. PALEO treats the speed and bandwidth of devices as parameters given to the model so that users can configure them to reflect user-specific configurations.

Using this per-layer model, we will next describe how to model the computation time of an entire network. We will subsequently present FLOP counts for layer operations commonly used in modern DNNs in Section 4.

3.1.2 COMPUTATION TIME FOR NETWORKS

We first consider simple sequential structures where layers are constructed one after another, as in Figure 2b. The total execution time can be calculated as the sum of the execution times of all layers, T(N) = Σ_{i=1..n} T(u^(i)). While this calculation may seem trivial at first glance, it forms the foundation for modeling execution time for more complex architectures.

Figure 2: (a) The execution time of a node in the computation graph consists of the time for fetching input, computing results, and writing results to memory. (b) An example of a sequential computation graph segment. (c) An example of a parallel computation graph segment.

Parallel structures are not uncommon in DNNs; for example, the Inception model (Szegedy et al., 2015a) contains layers that can be evaluated simultaneously, and layers on different workers can run in parallel in model parallel setups (Dean et al., 2012). Figure 2c illustrates a parallel structure, where two convolutional layers (each followed by a pooling layer) are scheduled to be executed on two devices.

To model the computation time of parallel structures, we identify synchronization barriers before and after every parallel structure and introduce the notion of a supernode U = {G^(i)}_{i=1..k} as a set of disjoint subgraphs sandwiched by the synchronization barriers in the computation graph. When substituting the subgraphs with the supernode, the network is reduced to the sequential structure described above. For the supernode, the execution time T(U) is within the range [max_i T(G^(i)), Σ_i T(G^(i))], where the lower bound corresponds to perfect parallelization and the upper bound corresponds to sequential execution. Note that the execution time of a subgraph T(G^(i)) can be calculated recursively.
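These composition rules are compact enough to sketch directly. The following is our own illustration of Equation 1 together with the sequential sum and the supernode bounds; the names and the device dictionary are invented for the example:

# Our own sketch of PALEO's time composition rules; not the actual API.
def layer_time(flops, in_bytes, out_bytes, device):
    read = in_bytes / device["io_bandwidth"]      # R(Pa(u))
    compute = flops / device["flops_per_sec"]     # C(f, d)
    write = out_bytes / device["io_bandwidth"]    # W(f, d)
    return read + compute + write                 # T(u), Eq. (1)

def sequential_time(layer_times):
    return sum(layer_times)                       # T(N) = sum_i T(u_i)

def supernode_time(subgraph_times, perfect_parallel=True):
    # T(U) lies in [max_i T(G_i), sum_i T(G_i)]; the lower bound is
    # perfect parallelization, the upper bound sequential execution.
    return max(subgraph_times) if perfect_parallel else sum(subgraph_times)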
3.1.3 COMPUTATION MODELING FOR LAYER OPERATIONS

In modern DNNs, the convolutional layer is one of the most commonly used and computationally intensive types of layer. For this reason, there have been many heavily optimized implementations (Chetlur et al., 2014; Vasilache et al., 2015; Lavin, 2016). Deriving plausible FLOP counts for other types of layers is a straightforward process, and in this section, we consider two leading implementations for convolutional operations: matrix multiplication and Fast Fourier Transform.

Following the notation used by Chetlur et al. (2014), a 2D convolutional layer during forward propagation³ takes an input feature map D of shape N×C×H×W (a batch of N input feature maps with shape H×W and C channels) and a set of convolutional filters F of shape K×C×R×S (K filters with shape R×S and C channels). It produces N×K feature maps, each of shape P×Q, which can be calculated from the shapes of the inputs and filters together with additional striding and padding parameters. The FLOP count for the convolution operation can be expressed as 2KCRSNPQ. A commonly used implementation is to reduce convolution operations to matrix multiplications, which can be efficiently computed with well-optimized SGEMM routines on various platforms. Although these FLOP counts ignore auxiliary operations (e.g. indexing arithmetic in efficient implementations), they nonetheless provide a good estimate of FLOP counts for matrix multiplication implementations.

Another implementation is based on the Fast Fourier Transform (Vasilache et al., 2015): both input feature maps and filters are transformed into the frequency domain, then element-wise multiplications are performed, followed by an inverse Fourier transform. This implementation introduces computation and memory overhead in the discrete Fourier transforms, but reduces the computation complexity to O(NCKHW + (NC + CK + NK) HW log(HW)). Convolutional layers with large filters or a large problem size can benefit from FFT implementations. When counting FLOPs, it is not possible to get exact counts without knowing the underlying implementation details. In PALEO, we adopt the commonly used FFT complexity 5n log2(n) as the FLOP count for complex-valued transformations of size n (Cooley & Tukey, 1965). To account for the IO overhead caused by auxiliary memory, PALEO estimates the memory size required for complex-valued matrices in the frequency domain and incorporates it into the data reading and writing terms. For FFT-based implementations with tilings, PALEO estimates the number of tiles from the convolution specifications.

³ Our arguments generalize to N-dimensional settings, and similar arguments apply for the backward pass.

The choice of algorithm, matrix multiplication or FFT, is problem specific, as it depends on the filter size, strides, input size of the convolutional layers, and memory workspace. In order to derive reasonable estimates for user-specific DNNs comparable to real executions, it is important for PALEO to make decisions comparable to real-world systems. Two common approaches are employed in existing DNN software frameworks and libraries to choose between these algorithms: (i) using predefined heuristics based on offline benchmarks; (ii) autotuning to empirically evaluate the available algorithms on the given specification. Since autotuning is tied to platform and software implementations, for maximum generality PALEO by default takes the first approach. In particular, PALEO uses heuristics from cuDNN to make algorithm choices while also accounting for user preferences.
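As an illustration of the two FLOP estimates above, here is a sketch in the paper's notation. This is our own reconstruction; in particular, the element-wise frequency-domain term in the FFT variant is our reading of the stated complexity, not PALEO's implementation:

# Our own sketch of the convolution FLOP estimates discussed above.
import math

def conv_flops_gemm(n, c, k, r, s, p, q):
    # 2*K*C*R*S*N*P*Q ops: multiplies and adds counted separately.
    return 2 * k * c * r * s * n * p * q

def conv_flops_fft(n, c, h, w, k):
    m = h * w
    # 5*m*log2(m) per complex FFT of size m (Cooley & Tukey), applied
    # to the N*C inputs, C*K filters, and N*K outputs, plus the
    # element-wise products in the frequency domain (the NCKHW term).
    transforms = (n * c + c * k + n * k) * 5 * m * math.log2(m)
    pointwise = n * c * k * m
    return transforms + pointwise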
3.2 COMMUNICATION MODELING

We now describe our modeling of communication among multiple workers. Let |D| be the size of the data to be communicated between two workers, and define B as the bandwidth of the communication channel. Then the communication time can simply be written as T_comm = |D| / B. By using different bandwidth configurations, PALEO works for both scale-up setups (multiple GPUs on one machine) and scale-out setups (multiple machines in a cluster). Moreover, in data-parallel settings, an AllReduce operation is performed to synchronize model parameters across all workers after every backward pass. PALEO considers three communication schemes: OneToAll, Tree AllReduce, and Butterfly AllReduce. The communication time under these three schemes is described in Section 2.

3.3 PLATFORM PERCENT OF PEAK

Thus far, we have assumed that deep learning software platforms make perfect use of their underlying hardware. That is, that the CPUs and GPUs are operating at "peak FLOPS", and that network and IO links are fully saturated. This has allowed our model to be platform independent.

However, this assumption is unreasonable in practice. For instance, achieving peak FLOPS is a difficult proposition, usually requiring customized libraries developed by organizations with intimate knowledge of the underlying hardware, e.g., Intel's MKL (int, 2009), ATLAS (Whaley & Petitet, 2005), and cuDNN. Even these specially tuned libraries may fall short of peak execution by as much as 40% (atl). Further, any computation done outside the scope of PALEO (e.g. job scheduling, data copying) will exacerbate the observed inefficiency in practice. Sometimes such inefficiencies are warranted from the perspective of ease of programmability or maintenance of the learning platforms.

Rather than trying to measure and capture every source of inefficiency in every learning framework, we take a small number of representative deep learning workloads which contain convolutions, pooling, dropout, and fully connected layers and run them for a short time on a single GPU. Given the observed total throughput and the estimated total throughput on this benchmark, we fit a scaling constant to estimate a platform percent of peak (PPP) parameter which captures the average relative inefficiency of the platform compared to peak FLOPS. Highly specialized frameworks (e.g. cuDNN) will in general have a computational PPP that is close to 100%, while frameworks with higher overheads may have PPP constants closer to 50% or less.

We follow a similar benchmarking procedure to estimate PPP for the communication link for TensorFlow. For the FireCaffe experiments, we estimate the communication PPP based on the empirical results for communication reported in Table 4 of the paper.
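The PPP fit itself is just one ratio; a minimal sketch (ours, with illustrative names) of how it would be estimated and then applied to the idealized estimates:

# A minimal sketch (ours) of the platform-percent-of-peak correction.
def fit_ppp(observed_throughput, estimated_peak_throughput):
    # e.g. images/sec measured on a short benchmark vs. the model's
    # peak-FLOPS estimate; close to 1.0 for highly tuned frameworks.
    return observed_throughput / estimated_peak_throughput

def adjusted_compute_time(ideal_time, ppp):
    return ideal_time / ppp  # lower PPP means a proportionally slower platform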
4 EXPERIMENTS

We now present empirical results which illustrate that PALEO is robust to the choice of network architecture, hardware, communication schemes, and parallelization strategies.

4.1 LAYER-WISE EVALUATION

We first compare PALEO-estimated runtimes with actual runtimes measured from TensorFlow⁴ (Abadi et al., 2015) execution on two popular CNN architectures: the one-tower variant of AlexNet (Krizhevsky, 2014b) and the 16-layer VGG network (Simonyan & Zisserman, 2014). PALEO uses cuDNN heuristics to choose algorithms, and the auto-tuning mechanism in TensorFlow is disabled. Experiments are run on a NVIDIA TITAN X GPU with a 4 GB workspace limit.

For convolutional and fully connected layers, we evaluate forward computation, backward computation with respect to layer inputs, and backward computation with respect to filters separately (see Figure 4 in the appendix for the layer-by-layer comparison plots). Table 1 shows a comparison of the full forward and backward passes with all layers included. PALEO's per-layer estimates are quite close to the actual TensorFlow execution, with only one layer, 'fc6', consistently being underestimated by PALEO.⁵ In spite of this issue with 'fc6', our full pass estimates are remarkably accurate.

Table 1: Full pass time of TensorFlow and PALEO estimation on AlexNet and VGG-16.

                             Forward pass (ms)   Backward pass (ms)
AlexNet  TensorFlow          44.00               155.10
         PALEO Estimation    45.96               118.44
VGG-16   TensorFlow          400.46              1117.48
         PALEO Estimation    435.46              1077.27

4.2 CASE STUDIES

We now revisit the questions posed at the beginning of the paper and demonstrate how PALEO can help in answering them. In this subsection we present three case studies. We extract experiment setups, including network architectures, hardware specifications, communication schemes, and parallelization strategies, from selected publications focusing on the scalability of CNNs. We then plug those configurations into PALEO and compare the simulated scalability results with the results reported in the original publications. Table 2 summarizes the configurations of PALEO in these experiments.

Table 2: PALEO configurations used in the case studies.

                    Case 1            Case 2            Case 3
Net                 NiN               Inception v3      AlexNet
Device              NVIDIA K20X       NVIDIA K20        NVIDIA K20
Workers             Up to 128         Up to 100         Up to 8
Bandwidth           70 Gbps           10 Gbps           6 GB/s
Communication       Tree AllReduce    Parameter Server  Various
Parallelization     Data Parallelism  Data Parallelism  Hybrid
Platform            Caffe             TensorFlow        cuda-convnet2
One Step Time⁶
  PALEO Estimation  1918 ms           4269 ms           402 ms
  Reported Time⁷    2275 ms           --                418 ms

⁴ TensorFlow 0.9 with cuDNN 4 backend.
⁵ Examining the TensorFlow execution with the NVIDIA profiler revealed that TensorFlow spent two-thirds of its reported 'fc6' time transforming the data layout between NHWC and NCHW when calling the underlying cuBLAS primitives.
⁶ Total time of the forward pass, backward pass, and parameter update for one mini-batch on one worker.
⁷ Reported times for Cases 1 and 3 are derived approximately from information in the publications. For Case 2 no run time information is provided.

4.2.1 CASE 1: NIN WITH FIRECAFFE

FireCaffe (Iandola et al., 2016) adopts the Tree AllReduce communication scheme when training a NiN model (Lin et al., 2013) in data parallel settings with up to 128 servers on the Titan supercomputer. They report a 38× speedup for NiN with batch size 1024 relative to single-GPU performance. Table 3 shows the results from PALEO compared with the results reported by FireCaffe.

Table 3: Comparison between PALEO estimation and FireCaffe for training NiN.

                        FireCaffe              PALEO Estimation
Workers   Batch size    Train Time   Speedup   Train Time   Speedup
1         256           5.8 days     1×        4.9 days     1×
32        256           11 hours     13×       7.6 hours    15.5×
32        1024          6 hours      23×       4.6 hours    25.3×
128       1024          3.6 hours    39×       2.3 hours    51.6×

4.2.2 CASE 2: INCEPTION WITH TENSORFLOW

Murray et al. (2016) reported results for synchronously training the Inception model (Szegedy et al., 2015b) with TensorFlow, achieving a 56× speedup with 100 workers. They apply a weak scaling strategy with batch size 256 to keep GPUs saturated. Although Murray et al. (2016) leveraged a distributed parameter server rather than one of the three communication schemes considered in PALEO, the communication cost of Butterfly AllReduce can be viewed as a lower bound (Zhao & Canny, 2013). To account for the fact that they train with worker nodes each of which has 8 GPUs, we assume a linear speedup for GPUs on the same host.
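Before turning to the comparison in Figure 3a, note that the projections in these case studies all follow from one small calculation; the sketch below is our own illustration of a data-parallel speedup estimate, reusing the comm_time helper sketched in Section 2 (everything else is invented for the example):

# Our own sketch of a data-parallel speedup estimate in the spirit of
# these case studies; comm_time is the scheme-cost helper from above.
def estimated_speedup(single_step, k, weight_bytes, bandwidth,
                      scheme="TreeAllReduce", weak_scaling=True):
    comm = comm_time(scheme, k, weight_bytes / bandwidth)
    if weak_scaling:
        # Per-worker batch size is fixed, so K workers do K times the
        # work per (slightly longer) step.
        return k * single_step / (single_step + comm)
    # Strong scaling: a fixed global batch is split K ways.
    return single_step / (single_step / k + comm)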
Figure 3a shows a comparison between the reported speedups and the PALEO-estimated speedups. For absolute runtime, in one of the experiments, their model completes 20 epochs of training after 100 hours when using 8 Tesla K40s and a batch size of 256. PALEO projects a 111-hour runtime under the same setting.

4.2.3 CASE 3: ALEXNET WITH HYBRID PARALLELISM

Krizhevsky (2014b) describes a hybrid model and data parallelism approach for training AlexNet using up to 8 GPUs with a weak scaling strategy. In his setup, each of the two CPUs connects to 4 GPUs, and the communication bandwidth is penalized by 50% across the two groups, as mentioned in the paper. Table 4 shows the comparison between PALEO's projection and the original result, which are quite similar. Moreover, whereas Krizhevsky (2014b) does not quantify the speedup of hybrid parallelism relative to strict data parallelism, PALEO simulates training the entire network with only data parallelism (see the last two columns of Table 4) in order to estimate this speedup.

Table 4: Comparison between PALEO estimation and Krizhevsky (2014b) for training AlexNet.

          One Weird Trick          PALEO Estimation
          Hybrid parallelism       Hybrid parallelism       Data parallelism
Workers   Train Time (h)  Speedup  Train Time (h)  Speedup  Train Time (h)  Speedup
1         98.95           1×       96.31           1×       96.31           1×
2         50.24           1.95×    49.57           1.94×    55.90           1.72×
4         26.20           3.74×    25.42           3.79×    32.82           3.03×
8         16.68           6.25×    14.37           6.70×    23.65           5.40×

4.3 HYPOTHETICAL SETUPS

In this subsection, we use PALEO in two hypothetical setups to analyze the scalability of AlexNet and a GAN model under different communication schemes.

4.3.1 ALEXNET IN A CLOUD-BASED SETUP

In this study, we present an analysis of data parallel training of AlexNet. We assume a modern cloud setup with a cluster of servers, each equipped with a NVIDIA K80 GPU connected to a 20 Gbps network. In contrast to the Inception model with 23 million parameters, the one-tower variant of AlexNet has 50 million parameters and therefore doubles the communication workload when training with data parallelism.

We show strong scaling for all three communication schemes in Figure 3c. Even when assuming a fairly large batch size of 2048, which is beneficial in distributed settings, we see very modest speedups. The OneToAll scheme achieves a max speedup of less than 2× using 4 workers, while the communication-efficient Butterfly AllReduce scheme achieves a max speedup of roughly 5× when using 32 workers. The weak scaling results, shown in Figure 3b, show drastically improved scaling, as we observe nearly linear speedups as we increase the number of workers. However, it is important to note that we are increasing the effective batch size as we increase the number of workers, and it is well-known that training with large effective batch sizes can yield models with substandard accuracy (Breuel, 2015).

[Figure 3 panels: (a) Inception / weak; (b) AlexNet / weak; (c) AlexNet / strong; (d) GAN / strong.]

Figure 3: Comparison of PALEO projected speedups for various networks under different scaling strategies and communication schemes. (a-b) weak scaling. (c-d) strong scaling.
4.3.2 GAN ARCHITECTURE

PALEO can be applied to architectures other than CNNs. We profile a generative adversarial network (GAN) inspired by Radford et al. (2015) for the LSUN dataset with the same hardware assumptions as the previous case study. Table 5 shows that the PALEO estimations are close to the empirical TensorFlow run times for both the discriminator and generator networks. Figure 3d plots the estimated speedups for training the model with a batch size of 256 on up to 128 workers under strong scaling. Without communication-intensive fully-connected layers, training this GAN architecture is more scalable than AlexNet, though PALEO still only predicts an 8× sub-linear speedup with 64 workers.

Table 5: Full pass time of the discriminator and generator in a GAN architecture.

                                 Forward pass (ms)   Backward pass (ms)
Discriminator  TensorFlow        30.19               77.39
               PALEO Estimation  27.55               79.25
Generator      TensorFlow        110.11              374.18
               PALEO Estimation  117.02              324.49

5 CONCLUSION

We introduced PALEO, an analytical performance model for exploring the space of scalable deep learning systems. By extracting the computational requirements carried by neural network architectures and mapping them to the design space of software, hardware, and communication strategies, PALEO can effectively and accurately model the expected scalability and performance of a putative deep learning system.
S1Y403RQe
Final review: Sound paper but a very simple model, few experiments at start but more added.
6: Marginally above acceptance threshold
In PALEO the authors propose a simple model of execution of deep neural networks. It turns out that even this simple model allows to quite accurately predict the computation time for image recognition networks both in single-machine and distributed settings. The ability to predict network running time is very useful, and the paper shows that even a simple model does it reasonably, which is a strength. But the tests are only performed on a few networks of very similar type (AlexNet, Inception, NiN) and only in a few settings. Much broader experiments, including a variety of models (RNNs, fully connected, adversarial, etc.) in a variety of settings (different batch sizes, layer sizes, node placement on devices, etc.) would probably reveal weaknesses of the proposed very simplified model. This is why this reviewer considers this paper borderline -- it's a first step, but a very basic one and without sufficiently large experimental underpinning. More experiments were added, so I'm updating my score.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SyVVJ85lg
ICLR.cc/2017/conference
2017
Paleo: A Performance Model for Deep Neural Networks
["Hang Qi", "Evan R. Sparks", "Ameet Talwalkar"]
Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called Paleo. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, Paleo can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that Paleo is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.
["Deep learning"]
ABSTRACT

Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.

1 INTRODUCTION

Deep learning has been successfully applied in many areas including natural language processing and computer vision. The scale of modern datasets and the millions to billions of parameters in these deep networks pose new challenges when designing computational systems that leverage parallel and distributed computing. Indeed, several important open questions remain:

How fast can we train or evaluate a model on a user's given hardware?
For a given architecture, how can a user best leverage parallel and distributed computation?
How can we design a new neural network architecture that can be trained and evaluated efficiently under common hardware setups?

In response to these fundamental questions, various software packages and systems have been painstakingly developed, e.g. DistBelief (Dean et al., 2012), TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), SparkNet (Moritz et al., 2015), FireCaffe (Iandola et al., 2016). Moreover, expensive benchmarking efforts, e.g., Chintala et al. (2016), have performed brute-force profiling of some of these deep learning systems on a handful of network architectures.

In this work we aim to tackle these questions by taking an analytical approach to modeling the performance of arbitrary learning systems. Our work hinges on the observation that a neural network architecture is a declarative specification of the forward and backward propagation steps required for training and deploying the network. However, given this specification, there is a rich design space of algorithms, hardware choices, and communications strategies to most efficiently execute these specifications.
We build a novel performance model called Paleo [1] that maps this declarative specification to arbitrary points in this design space to estimate the execution time of training and deploying deep neural networks [2]. Paleo applies broadly to a wide variety of neural network architectures and to arbitrary learning systems within this design space, and thus can serve as a valuable tool for practitioners and developers to answer the questions mentioned above.

[1] Open-sourced at https://github.com/TalwalkarLab/paleo
[2] Training a neural network involves both forward and backward propagation, whereas deploying a trained network on a new data point involves only forward propagation. Thus, estimating the execution time of model training encompasses both model training and deployment, and is the focus of this work.

2 BACKGROUND AND RELATED WORK

Training deep neural networks can be very time and resource consuming, and it is not uncommon for the training of a model to take days across tens or hundreds of machines. Several high-level strategies have been proposed to accelerate this process, and these strategies collectively define the design space considered by Paleo.

Hardware acceleration approaches are designed to accelerate the computation of the forward and backward passes and often make use of specialized hardware, such as GPUs (Coates et al., 2013), or more recently custom hardware designed specifically for deep learning (Jouppi, 2016). Paleo accepts constants associated with hardware as input (e.g., peak FLOPS, network bandwidth) and automatically adapts to changes in this input.

Software acceleration via specialized libraries, e.g., cuda-convnet (Krizhevsky, 2014a) and cuDNN (Chetlur et al., 2014), and highly-optimized algorithms for commonly used primitives, e.g., Chetlur et al. (2014) and Lavin (2016), can also be used to accelerate deep model training. Paleo dynamically picks among the best available implementations for each layer at execution time.

Parallelization is a natural approach to consider, and can involve training a neural network with many computational devices (e.g., CPUs, GPUs) on a single machine, or across a network. There are two major parallelization strategies when it comes to training deep neural network models at scale: data parallelism and model parallelism. In classical data parallel systems, each worker stores an identical copy of the model and computes gradients only on a shard of the training examples, and these gradients are aggregated to update the model. In contrast, model parallel systems shard the model itself across the workers, while the training data may be stored on each worker or sharded across the workers. Paleo models both data and model parallel settings.

Communication schemes have also been explored to accelerate incremental model updates across distributed workers. Three of the most common schemes are (Iandola et al., 2016; Zhao & Canny, 2013): (i) the OneToAll scheme has a $2KT$ communication time, as a master node must communicate with all $K$ workers individually, where $T$ is the time for communicating the data through one link in the network; (ii) the Tree AllReduce scheme takes $2\log_2(K)T$ for weights to be aggregated and broadcast to all workers following a tree topology; and (iii) the Butterfly AllReduce scheme, in which all workers receive the aggregated weights in $\log_2(K)T$ using a butterfly network.
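To make the comparison of these three schemes concrete, here is a minimal sketch of their cost formulas; the function names and the example parameters (a 50M-parameter float32 model synchronized over a 10 Gbps link) are illustrative assumptions on our part, not part of Paleo's released code.

```python
import math

def one_to_all(k, t):
    # The master node communicates with each of the K workers individually: 2*K*T.
    return 2 * k * t

def tree_allreduce(k, t):
    # Aggregate up and broadcast down a tree topology: 2*log2(K)*T.
    return 2 * math.log2(k) * t

def butterfly_allreduce(k, t):
    # All workers receive the aggregated weights via a butterfly network: log2(K)*T.
    return math.log2(k) * t

# T: time to push one model's worth of updates through a single link.
size_bytes = 50e6 * 4          # 50M float32 parameters
link_bw = 10e9 / 8             # 10 Gbps in bytes per second
t = size_bytes / link_bw       # seconds per link

for k in (2, 8, 32, 128):
    print(f"K={k:3d}  OneToAll={one_to_all(k, t):6.2f}s  "
          f"Tree={tree_allreduce(k, t):5.2f}s  Butterfly={butterfly_allreduce(k, t):5.2f}s")
```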
We restrict the focus of Paleo to distributed communication schemes that return equivalent results to serial executions, and we thus do not consider the recently introduced butterfly mixing scheme of Zhao & Canny (2013), or non-deterministic asynchronous parameter servers.

3 PALEO

We now present Paleo, a model for the lean consumption of resources during the training of DNNs. Paleo decomposes the total execution time into computation time and communication time; both are estimated for each pass of a neural network's evaluation given user-specified choices within the design space of algorithms, hardware, and communication strategies. Figure 1 illustrates the overall idea. The computation time is calculated from factors including the size of the computation inputs imposed by the network architecture, the complexity of the algorithms and operations involved in the network layers, and the performance of the hardware to be used. The communication time is estimated based on the computational dependencies imposed by the network, the communication bandwidth of the hardware, and the assumed parallelization schemes. Once the network architecture and design space choices are fixed, all of the key factors in Paleo can be derived, and we can estimate execution time without actually implementing the entire network and/or an underlying software package.

[Figure 1: Overview of the Paleo modeling approach. Paleo decomposes execution time into computation time and communication time, which can be derived from various factors implicitly specified by network architectures and hardware configurations.]

3.1 COMPUTATION MODELING

We first describe the computation model on a single machine. The computation in a neural network can be expressed as a directed graph $N = \langle \{u^{(i)}\}_{i=1}^{n}, \{(u^{(i)}, u^{(j)})\} \rangle$, where each node $u^{(i)}$ is associated with an operation $f^{(i)}$ on a device $d^{(i)}$; each directed edge $(u^{(i)}, u^{(j)})$ represents the dependency that operation $f^{(j)}$ cannot be executed until $f^{(i)}$ is finished. We use $\mathrm{Pa}(u^{(j)})$ to represent the set of immediate parent nodes of $u^{(j)}$. We model each layer in the neural network as a node, and the connections between layers as edges. In the following text, we omit the superscript index when there is no ambiguity.
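As an illustration of this graph formalism, the sketch below shows one possible in-memory representation of such a computation graph, together with a topological ordering that respects the dependency edges. The class and field names are our own assumptions, not Paleo's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str                                            # layer name, e.g. "conv1"
    op: str                                              # operation f, e.g. "conv2d"
    device: str = "gpu:0"                                # device d the op is placed on
    parents: List["Node"] = field(default_factory=list)  # Pa(u): immediate parents

def topological_order(nodes: List[Node]) -> List[Node]:
    """Order nodes so that every node appears after all of its parents."""
    ordered, seen = [], set()
    def visit(u: Node) -> None:
        if u.name in seen:
            return
        for p in u.parents:
            visit(p)
        seen.add(u.name)
        ordered.append(u)
    for u in nodes:
        visit(u)
    return ordered

# A tiny sequential segment: data -> conv1 -> pool1.
data = Node("data", "input")
conv1 = Node("conv1", "conv2d", parents=[data])
pool1 = Node("pool1", "max_pool", parents=[conv1])
print([u.name for u in topological_order([pool1, conv1, data])])  # data, conv1, pool1
```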
3.1.1 COMPUTATION TIME FOR A SINGLE LAYER

To model the runtime of a layer $u$, we consider its operation $f$ and decompose the execution time of this operation into three terms (as shown in Figure 2a): the time to fetch the input produced by its parent layers, $\mathcal{R}(\mathrm{Pa}(u))$; the time to perform the computation of $f$ on the designated device $d$, i.e., $\mathcal{C}(f,d)$; and the time to write the outputs to the local memory, $\mathcal{W}(f,d)$. Assuming a sequential execution, the runtime for a node $u$ can be written as a simple summation:

$$T(u) = \mathcal{R}(\mathrm{Pa}(u)) + \mathcal{C}(f,d) + \mathcal{W}(f,d). \qquad (1)$$

Among the three terms, the computation time $\mathcal{C}(f,d)$ is calculated as the FLOP (floating-point operation) count of the operation divided by the computation speed (FLOPS; floating-point operations per second) of the device: $\mathcal{C}(f,d) = \mathrm{FLOPs}(f) / \mathrm{speed}(d)$. The IO times $\mathcal{R}(\mathrm{Pa}(u))$ and $\mathcal{W}(u)$ are calculated as the size of the memory footprints involved in the computation divided by the IO bandwidth of the device. When inputs must be fetched from other devices, e.g., in the case of model parallelism, this IO bandwidth refers to the communication bandwidth between the two devices. Paleo treats the speed and bandwidth of devices as parameters given to the model so that users can configure them to reflect user-specific configurations.

Using this per-layer model, we next describe how to model the computation time of an entire network. We subsequently present FLOP counts for layer operations commonly used in modern DNNs in Section 4.

3.1.2 COMPUTATION TIME FOR NETWORKS

We first consider simple sequential structures where layers are constructed one after another, as in Figure 2b. The total execution time can be calculated as the sum of the execution times of all layers, $T(N) = \sum_{i=1}^{n} T(u^{(i)})$. While this calculation may seem trivial at first glance, it forms the foundation for modeling execution time for more complex architectures.

[Figure 2: (a) The execution time of a node in the computation graph consists of the time for fetching input, computing results, and writing results to memory. (b) An example of a sequential computation graph segment. (c) An example of a parallel computation graph segment.]

Parallel structures are not uncommon in DNNs; for example, the Inception model (Szegedy et al., 2015a) contains layers that can be evaluated simultaneously, and layers on different workers can run in parallel in model parallel setups (Dean et al., 2012). Figure 2c illustrates a parallel structure, where two convolutional layers (each followed by a pooling layer) are scheduled to be executed on two devices.

To model the computation time of parallel structures, we identify synchronization barriers before and after every parallel structure and introduce the notion of a supernode $U = \{G^{(i)}\}_{i=1}^{k}$: a set of disjoint subgraphs sandwiched by the synchronization barriers in the computation graph. When substituting the subgraphs with the supernode, the network is reduced to the sequential structure described above. For the supernode, the execution time $T(U)$ lies within the range $[\max_i T(G^{(i)}),\ \sum_i T(G^{(i)})]$, where the lower bound corresponds to perfect parallelization and the upper bound corresponds to sequential execution. Note that the execution time of a subgraph $T(G^{(i)})$ can be calculated recursively.
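The per-layer decomposition of Eq. (1) and the sequential/parallel composition rules translate directly into a small estimator. The sketch below is our own illustration, assuming device speed and IO bandwidth are supplied as plain numbers; it is not Paleo's implementation.

```python
def layer_time(flops, in_bytes, out_bytes, speed_flops, io_bw):
    """Eq. (1): T(u) = R(Pa(u)) + C(f, d) + W(f, d)."""
    read = in_bytes / io_bw        # fetch inputs produced by parent layers
    compute = flops / speed_flops  # FLOP count divided by device speed
    write = out_bytes / io_bw      # write outputs to local memory
    return read + compute + write

def network_time(layer_times):
    """Sequential structure: total time is the sum over all layers."""
    return sum(layer_times)

def supernode_bounds(subgraph_times):
    """Parallel structure: [perfect parallelization, fully sequential]."""
    return max(subgraph_times), sum(subgraph_times)

# Example: two parallel branches taking 3 ms and 5 ms between barriers.
lo, hi = supernode_bounds([3e-3, 5e-3])
print(lo, hi)  # 0.005 0.008
```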
3.1.3 COMPUTATION MODELING FOR LAYER OPERATIONS

In modern DNNs, the convolutional layer is one of the most commonly used and computationally intensive types of layer. For this reason, there have been many heavily optimized implementations (Chetlur et al., 2014; Vasilache et al., 2015; Lavin, 2016). Deriving plausible FLOP counts for other types of layers is a straightforward process, so in this section we consider two leading implementations of convolutional operations: matrix multiplication and Fast Fourier Transform.

Following the notation used by Chetlur et al. (2014), a 2D convolutional layer during forward propagation [3] takes an input feature map $D \in \mathbb{R}^{N \times C \times H \times W}$ (a batch of $N$ input feature maps with shape $H \times W$ and $C$ channels) and a set of convolutional filters $F \in \mathbb{R}^{K \times C \times R \times S}$ ($K$ filters with shape $R \times S$ and $C$ channels). It produces $N \times K$ feature maps, each of shape $P \times Q$, which can be calculated from the shapes of inputs and filters together with additional striding and padding parameters. The FLOP count for the convolution operation can be expressed as $2KCRSNPQ$. A commonly used implementation is to reduce convolution operations to matrix multiplications, which can be efficiently computed with well-optimized SGEMM routines on various platforms. Although these FLOP counts ignore auxiliary operations (e.g., indexing arithmetic in efficient implementations), they nonetheless provide a good estimate of FLOP counts for matrix multiplication implementations.

[3] Our arguments generalize to N-dimensional settings, and similar arguments apply for the backward pass.

Another implementation is based on the Fast Fourier Transform (Vasilache et al., 2015): both input feature maps and filters are transformed into the frequency domain, then element-wise multiplications are performed, followed by an inverse Fourier transform. This implementation introduces computation and memory overhead in the discrete Fourier transforms, but reduces the computational complexity to $O(NCKHW + (NC + CK + NK)HW\log(HW))$. Convolutional layers with large filters or a large problem size can benefit from FFT implementations. When counting FLOPs, it is not possible to get exact counts without knowing the underlying implementation details. In Paleo, we adopt the commonly used FFT complexity $5n\log_2 n$ as the FLOP count for complex-valued transformations of size $n$ (Cooley & Tukey, 1965). To account for the IO overhead caused by auxiliary memories, Paleo estimates the memory size required for complex-valued matrices in the frequency domain and incorporates it into the data reading and writing terms. For FFT-based implementations with tilings, Paleo estimates the number of tiles from the convolution specifications.

The choice of algorithm (matrix multiplication or FFT) is problem specific, as it depends on the filter size, strides, input size of the convolutional layers, and memory workspace. In order to derive reasonable estimates for user-specified DNNs comparable to real executions, it is important for Paleo to make decisions comparable to those of real-world systems. Two common approaches are employed in existing DNN software frameworks and libraries to choose between these algorithms: (i) using predefined heuristics based on offline benchmarks; (ii) autotuning to empirically evaluate the available algorithms on the given specification. Since autotuning is tied to platform and software implementations, for maximum generality Paleo by default takes the first approach. In particular, Paleo uses heuristics from cuDNN to make algorithm choices while also accounting for user preferences.
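The two FLOP models above can be evaluated side by side for any convolution shape. The following sketch implements the stated formulas; the example layer shape is an arbitrary assumption, and no cuDNN-style selection heuristic is modeled here.

```python
import math

def conv_flops_gemm(n, c, k, r, s, p, q):
    # Matrix-multiplication-based convolution: 2 * K * C * R * S * N * P * Q.
    return 2 * k * c * r * s * n * p * q

def conv_flops_fft(n, c, h, w, k):
    # FFT-based convolution: forward/inverse transforms of inputs, filters
    # and outputs, plus element-wise products in the frequency domain.
    hw = h * w
    per_transform = 5 * hw * math.log2(hw)   # 5 n log2 n per complex FFT
    return (n * c + c * k + n * k) * per_transform + n * c * k * hw

# Example shape: N=128, C=64, 28x28 inputs, K=128 filters of size 5x5,
# producing 28x28 outputs (same padding, stride 1).
gemm = conv_flops_gemm(128, 64, 128, 5, 5, 28, 28)
fft = conv_flops_fft(128, 64, 28, 28, 128)
print(f"GEMM: {gemm:.3g} FLOPs, FFT: {fft:.3g} FLOPs")
```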
3.2 COMMUNICATION MODELING

We now describe our modeling of communication among multiple workers. Let $|D|$ be the size of the data to be communicated between two workers, and define $B$ as the bandwidth of the communication channel. Then the communication time can simply be written as $T_{\mathrm{comm}} = |D| / B$. By using different bandwidth configurations, Paleo works for both scale-up setups (multiple GPUs on one machine) and scale-out setups (multiple machines in a cluster). Moreover, in data-parallel settings, an AllReduce operation is performed to synchronize model parameters across all workers after every backward pass. Paleo considers three communication schemes: OneToAll, Tree AllReduce, and Butterfly AllReduce. The communication time under these three schemes is described in Section 2.

3.3 PLATFORM PERCENT OF PEAK

Thus far, we have assumed that deep learning software platforms make perfect use of their underlying hardware: that the CPUs and GPUs are operating at "peak FLOPS", and that network and IO links are fully saturated. This has allowed our model to be platform independent.

However, this assumption is unreasonable in practice. For instance, achieving peak FLOPS is a difficult proposition, usually requiring customized libraries developed by organizations with intimate knowledge of the underlying hardware, e.g., Intel's MKL (int, 2009), ATLAS (Whaley & Petitet, 2005), and cuDNN. Even these specially tuned libraries may fall short of peak execution by as much as 40% (atl). Further, any computation done outside the scope of Paleo (e.g., job scheduling, data copying) will exacerbate the observed inefficiency in practice. Sometimes such inefficiencies are warranted from the perspective of ease of programmability or maintenance of the learning platforms.

Rather than trying to measure and capture every source of inefficiency in every learning framework, we take a small number of representative deep learning workloads, which contain convolutions, pooling, dropout, and fully connected layers, and run them for a short time on a single GPU. Given the observed total throughput and the estimated total throughput on this benchmark, we fit a scaling constant to estimate a platform percent of peak (PPP) parameter, which captures the average relative inefficiency of the platform compared to peak FLOPS. Highly specialized frameworks (e.g., cuDNN) will in general have a computational PPP that is close to 100%, while frameworks with higher overheads may have PPP constants closer to 50% or less.

We follow a similar benchmarking procedure to estimate the PPP of the communication link for TensorFlow. For the FireCaffe experiments, we estimate the communication PPP based on the empirical results for communication reported in Table 4 of that paper.
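A minimal sketch of how a PPP constant could be fitted and applied, assuming the measured and peak throughputs of a benchmark workload are already available; the numbers below are illustrative, not measurements.

```python
def fit_ppp(observed_throughput, peak_throughput):
    # PPP: average fraction of peak performance the platform actually achieves.
    return observed_throughput / peak_throughput

def effective_time(ideal_time, ppp):
    # Scale a platform-independent estimate by the platform's inefficiency.
    return ideal_time / ppp

# Example: a benchmark sustains 3.1 TFLOPS on a device with a 6.1 TFLOPS peak.
ppp = fit_ppp(3.1e12, 6.1e12)                # ~0.51
print(round(effective_time(0.040, ppp), 4))  # a 40 ms ideal pass becomes ~79 ms
```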
4 EXPERIMENTS

We now present empirical results which illustrate that Paleo is robust to the choice of network architecture, hardware, communication schemes, and parallelization strategies.

4.1 LAYER-WISE EVALUATION

We first compare Paleo-estimated runtimes with actual runtimes measured from TensorFlow [4] (Abadi et al., 2015) executions of two popular CNN architectures: the one-tower variant of AlexNet (Krizhevsky, 2014b) and the 16-layer VGG network (Simonyan & Zisserman, 2014). Paleo uses cuDNN heuristics to choose algorithms, and the auto-tuning mechanism in TensorFlow is disabled. Experiments are run on an NVIDIA TITAN X GPU with a 4 GB workspace limit.

[4] TensorFlow 0.9 with cuDNN 4 backend.

For convolutional and fully connected layers, we evaluate forward computation, backward computation with respect to layer inputs, and backward computation with respect to filters separately (see Figure 4 in the appendix for the plots of the layer-by-layer comparison). Table 1 shows a comparison of the full forward pass and backward pass with all layers included. Paleo's per-layer estimates are quite close to the actual TensorFlow execution, with only one layer, 'fc6', consistently being underestimated by Paleo [5]. In spite of this issue with 'fc6', our full pass estimates are remarkably accurate.

[5] Examining the TensorFlow execution with the NVIDIA profiler revealed that TensorFlow spent two-thirds of its reported 'fc6' time in transforming data layout between NHWC and NCHW when calling the underlying cuBLAS primitives.

Table 1: Full pass time of TensorFlow and Paleo estimation on AlexNet and VGG-16.

                               Forward pass (ms)   Backward pass (ms)
  AlexNet   TensorFlow              44.00               155.10
            Paleo Estimation        45.96               118.44
  VGG-16    TensorFlow             400.46              1117.48
            Paleo Estimation       435.46              1077.27

4.2 CASE STUDIES

We now revisit the questions posed at the beginning of the paper and demonstrate how Paleo can help in answering them. In this subsection we present three case studies. We extract experimental setups, including network architectures, hardware specifications, communication schemes, and parallelization strategies, from selected publications focusing on the scalability of CNNs. We then plug those configurations into Paleo and compare the simulated scalability results with the results reported in the original publications. Table 2 summarizes the configurations of Paleo in these experiments.

Table 2: Paleo configurations used in the case studies.

                        Case 1             Case 2             Case 3
  Net                   NiN                Inception v3       AlexNet
  Device                NVIDIA K20X        NVIDIA K20         NVIDIA K20
  Workers               Up to 128          Up to 100          Up to 8
  Bandwidth             70 Gbps            10 Gbps            6 GB/s
  Communication         Tree AllReduce     Parameter Server   Various
  Parallelization       Data Parallelism   Data Parallelism   Hybrid
  Platform              Caffe              TensorFlow         cuda-convnet2
  One Step Time [6]
    Paleo Estimation    1918 ms            4269 ms            402 ms
    Reported Time [7]   2275 ms            –                  418 ms

[6] Total time of forward pass, backward pass, and parameter update for one mini-batch on one worker.
[7] Reported times for Cases 1 and 3 are derived approximately from information in the publications. For Case 2 no run time information is provided.

4.2.1 CASE 1: NIN WITH FIRECAFFE

FireCaffe (Iandola et al., 2016) adopts the Tree AllReduce communication scheme when training a NiN model (Lin et al., 2013) in data parallel settings with up to 128 servers on the Titan supercomputer. They report a 38× speedup for NiN with batch size 1024 relative to single-GPU performance. Table 3 shows the results from Paleo compared with the results reported by FireCaffe.

Table 3: Comparison between Paleo estimation and FireCaffe for training NiN.

                          FireCaffe               Paleo Estimation
  Workers   Batch size    Train Time   Speedup    Train Time   Speedup
  1         256           5.8 days     1×         4.9 days     1×
  32        256           11 hours     13×        7.6 hours    15.5×
  32        1024          6 hours      23×        4.6 hours    25.3×
  128       1024          3.6 hours    39×        2.3 hours    51.6×

4.2.2 CASE 2: INCEPTION WITH TENSORFLOW

Murray et al. (2016) reported their results in synchronously training the Inception model (Szegedy et al., 2015b) with TensorFlow, achieving a 56× speedup with 100 workers. They apply a weak scaling strategy with batch size 256 to keep GPUs saturated. Although Murray et al. (2016) leveraged a distributed parameter server rather than one of the three communication schemes considered in Paleo, the communication cost of Butterfly AllReduce can be viewed as a lower bound (Zhao & Canny, 2013). To account for the fact that they train with worker nodes each of which has 8 GPUs, we assume a linear speedup for GPUs on the same host.
Figure 3a shows a comparison between the reported speedups and the Paleo-estimated speedups. For absolute runtime, in one of the experiments, their model completes 20 epochs of training after 100 hours when using 8 Tesla K40s and a batch size of 256. Paleo projects a 111-hour runtime under the same setting.

4.2.3 CASE 3: ALEXNET WITH HYBRID PARALLELISM

Krizhevsky (2014b) describes a hybrid model and data parallelism approach for training AlexNet using up to 8 GPUs with a weak scaling strategy. In his setup, each of the two CPUs connects to 4 GPUs, and the communication bandwidth is penalized by 50% across the two groups, as mentioned in the paper. Table 4 shows the comparison between Paleo's projections and the original results, which are quite similar. Moreover, whereas Krizhevsky (2014b) does not quantify the speedup of hybrid parallelism relative to strict data parallelism, Paleo simulates training the entire network with only data parallelism (see the last two columns of Table 4) in order to estimate this speedup.

Table 4: Comparison between Paleo estimation and Krizhevsky (2014b) for training AlexNet.

             One Weird Trick           Paleo Estimation
             Hybrid parallelism        Hybrid parallelism        Data parallelism
  Workers    Train Time (h)  Speedup   Train Time (h)  Speedup   Train Time (h)  Speedup
  1          98.95           1×        96.31           1×        96.31           1×
  2          50.24           1.95×     49.57           1.94×     55.90           1.72×
  4          26.20           3.74×     25.42           3.79×     32.82           3.03×
  8          16.68           6.25×     14.37           6.70×     23.65           5.40×

4.3 HYPOTHETICAL SETUPS

In this subsection, we use Paleo in two hypothetical setups to analyze the scalability of AlexNet and a GAN model under different communication schemes.

4.3.1 ALEXNET IN A CLOUD-BASED SETUP

In this study, we present an analysis of data parallel training of AlexNet. We assume a modern cloud setup with a cluster of servers, each equipped with an NVIDIA K80 GPU connected to a 20 Gbps network. In contrast to the Inception model with 23 million parameters, the one-tower variant of AlexNet has 50 million parameters and therefore doubles the communication workload when training with data parallelism.

We show strong scaling for all three communication schemes in Figure 3c. Even when assuming a fairly large batch size of 2048, which is beneficial in distributed settings, we see very modest speedups. The OneToAll scheme achieves a maximum speedup of less than 2× using 4 workers, while the communication-efficient Butterfly AllReduce scheme achieves a maximum speedup of roughly 5× when using 32 workers. The weak scaling results, shown in Figure 3b, are drastically improved, as we observe nearly linear speedups as we increase the number of workers. However, it is important to note that we are increasing the effective batch size as we increase the number of workers, and it is well known that training with large effective batch sizes can yield models with substandard accuracy (Breuel, 2015).

[Figure 3: Comparison of Paleo projected speedups for various networks under different scaling strategies and communication schemes. Panels: (a) Inception / weak scaling; (b) AlexNet / weak scaling; (c) AlexNet / strong scaling; (d) GAN / strong scaling. Each panel plots estimated speedup against the number of workers for the OneToAll, Tree AllReduce, and Butterfly AllReduce schemes; panel (a) also includes the speedups reported by Murray et al. (2016).]
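The strong-scaling behavior in these projections follows from combining the computation and communication models: per-worker compute shrinks with the number of workers while synchronization cost grows. Below is a deliberately simplified sketch using the Butterfly AllReduce cost from Section 2; the compute and per-link times are made-up example values, and per-worker compute is assumed to divide perfectly.

```python
import math

def strong_scaling_speedup(compute_time, per_link_time, workers):
    """Speedup when a fixed global batch is split across `workers` devices."""
    comm = math.log2(workers) * per_link_time if workers > 1 else 0.0
    parallel_step = compute_time / workers + comm
    return compute_time / parallel_step

# Example: 1.2 s of single-GPU compute per step, 20 ms per link for gradients.
for k in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"K={k:3d}  speedup={strong_scaling_speedup(1.2, 0.020, k):5.2f}x")
```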
4.3.2 GAN ARCHITECTURE

Paleo can be applied to architectures other than CNNs. We profile a generative adversarial network (GAN) inspired by Radford et al. (2015) for the LSUN dataset, with the same hardware assumptions as in the previous case study. Table 5 shows that the Paleo estimates are close to the empirical TensorFlow run times for both the discriminator and generator networks. Figure 3d plots the estimated speedups for training the model with a batch size of 256 on up to 128 workers under strong scaling. Without communication-intensive fully-connected layers, training this GAN architecture is more scalable than AlexNet, yet Paleo still predicts only a sub-linear 8× speedup with 64 workers.

Table 5: Full pass time of the discriminator and generator in a GAN architecture.

                                      Forward pass (ms)   Backward pass (ms)
  Discriminator   TensorFlow               30.19                77.39
                  Paleo Estimation         27.55                79.25
  Generator       TensorFlow              110.11               374.18
                  Paleo Estimation        117.02               324.49

5 CONCLUSION

We introduced Paleo, an analytical performance model for exploring the space of scalable deep learning systems. By extracting the computational requirements carried by neural network architectures and mapping them to the design space of software, hardware, and communication strategies, Paleo can effectively and accurately model the expected scalability and performance of a putative deep learning system.
H1GUJz-Ne
7: Good paper, accept
This paper introduces an analytical performance model to estimate the training and evaluation time of a given network under different software, hardware, and communication strategies. The paper is very clear. The authors included many degrees of freedom in the variables used to calculate the run-time of a network, such as the number of workers, bandwidth, platform, and parallelization strategy. Their results are consistent with results reported in the literature. Furthermore, their code is open-source and the live demo looks good. The authors mentioned in their comment that they will allow users to upload customized networks and model splits in coming releases of the interface; the tool could then become very useful. It would be interesting to see some newer network architectures with skip connections, such as ResNet and DenseNet.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SyVVJ85lg
ICLR.cc/2017/conference
2017
Paleo: A Performance Model for Deep Neural Networks
["Hang Qi", "Evan R. Sparks", "Ameet Talwalkar"]
SyzvzN7Qx
Technically sound. Only useful under the assumption that the code is released.
6: Marginally above acceptance threshold
This paper is technically sound. It highlights well the strengths and weaknesses of the proposed simplified model. In terms of impact, its novelty is limited, in the sense that the authors did seemingly the right thing and obtained the expected outcomes. The idea of modeling deep learning computation is not in itself particularly novel. As a companion paper to an open source release of the model, it would meet my bar of acceptance in the same vein as a paper describing a novel dataset, which might not provide groundbreaking insights, yet be generally useful to the community. In the absence of released code, even if the authors promise to release it soon, I am more ambivalent, since that's where all the value lies. It would also be a different story if the authors had been able to use this framework to make novel architectural decisions that improved training scalability in some way, and incorporated such new insights in the paper. UPDATED: code is now available. Revised review accordingly.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ryelgY5eg
ICLR.cc/2017/conference
2017
Optimal Binary Autoencoding with Pairwise Correlations
["Akshay Balsubramani"]
We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion.
["Theory", "Unsupervised Learning", "Games"]
ABSTRACT

We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion.

1 INTRODUCTION

Consider a general autoencoding scenario, in which an algorithm learns a compression scheme for independently, identically distributed (i.i.d.) $V$-dimensional bit vector data $\hat{x}^{(1)}, \ldots, \hat{x}^{(n)}$. For some encoding dimension $H$, the algorithm encodes each data example $\hat{x}^{(i)} = (\hat{x}^{(i)}_1, \ldots, \hat{x}^{(i)}_V)^\top$ into an $H$-dimensional representation $e^{(i)}$, with $H < V$. It then decodes each $e^{(i)}$ back into a reconstructed example $\tilde{x}^{(i)}$ using some small amount of additional memory, and is evaluated on the quality of the reconstruction by the cross-entropy loss commonly used to compare bit vectors. A good autoencoder learns to compress the data into $H$ bits so as to reconstruct it with low loss.

When the loss is squared reconstruction error and the goal is to compress data in $\mathbb{R}^V$ to $\mathbb{R}^H$, this is often accomplished with principal component analysis (PCA), which projects the input data on the top $H$ eigenvectors of their covariance matrix (Bourlard & Kamp (1988); Baldi & Hornik (1989)). These eigenvectors in $\mathbb{R}^V$ constitute $VH$ real values of additional memory needed to decode the compressed data in $\mathbb{R}^H$ back to the reconstructions in $\mathbb{R}^V$, which are linear combinations of the eigenvectors. Crucially, this total additional memory does not depend on the amount of data $n$, making it applicable when data are abundant.

This paper considers a similar problem, except using bit-vector data and the cross-entropy reconstruction loss. Since we are compressing samples of i.i.d. $V$-bit data into $H$-bit encodings, a natural approach is to remember the pairwise statistics: the $VH$ average correlations between pairs of bits in the encoding and decoding, constituting as much additional memory as the eigenvectors used in PCA. The decoder uses these, along with the $H$-bit encoded data, to produce $V$-bit reconstructions.

We show how to efficiently learn the autoencoder with the worst-case optimal loss in this scenario, without any further assumptions, parametric or otherwise. It has some striking properties.

The decoding function is identical in form to the one used in a standard binary autoencoder with one hidden layer (Bengio et al. (2013a)) and cross-entropy reconstruction loss. Specifically, each bit $v$ of the decoding is the output of a logistic sigmoid artificial neuron of the encoded bits, with some learned weights $w_v \in \mathbb{R}^H$. This form emerges as the uniquely optimal decoding function, and is not assumed as part of any explicit model.

We show that the worst-case optimal reconstruction loss suffered by the autoencoder is convex in these decoding weights $W = \{w_v\}_{v \in [V]}$, and in the encoded representations $E$. Though it is not jointly convex in both, the situation still admits a natural and efficient optimization algorithm in which the loss is alternately minimized in $E$ and $W$ while the other is held fixed.

(Footnote: Most of the work was done as a PhD student at UC San Diego.)
The algorithm is practical and performs well empirically, learning incrementally from minibatches of data in a stochastic optimization setting.

1.1 NOTATION

The observed data and encodings can be written in matrix form, representing bits as $\pm 1$:

$$\hat{X} = \begin{pmatrix} \hat{x}^{(1)}_1 & \cdots & \hat{x}^{(n)}_1 \\ \vdots & \ddots & \vdots \\ \hat{x}^{(1)}_V & \cdots & \hat{x}^{(n)}_V \end{pmatrix} \in [-1,1]^{V \times n}, \qquad E = \begin{pmatrix} e^{(1)}_1 & \cdots & e^{(n)}_1 \\ \vdots & \ddots & \vdots \\ e^{(1)}_H & \cdots & e^{(n)}_H \end{pmatrix} \in [-1,1]^{H \times n} \quad (1)$$

Here the encodings are allowed to be randomized, represented by values in $[-1,1]$ instead of just the two values $\{-1,1\}$; e.g. $e^{(1)}_i = \frac{1}{2}$ is $+1$ w.p. $\frac{3}{4}$ and $-1$ w.p. $\frac{1}{4}$. The data in $X$ are also allowed to be randomized, which we will see essentially loses no generality (Appendix B). We write the columns of $\hat{X}, E$ as $\hat{x}^{(i)}, e^{(i)}$ for $i \in [n]$ (where $[s] := \{1,\ldots,s\}$), representing the data. The rows are written as $\hat{x}_v = (x^{(1)}_v, \ldots, x^{(n)}_v)^\top$ for $v \in [V]$ and $e_h = (e^{(1)}_h, \ldots, e^{(n)}_h)^\top$ for $h \in [H]$.

We also consider the correlation of each bit $h$ of the encoding with each decoded bit $v$ over the data, i.e. $b_{v,h} := \frac{1}{n}\sum_{i=1}^n x^{(i)}_v e^{(i)}_h$. This too can be written in matrix form as $B := \frac{1}{n}\hat{X}E^\top \in \mathbb{R}^{V \times H}$, whose rows and columns we respectively write as $b_v = (b_{v,1},\ldots,b_{v,H})^\top$ over $v \in [V]$ and $b_h = (b_{1,h},\ldots,b_{V,h})^\top$ over $h \in [H]$; the indexing will be clear from context.

As alluded to earlier, the loss incurred on any example $x^{(i)}$ is the cross-entropy between the example and its reconstruction $\tilde{x}^{(i)}$, in expectation over the randomness in $x^{(i)}$. Defining $\ell_{\pm}(\tilde{x}^{(i)}_v) = \ln \frac{2}{1 \pm \tilde{x}^{(i)}_v}$ (the partial losses to true labels $\pm 1$), the loss is written as:

$$\ell(x^{(i)}, \tilde{x}^{(i)}) := \sum_{v=1}^V \left[\left(\frac{1+x^{(i)}_v}{2}\right)\ell_+(\tilde{x}^{(i)}_v) + \left(\frac{1-x^{(i)}_v}{2}\right)\ell_-(\tilde{x}^{(i)}_v)\right] \quad (2)$$

In addition, define a potential well $\Psi(m) := \ln(1+e^{-m}) + \ln(1+e^{m})$ with derivative $\Psi'(m) = \frac{1-e^{-m}}{1+e^{-m}}$. Univariate functions like this are applied componentwise to matrices in this paper.

1.2 PROBLEM SETUP

With these definitions, the autoencoding problem we address can be precisely stated as two tasks, encoding and decoding. These share only the side information $B$. Our goal is to perform these steps so as to achieve the best possible guarantee on reconstruction loss, with no further assumptions. This can be written as a zero-sum game of an autoencoding algorithm seeking to minimize loss against an adversary, by playing encodings and reconstructions:

- Using $\hat{X}$, the algorithm plays (randomized) encodings $E$, resulting in pairwise correlations $B$.
- Using $E$ and $B$, the algorithm plays reconstructions $\tilde{X} = (\tilde{x}^{(1)}, \ldots, \tilde{x}^{(n)}) \in [-1,1]^{V \times n}$.
- Given $\tilde{X}, E, B$, the adversary plays $X \in [-1,1]^{V \times n}$ to maximize reconstruction loss $\frac{1}{n}\sum_{i=1}^n \ell(x^{(i)}, \tilde{x}^{(i)})$.

To incur low loss, the algorithm must use an $E$ and $B$ such that no adversary playing $X$ can inflict higher loss. The algorithm never sees $X$, which represents the worst the data could be given the algorithm's incomplete memory of it ($E, B$) and reconstructions ($\tilde{X}$).

We find the autoencoding algorithm's best strategy in two parts. First, we find the optimal decoding function of any encodings $E$ given $B$, in Section 2. Then, we use the resulting optimal reconstruction function to outline the best encoding procedure, i.e. one that finds the $E, B$ that lead to the best reconstruction, in Section 3.1. Combining these ideas yields an autoencoding algorithm in Section 3.2 (Algorithm 1), where its implementation and interpretation are specified. Further discussion and related work in Section 4 are followed by more extensions of the framework in Section 5. Experiments in Section 6 show extremely competitive results with equivalent fully-connected autoencoders trained with backpropagation.

2 OPTIMALLY DECODING AN ENCODED REPRESENTATION

To address the game of Section 1.2, we first assume $E$ and $B$ are fixed, and derive the optimal decoding rule given this information.
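To make the notation concrete, here is a small NumPy sketch (our illustration, not the paper's released code; all variable names are ours) evaluating the correlation matrix $B$, the potential well $\Psi$, and the cross-entropy loss of Eq. (2).

import numpy as np

def correlations(X, E):
    """B = (1/n) X E^T (Sec. 1.1), for X in [-1,1]^(V x n), E in [-1,1]^(H x n)."""
    return X @ E.T / X.shape[1]

def psi(m):
    """Potential well psi(m) = ln(1+e^-m) + ln(1+e^m), stably as |m| + 2 ln(1+e^-|m|)."""
    a = np.abs(m)
    return a + 2.0 * np.log1p(np.exp(-a))

def cross_entropy_loss(X, X_tilde):
    """Eq. (2) averaged over examples: bits in {-1,+1}, reconstructions in (-1,1)."""
    lp = np.log(2.0 / (1.0 + X_tilde))   # partial loss ell_+ to true label +1
    lm = np.log(2.0 / (1.0 - X_tilde))   # partial loss ell_- to true label -1
    return np.mean(np.sum((1 + X) / 2 * lp + (1 - X) / 2 * lm, axis=0))

# Tiny usage example with random +/-1 data and encodings (V=6, H=2, n=4).
rng = np.random.default_rng(0)
X = np.sign(rng.standard_normal((6, 4)))
E = np.sign(rng.standard_normal((2, 4)))
B = correlations(X, E)   # every entry b_{v,h} lies in [-1, 1]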
We show in this section that the form of this optimal decoder is precisely the same as in a classical autoencoder: having learned a weight vector $w_v \in \mathbb{R}^H$ for each $v \in [V]$, the $v$th bit of each reconstruction $\tilde{x}^{(i)}$ is expressed as a logistic function of a $w_v$-weighted combination of the $H$ encoded bits $e^{(i)}$ (a logistic artificial neuron with weights $w_v$). The weight vectors are learned by convex optimization, despite the nonconvexity of the transfer functions.

To develop this, we minimize the worst-case reconstruction error, where $X$ is constrained by our prior knowledge that $B = \frac{1}{n}XE^\top$, i.e. $\frac{1}{n}E x_v = b_v \;\forall v \in [V]$. This can be written as a function of $E$:

$$L_B(E) := \min_{\tilde{x}^{(1)},\ldots,\tilde{x}^{(n)} \in [-1,1]^V} \;\; \max_{\substack{x^{(1)},\ldots,x^{(n)} \in [-1,1]^V \\ \forall v \in [V]:\, \frac{1}{n}E x_v = b_v}} \;\; \frac{1}{n}\sum_{i=1}^n \ell(x^{(i)}, \tilde{x}^{(i)}) \quad (3)$$

We solve this minimax problem for the optimal reconstructions played by the minimizing player in (3), written as $\tilde{x}^{(1)}, \ldots, \tilde{x}^{(n)}$.

Theorem 1. Define the bitwise slack function $\gamma_E(w, b) := -b^\top w + \frac{1}{n}\sum_{i=1}^n \Psi(w^\top e^{(i)})$, which is convex in $w$. W.r.t. any $b_v$, this has minimizing weights $w_v := w_v(E, B) := \arg\min_{w \in \mathbb{R}^H} \gamma_E(w, b_v)$. Then the minimax value of the game (3) is $L_B(E) = \frac{1}{2}\sum_{v=1}^V \gamma_E(w_v, b_v)$. For any example $i \in [n]$, the minimax optimal reconstruction can be written for any bit $v$ as $\tilde{x}^{(i)}_v := \frac{1 - e^{-w_v^\top e^{(i)}}}{1 + e^{-w_v^\top e^{(i)}}}$.

This tells us that the optimization problem of finding the minimax optimal reconstructions $\tilde{x}^{(i)}$ is extremely convenient in several respects. The learning problem decomposes over the $V$ bits in the decoding, reducing to solving for a weight vector $w_v \in \mathbb{R}^H$ for each bit $v$, by optimizing each bitwise slack function. Given the weights, the optimal reconstruction of any example $i$ can be specified by a layer of logistic sigmoid artificial neurons of its encoded bits, with $w_v^\top e^{(i)}$ as the bitwise logits.

Hereafter, we write $W \in \mathbb{R}^{V \times H}$ as the matrix of decoding weights, with rows $\{w_v\}_{v=1}^V$. In particular, the optimal decoding weights $W(E, B)$ are the matrix with rows $\{w_v(E, B)\}_{v=1}^V$.

3 LEARNING AN AUTOENCODER

3.1 FINDING AN ENCODED REPRESENTATION

Having computed the optimal decoding function in the previous section given any $E$ and $B$, we now switch perspectives to the encoder, which seeks to compress the input data $\hat{X}$ into encoded representations $E$ (from which $B$ is easily calculated to pass to the decoder). We seek to find $(E, B)$ to ensure the lowest worst-case reconstruction loss after decoding; recall that this is $L_B(E)$ from (3). Observe that $\frac{1}{n}\hat{X}E^\top = B$ by definition, and that the encoder is given $\hat{X}$. Therefore, by using Thm. 1 and substituting $b_v = \frac{1}{n}E\hat{x}_v \;\forall v \in [V]$,

$$L_B(E) = \frac{1}{2n}\sum_{i=1}^n \sum_{v=1}^V \left[-\hat{x}^{(i)}_v (w_v^\top e^{(i)}) + \Psi(w_v^\top e^{(i)})\right] =: L(W, E) \quad (4)$$

So it is convenient to define the feature distortion (see footnote 1) for any $v \in [V]$ with respect to $W$, between any example $x$ and its encoding $e$:

$$\Gamma^W_v(e, x) := -x_v w_v^\top e + \Psi(w_v^\top e) \quad (5)$$

From the above discussion, the best $E$ given any decoding $W$, written as $E(W)$, solves the minimization

$$\min_{E \in [-1,1]^{H \times n}} L(W, E) = \frac{1}{2n}\sum_{i=1}^n \min_{e^{(i)} \in [-1,1]^H} \sum_{v=1}^V \Gamma^W_v(e^{(i)}, \hat{x}^{(i)})$$

which immediately yields the following result.

Proposition 2. Define the optimal encodings for decoding weights $W$ as $E(W) := \arg\min_{E \in [-1,1]^{H \times n}} L(W, E)$. Then $e^{(i)}(W)$ can be computed separately for each example $\hat{x}^{(i)} \in [-1,1]^V$, minimizing its total feature distortion over the decoded bits w.r.t. $W$:

$$\mathrm{ENC}(\hat{x}^{(i)}, W) := e^{(i)}(W) := \arg\min_{e \in [-1,1]^H} \sum_{v=1}^V \Gamma^W_v(e, \hat{x}^{(i)}) \quad (6)$$

Observe that the encoding function $\mathrm{ENC}(\hat{x}^{(i)}, W)$ can be efficiently computed to any desired precision, since the feature distortion $\Gamma^W_v(e, \hat{x}^{(i)})$ of each bit $v$ is convex and Lipschitz in $e$; an $L^\infty$ error of $\epsilon$ can be reached in $O(\epsilon^{-2})$ linear-time first-order optimization iterations. Note that the encodings need not be bits, and can be, e.g., unconstrained in $\mathbb{R}^H$ instead; the proof of Thm. 1 assumes no structure on them, and the optimization will proceed as above but without projecting into the hypercube.

(Footnote 1: Noting that $\Psi(w_v^\top e) \geq |w_v^\top e|$, we see that $\Gamma^W_v(e, \hat{x}) \geq |w_v^\top e|\,(1 - \operatorname{sgn}(w_v^\top e)\,\hat{x}_v)$. So the optimizer tends to change $e$ so that $w_v^\top e$ matches signs with $\hat{x}_v$, motivating the name.)
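As a concrete illustration of Eq. (6), the following NumPy sketch (hypothetical code, not the paper's implementation; step size and iteration count are arbitrary choices) computes ENC by projected gradient descent on the total feature distortion.

import numpy as np

def psi_prime(m):
    """psi'(m) = (1 - e^-m) / (1 + e^-m), which equals tanh(m/2)."""
    return np.tanh(m / 2.0)

def encode(x, W, steps=200, lr=0.1):
    """ENC(x, W) of Eq. (6): projected gradient descent on
    sum_v [-x_v (w_v . e) + psi(w_v . e)] over e in [-1,1]^H."""
    e = np.zeros(W.shape[1])
    for _ in range(steps):
        m = W @ e                               # bitwise logits w_v . e, shape (V,)
        grad = W.T @ (psi_prime(m) - x)         # gradient of the total distortion in e
        e = np.clip(e - lr * grad, -1.0, 1.0)   # gradient step, then project onto hypercube
    return e

For unconstrained real-valued encodings, one simply drops the np.clip projection, matching the remark above.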
3.2 AN AUTOENCODER LEARNING ALGORITHM

Our ultimate goal is to minimize the worst-case reconstruction loss. As we have seen in (3) and (6), it is convex in the encoding $E$ and in the decoding parameters $W$, each of which can be fixed while minimizing with respect to the other. This suggests a learning algorithm that alternately performs two steps: finding encodings $E$ that minimize $L(W, E)$ as in (6) with a fixed $W$, and finding decoding parameters $W(E, B)$, as given in Algorithm 1.

Algorithm 1 Pairwise Correlation Autoencoder (PC-AE)
Input: Size-$n$ dataset $\hat{X}$, number of epochs $T$
Initialize $W_0$ (e.g. with each element being i.i.d. $\mathcal{N}(0,1)$)
for $t = 1$ to $T$ do
  Encode each example to ensure accurate reconstruction using weights $W_{t-1}$, and compute the associated pairwise bit correlations $B_t$:
    $\forall i \in [n]: \; [e^{(i)}]_t = \mathrm{ENC}(\hat{x}^{(i)}, W_{t-1}); \qquad B_t = \frac{1}{n}\hat{X}E_t^\top$
  Update weight vectors $[w_v]_t$ for each $v \in [V]$ to minimize the slack function, using encodings $E_t$:
    $\forall v \in [V]: \; [w_v]_t = \arg\min_{w \in \mathbb{R}^H} \left[-[b_v]_t^\top w + \frac{1}{n}\sum_{i=1}^n \Psi(w^\top e^{(i)}_t)\right]$
end for
Output: Weights $W_T$

3.3 EFFICIENT IMPLEMENTATION

Our derivation of the encoding and decoding functions involves no model assumptions at all, only using the minimax structure and pairwise statistics that the algorithm is allowed to remember. Nevertheless, the encoders and decoders can be learned and implemented efficiently.

Decoding is a convex optimization in $H$ dimensions, which can be done in parallel for each bit $v \in [V]$. This is relatively easy to solve in the parameter regime of primary interest when data are abundant, in which $H < V \ll n$. Similarly, encoding is also a convex optimization problem in only $H$ dimensions. If the data examples are instead sampled in minibatches of size $n$, they can be encoded in parallel, with a new minibatch being sampled to start each epoch $t$. The number of examples $n$ (per batch) is essentially only limited by $nH$, the number of compressed representations that fit in memory.

So far in this paper, we have stated our results in the transductive setting, in which all data are given together a priori, with no assumptions whatsoever made about the interdependences between the $V$ features. However, PC-AE operates much more efficiently than this might suggest. Crucially, the encoding and decoding tasks both depend on $n$ only to average a function of $x^{(i)}$ or $e^{(i)}$ over $i \in [n]$, so they can both be solved by stochastic optimization methods that use first-order gradient information, like variants of stochastic gradient descent (SGD). We find it remarkable that the minimax optimal encoding and decoding can be efficiently learned by such methods, which do not scale computationally in $n$. Note that the result of each of these steps involves $\Omega(n)$ outputs ($E$ and $\tilde{X}$), which are all coupled together in complex ways.

Furthermore, efficient first-order convex optimization methods for both encoding and decoding steps manipulate more intermediate gradient-related quantities, with facile interpretations. For details, see Appendix A.2.

3.4 CONVERGENCE AND WEIGHT REGULARIZATION

As we noted previously, the objective function of the optimization is biconvex. This means that the alternating minimization algorithm we specify is an instance of alternating convex search, shown in that literature to converge under broad conditions (Gorski et al. (2007)).
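A compact NumPy sketch of the alternating loop follows. It is illustrative and hypothetical: it reuses the encode and psi_prime helpers sketched above, and replaces the paper's Adagrad updates with plain full-batch gradient steps.

import numpy as np

def pc_ae(X, H, epochs=10, w_steps=200, lr=0.1, seed=0):
    """Alternating minimization in the spirit of Algorithm 1. X: (V, n) +/-1 bits."""
    V, n = X.shape
    W = np.random.default_rng(seed).standard_normal((V, H))
    for _ in range(epochs):
        # Encoding step: e^(i) = ENC(x^(i), W_{t-1}); then B_t = (1/n) X E^T.
        E = np.stack([encode(X[:, i], W) for i in range(n)], axis=1)   # (H, n)
        B = X @ E.T / n
        # Decoding step: minimize gamma(w_v, b_v) = -b_v.w + (1/n) sum_i psi(w.e^(i))
        # for all v at once; the (V, H) gradient is (1/n) psi'(W E) E^T - B.
        for _ in range(w_steps):
            W -= lr * (psi_prime(W @ E) @ E.T / n - B)
    return W

def decode(e, W):
    """Theorem 1's minimax optimal reconstruction: x~_v = tanh((w_v . e) / 2)."""
    return np.tanh(W @ e / 2.0)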
It is not guaranteed to converge to the global optimum, but each iteration will monotonically decrease the objective function. In light of our introductory discussion, the properties and rate of such convergence would be interesting to compare to stochastic optimization algorithms for PCA, which converge efficiently under broad conditions (Balsubramani et al. (2013); Shamir (2016)).

The basic game used so far has assumed perfect knowledge of the pairwise correlations, leading to equality constraints $\forall v \in [V]: \frac{1}{n}E x_v = b_v$. This makes sense in PC-AE, where the encoding phase of each epoch gives the exact $B_t$ for the decoding phase. However, in other stochastic settings, as for denoising autoencoders (see Sec. 5.2), it may be necessary to relax this constraint. A relaxed constraint of $\left\|\frac{1}{n}E x_v - b_v\right\|_\infty \leq \epsilon$ exactly corresponds to an extra additive regularization term of $\epsilon \|w_v\|_1$ on the corresponding weights in the convex optimization used to find $W$ (Appendix D.1). Such regularization leads to provably better generalization (Bartlett (1998)) and is often practical to use, e.g. to encourage sparsity. But we do not use it for our PC-AE experiments in this paper.

4 DISCUSSION AND RELATED WORK

Our approach PC-AE is quite different from existing autoencoding work in several ways.

First and foremost, we posit no explicit decision rule, and avoid optimizing the highly non-convex decision surface traversed by traditional autoencoding algorithms that learn with backpropagation (Rumelhart et al. (1986)). The decoding function, given the encodings, is a single layer of artificial neurons only because of the minimax structure of the problem when minimizing worst-case loss. This differs from the reasoning typically used in neural network research (see Jordan (1995)), in which the loss is the negative log-likelihood (NLL) of the joint probability, which is assumed to follow a form specified by logistic artificial neurons and their weights. We instead interpret the loss in the usual direct way as the NLL of the predicted probability of the data given the visible bits, and avoid any assumptions on the decision rule (e.g. not monotonicity in the score $w_v^\top e^{(i)}$, or even dependence on such a score). This justification of artificial neurons, as the minimax optimal decision rules given information on pairwise correlations, is one of our more distinctive contributions (see Sec. 5.1).

Crucially, we make no assumptions whatsoever on the form of the encoding or decoding, except on the memory used by the decoding. Some such "regularizing" restriction is necessary to rule out the autoencoder just memorizing the data, and is typically expressed by assuming a model class of compositions of artificial neuron layers. We instead impose it axiomatically by limiting the amount of information transmitted through $B$, which does not scale in $n$; but we do not restrict how this information is used. This confers a clear theoretical advantage, allowing us to attain the strongest robust loss guarantee among all possible autoencoders that use the correlations $B$.

More importantly in practice, avoiding an explicit model class means that we do not have to optimize the typically non-convex model, which has long been a central issue for backpropagation-based learning methods (e.g. Dauphin et al. (2014)). Prior work related in spirit has attempted to avoid this through convex relaxations, including for multi-layer optimization under various structural assumptions (Aslan et al. (2014); Zhang et al. (2016)), and when the number of hidden units is varied by the algorithm (Bengio et al. (2005); Bach (2014)).
Our approach also isolates the benefit of higher $n$ in dealing with overfitting, as the pairwise correlations $B$ can be measured progressively more accurately as $n$ increases. In this respect, we follow a line of research using such pairwise correlations to model arbitrary higher-order structure among visible units, rooted in early work on (restricted) Boltzmann Machines (Ackley et al. (1985); Smolensky (1986); Rumelhart & McClelland (1987); Freund & Haussler (1992)). More recently, theoretical algorithms have been developed with the perspective of learning from the correlations between units in a network, under various assumptions on the activation function, architecture, and weights, for both deep (Arora et al. (2014)) and shallow networks (using tensor decompositions, e.g. Livni et al. (2014); Janzamin et al. (2015)). Our use of ensemble aggregation techniques (from Balsubramani & Freund (2015a; 2016)) to study these problems is anticipated in spirit by prior work as well, as discussed at length by Bengio (2009) in the context of distributed representations.

4.1 OPTIMALITY, OTHER ARCHITECTURES, AND DEPTH

We have established that a single layer of logistic artificial neurons is an optimal decoder, given only indirect information about the data through pairwise correlations. This is not a claim that autoencoders need only a single-layer architecture in the worst case. Sec. 3.1 establishes that the best representations $E$ are the solution to a convex optimization, with no artificial neurons involved in computing them from the data. Unlike the decoding function, the optimal encoding function ENC cannot be written explicitly in terms of artificial neurons, and is incomparable to existing architectures (though it is analogous to PCA in prescribing an efficient operation that yields the encodings from unlabeled data). Also, the encodings are only optimal given the pairwise correlations; training algorithms like backpropagation, which communicate other knowledge of the data through derivative composition, can learn final decoding layers that outperform ours, as we see in experiments.

In our framework so far, we explore using all the pairwise correlations between hidden and visible bits to inform learning by constraining the adversary, resulting in a Lagrange parameter (a weight) for each constraint. These $VH$ weights $W$ constitute the parameters of the optimal decoding layer, describing a fully connected architecture. If just a select few of these correlations were used, only they would constrain the adversary in the minimax problem of Sec. 2, so weights would only be introduced for them, giving rise to sparser architectures.

Our central choices, to store only pairwise correlations and minimize worst-case reconstruction loss, play a similar regularizing role to explicit model assumptions, and other autoencoding methods may achieve better performance on data for which these choices are too conservative, by e.g. making distributional assumptions on the data. From our perspective, other architectures with more layers, particularly highly successful ones like convolutional, recurrent, residual, and ladder networks (LeCun et al. (2015); He et al. (2015); Rasmus et al. (2015)), lend the autoencoding algorithm more power by allowing it to measure more nuanced correlations using more parameters, which decreases the worst-case loss.
Applying our approach with these would be interesting future work.

Extending this paper's convenient minimax characterization to deep representations with empirical success is a very interesting open problem. Prior work on stacking autoencoders/RBMs (Vincent et al. (2010)) and our learning algorithm PC-AE suggest that we could train a deep network in alternating forward and backward passes. Using this paper's ideas, the forward pass would learn the weights to each layer given the previous layer's activations (and inter-layer pairwise correlations) by minimizing the slack function, with the backward pass learning the activations for each layer given the weights to / activations of the next layer by convex optimization (as we learn $E$). Both passes would consist of successive convex optimizations dictated by our approach, quite distinct from backpropagation, though loosely resembling the wake-sleep algorithm (Hinton et al. (1995)).

4.2 GENERATIVE APPLICATIONS

Particularly recently, autoencoders have been of interest largely for their many applications beyond compression, especially for their generative uses. The most directly relevant to us involve repurposing denoising autoencoders (Bengio et al. (2013b); see Sec. 5.2); moment matching among hidden and visible units (Li et al. (2015)); and generative adversarial network ideas (Goodfellow et al. (2014); Makhzani et al. (2015)), the latter particularly since the techniques of this paper have been applied to binary classification (Balsubramani & Freund (2015a;b)). These are outside this paper's scope, but suggest themselves as future extensions of our approach.

5 EXTENSIONS

5.1 OTHER RECONSTRUCTION LOSSES

It may make sense to use a reconstruction loss other than cross-entropy, for instance the expected Hamming distance between $x^{(i)}$ and $\tilde{x}^{(i)}$. It turns out that the minimax manipulations we use work under very broad conditions, for nearly any loss that additively decomposes over the $V$ bits as cross-entropy does. In such cases, all that is required is that the partial losses $\ell_+(\tilde{x}^{(i)}_v), \ell_-(\tilde{x}^{(i)}_v)$ are monotonically decreasing and increasing respectively (recall that for cross-entropy loss, this is true as $\ell_\pm(\tilde{x}^{(i)}_v) = \ln \frac{2}{1 \pm \tilde{x}^{(i)}_v}$); they need not even be convex. This monotonicity is a natural condition, because the loss measures the discrepancy to the true label, and holds for all losses in common use. Changing the partial losses only changes the structure of the minimax solution in two respects: by altering the form of the transfer function on the decoding neurons, and the univariate potential well $\Psi$ optimized to learn the decoding weights. Otherwise, the problem remains convex and the algorithm is identical. Formal statements of these general results are in Appendix E.

5.2 DENOISING AUTOENCODING

Our framework can be easily applied to learn a denoising autoencoder (DAE; Vincent et al. (2008; 2010)), which uses noise-corrupted data (call it $\dot{X}$) for training, and uncorrupted data for evaluation. From our perspective, this corresponds to leaving the learning of $W$ unchanged, but using corrupted data when learning $E$. Consequently, the minimization problem over encodings must be changed to account for the bias on $B$ introduced by the noise; so the algorithm plays given the noisy data, but to minimize loss against $X$.
This is easiest to see for zero-mean noise, for which our algorithms are completely unchanged because $B$ does not change (in expectation) after the noise is added.

Another common scenario illustrating this technique is to mask a fraction $q$ of the input bits uniformly at random (in our notation, changing $+1$s to $-1$s). This masking noise changes each pairwise correlation $b_{v,h}$ by an amount $\rho_{v,h} := \frac{1}{n}\sum_{i=1}^n (\dot{x}^{(i)}_v - x^{(i)}_v)\, e^{(i)}_h$. Therefore, the optimand Eq. (4) must be modified by subtracting this factor $\rho_{v,h}$. This $\rho_{v,h}$ can be estimated (w.h.p.) given $\dot{x}_v, e_h, q, x_v$. But even with just the noisy data and not $x_v$, we can estimate $\rho_{v,h}$ w.h.p. by extrapolating the correlation of the bits of $\dot{x}_v$ that are left as $+1$ (a $1-q$ fraction) with the corresponding values in $e_h$ (see Appendix C).
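The extrapolation idea can be sketched in a few lines of NumPy. This is an illustrative estimator of ours, under the simplifying assumption that the mask is independent of the encodings (which holds only approximately when encodings are computed from the noisy data); variable names are hypothetical.

import numpy as np

def estimate_clean_B(X_noisy, E, q):
    """Extrapolate b_{v,h} = (1/n) sum_i x_v^(i) e_h^(i) from masked data, where
    masking turned each +1 into -1 independently with probability q. Surviving
    +1s are a (1-q) fraction of the original +1s, so
        b_clean ~= (2/(1-q)) * mean_i(1[x_noisy=+1] * e_h) - mean_i(e_h)."""
    n = X_noisy.shape[1]
    plus = (X_noisy > 0).astype(float)                       # (V, n) surviving +1 bits
    return (2.0 / (1.0 - q)) * (plus @ E.T) / n - E.mean(axis=1)[None, :]

# The additive bias to subtract from Eq. (4)'s correlations is then
# B_noisy - estimate_clean_B(X_noisy, E, q), with B_noisy = X_noisy @ E.T / n.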
Table 1: Cross-entropy reconstruction losses for PC-AE and a vanilla single-layer autoencoder, with binary and unconstrained real-valued encodings, and significant results in bold. The PC-AE results are significantly better (see Appendix A) than the AE results.

                       PC-AE (bin.)  PC-AE (real)  AE (bin.)  AE (real)   PCA
MNIST, H=32                51.9          53.8         65.2       64.3     86.6
MNIST, H=100                9.2           9.9         26.8       25.0     52.7
Omniglot, H=32             76.1          77.2         93.1       90.6    102.8
Omniglot, H=100            12.1          13.2         46.6       45.4     63.6
Caltech-101, H=32          54.5          54.9         97.5       87.6    118.7
Caltech-101, H=100          7.1           7.1         64.3       45.4     75.2
notMNIST, H=32            121.9         122.4        149.6      141.8    174.0
notMNIST, H=100            62.2          63.0         99.6       92.1    115.5
Adult, H=10                 7.7           7.8          9.3        8.1     13.5
Adult, H=20                 0.65          0.64         2.5        1.5      7.9

6 EXPERIMENTS

In this section we compare our approach (see footnote 2) empirically to a standard autoencoder with one hidden layer (termed AE here) trained with backpropagation, and a thresholded PCA baseline. Our goal is simply to verify that our approach, though very different, is competitive in reconstruction performance.

The datasets we use are first normalized to $[0,1]$, and then binarized by sampling each pixel stochastically in proportion to its intensity, following prior work (Salakhutdinov & Murray (2008)). Changing between binary and real-valued encodings in PC-AE requires just a line of code, to project the encodings into $[-1,1]^H$ after the convex optimization updates that compute ENC. We use Adagrad (Duchi et al. (2011)) for the convex minimizations of our algorithms; we observed that their performance is not very sensitive to the choice of optimization method, explained by our approach's convexity.

We compare to a basic AE with a single hidden layer, trained using the Adam method with default parameters (Kingma & Ba (2014)). Other models like variational autoencoders (Kingma & Welling (2013)) are not shown here because they do not aim to optimize reconstruction loss or are not comparably general autoencoding architectures. We also use a sign-thresholded PCA baseline (essentially a completely linear autoencoder, but with the output layer thresholded to be in $[-1,1]$); see Appendix A for more details. We vary the number of hidden units $H$ for all algorithms, and try both binary and unconstrained real-valued encodings where appropriate; the respective AEs use logistic sigmoid and ReLU transfer functions for the encoding neurons. The results are in Table 1.

The reconstruction performance of PC-AE indicates that it can encode information very well using pairwise correlations, compared to the directly learned AE and PCA approaches. Loss can become extremely low when $H$ is raised, giving $B$ the capacity to robustly encode almost all the information in the input bits $\hat{X}$. The performance is roughly equal between binary hidden units and unconstrained ones, which is expected from our derivations.

We also try learning just the decoding layer of Sec. 2, on the encoded representation of the AE. This is motivated by the fact that Sec. 2 establishes our decoding method to be worst-case optimal given any $E$ and $B$. We find the results to be significantly worse than the AE alone in all datasets used (e.g. reconstruction loss of 171/133 on MNIST, and 211/134 on Omniglot, with 32/100 hidden units respectively). This reflects the AE's training backpropagating information about the data beyond pairwise correlations, through non-convex function compositions; however, this comes at the cost of being more difficult to optimize. The representations learned by the ENC function of PC-AE are quite different and capture much more of the pairwise correlation information, which is used by the decoding layer in a worst-case optimal fashion. We attempt to visually depict the differences between the representations in Fig. 3.

As discussed in Sec. 4, we do not claim that this paper's method will always achieve the best empirical reconstruction loss, even among single-layer autoencoders. We would like to make the encoding function quicker to compute, as well. But we believe this paper's results, especially when $H$ is high, illustrate the potential of using pairwise correlations for autoencoding as in our approach, learning to encode with alternating convex minimization and extremely strong worst-case robustness guarantees.

(Footnote 2: TensorFlow code available at https://github.com/aikanor/pc-autoencoder .)

Figure 1: Top row: randomly chosen test images from Caltech-101 silhouettes. Middle and bottom rows: corresponding reconstructions of PC-AE and AE with H = 32 binary hidden units.

Figure 2: As Fig. 1, with H = 100 on Omniglot. The difference in quality is particularly noticeable in the 1st, 5th, 8th, and 11th columns.

ACKNOWLEDGMENTS

I am grateful to Jack Berkowitz, Sanjoy Dasgupta, and Yoav Freund for helpful discussions; Daniel Hsu and Akshay Krishnamurthy for instructive examples; and Gary Cottrell for an enjoyable chat. I acknowledge funding from the NIH (grant R01ES02500902).
B1uDSibVe
6: Marginally above acceptance threshold
The paper proposes to find an optimal decoder for binary data using a min-max decoder on the binary hypercube, given a linear constraint on the correlation between the encoder and the data. The paper finally shows that the optimal decoder is a logistic function of the Lagrangian W multiplying the encoding e. Given the weights of the 'min-max' decoder W, the paper finds the best encoding for the data distribution considered, by minimizing that error as a function of the encoding. The paper then alternates that optimization between the encoding and the min-max decoding, starting from random weights W. clarity: - The paper would be easier to follow if the real data (x in section 3) were differentiated from the worst-case data played by the model (x in section 2). significance: Overall I like the paper; however, I have some doubts about what the alternating optimization optimum ends up being. The paper ends up implementing a single-layer network. The correlation constraint, while convenient in the derivation, is a bit intriguing, since a linear relation between the encoding and the data seems to be a weak modeling constraint and might not be different from what PCA would implement. - What is the performance of PCA on those tasks? One could use a simple sign function to decode. This is related to one-bit compressive sensing. - What happens if you initialize W in Algorithm 1 with PCA weights? Or weighted PCA weights? - Have you tried more complex datasets such as CIFAR?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ryelgY5eg
ICLR.cc/2017/conference
2017
Optimal Binary Autoencoding with Pairwise Correlations
["Akshay Balsubramani"]
We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion.
["Theory", "Unsupervised Learning", "Games"]
ABSTRACTWe formulate learning of a binary autoencoder as a biconvex optimization problemwhich learns from the pairwise correlations between encoded and decoded bits.Among all possible algorithms that use this information, ours finds the autoencoderthat reconstructs its inputs with worst-case optimal loss. The optimal decoderis a single layer of artificial neurons, emerging entirely from the minimax lossminimization, and with weights learned by convex optimization. All this is reflectedin competitive experimental results, demonstrating that binary autoencoding canbe done efficiently by conveying information in pairwise correlations in an optimalfashion.1 I NTRODUCTIONConsider a general autoencoding scenario, in which an algorithm learns a compression scheme forindependently, identically distributed (i.i.d.) V-dimensional bit vector data^x(1);:::; ^x(n). Forsome encoding dimension H, the algorithm encodes each data example ^x(i)= (^x(i)1;:::; ^x(i)V)>into anH-dimensional representation e(i), withH < V . It then decodes each e(i)back into areconstructed example ~x(i)using some small amount of additional memory, and is evaluated on thequality of the reconstruction by the cross-entropy loss commonly used to compare bit vectors. Agood autoencoder learns to compress the data into Hbits so as to reconstruct it with low loss.When the loss is squared reconstruction error and the goal is to compress data in RVtoRH, this isoften accomplished with principal component analysis (PCA), which projects the input data on thetopHeigenvectors of their covariance matrix (Bourlard & Kamp (1988); Baldi & Hornik (1989)).These eigenvectors in RVconstituteVH real values of additional memory needed to decode thecompressed data in RHback to the reconstructions in RV, which are linear combinations of theeigenvectors. Crucially, this total additional memory does not depend on the amount of data n,making it applicable when data are abundant.This paper considers a similar problem, except using bit-vector data and the cross-entropy recon-struction loss. Since we are compressing samples of i.i.d. V-bit data into H-bit encodings, a naturalapproach is to remember the pairwise statistics: the VH average correlations between pairs of bits inthe encoding and decoding, constituting as much additional memory as the eigenvectors used in PCA.The decoder uses these along with the H-bit encoded data, to produce V-bit reconstructions.We show how to efficiently learn the autoencoder with the worst-case optimal loss in this scenario,without any further assumptions, parametric or otherwise. It has some striking properties.The decoding function is identical in form to the one used in a standard binary autoencoder with onehidden layer (Bengio et al. (2013a)) and cross-entropy reconstruction loss. Specifically, each bit vof the decoding is the output of a logistic sigmoid artificial neuron of the encoded bits, with somelearned weights wv2RH. This form emerges as the uniquely optimal decoding function, and is notassumed as part of any explicit model.We show that the worst-case optimal reconstruction loss suffered by the autoencoder is convex inthese decoding weights W=fwvgv2[V], and in the encoded representations E. Though it is notMost of the work was done as a PhD student at UC San Diego.1Published as a conference paper at ICLR 2017jointly convex in both, the situation still admits a natural and efficient optimization algorithm inwhich the loss is alternately minimized in EandWwhile the other is held fixed. 
The algorithmis practical and performs well empirically, learning incrementally from minibatches of data in astochastic optimization setting.1.1 N OTATIONThe observed data and encodings can be written in matrix form, representing bits as 1:^X=0B@^x(1)1 ^x(n)1.........^x(1)V ^x(n)V1CA2[1;1]Vn;E=0B@e(1)1e(n)1.........e(1)He(n)H1CA2[1;1]Hn(1)Here the encodings are allowed to be randomized, represented by values in [1;1]instead of just thetwo valuesf1;1g; e.g.e(1)i=12is+1w.p.34and1w.p.14. The data in Xare also allowed to berandomized, which we will see essentially loses no generality (Appendix B). We write the columns of^X;Eas^x(i);e(i)fori2[n](where [s] :=f1;:::;sg), representing the data. The rows are writtenas^xv= (x(1)v;:::;x(n)v)>forv2[V]andeh= (e(1)h;:::;e(n)h)>forh2[H].We also consider the correlation of each bit hof the encoding with each decoded bit vover the data,i.e.bv;h:=1nPni=1x(i)ve(i)h. This too can be written in matrix form as B:=1n^XE>2RVH,whose rows and columns we respectively write as bv= (bv;1;:::;bv;H)>overv2[V]andbh= (b1;h;:::;bV;h)>overh2[H]; the indexing will be clear from context.As alluded to earlier, the loss incurred on any example x(i)is the cross-entropy between the exam-ple and its reconstruction ~x(i), in expectation over the randomness in x(i). Defining`(~x(i)v) =ln21~x(i)v(thepartial losses to true labels1), the loss is written as:`(x(i);~x(i)) :=VXv=1" 1 +x(i)v2!`+(~x(i)v) + 1x(i)v2!`(~x(i)v)#(2)In addition, define a potential well (m) := ln (1 + em) + ln (1 +em)with derivative 0(m) :=1em1+em. Univariate functions like this are applied componentwise to matrices in this paper.1.2 P ROBLEM SETUPWith these definitions, the autoencoding problem we address can be precisely stated as two tasks,encoding and decoding. These share only the side information B. Our goal is to perform these stepsso as to achieve the best possible guarantee on reconstruction loss, with no further assumptions. Thiscan be written as a zero-sum game of an autoencoding algorithm seeking to minimize loss against anadversary, by playing encodings and reconstructions:Using ^X, algorithm plays (randomized) encodings E, resulting in pairwise correlations B.Using EandB, algorithm plays reconstructions ~X=~x(1);:::;~x(n)2[1;1]Vn.Given ~X;E;B, adversary plays X2[1;1]Vnto maximize reconstruction loss1nPni=1`(x(i);~x(i)).To incur low loss, the algorithm must use an EandBsuch that no adversary playing Xcan inflicthigher loss. The algorithm never sees X, which represents the worst the data could be given thealgorithm’s incomplete memory of it ( E;B) and reconstructions ( ~X).We find the autoencoding algorithm’s best strategy in two parts. First, we find the optimal decodingfunction of any encodings EgivenB, in Section 2. Then, we use the resulting optimal reconstructionfunction to outline the best encoding procedure, i.e. one that finds the E;Bthat lead to the bestreconstruction, in Section 3.1. Combining these ideas yields an autoencoding algorithm in Section2Published as a conference paper at ICLR 20173.2 (Algorithm 1), where its implementation and interpretation are specified. Further discussion andrelated work in Section 4 are followed by more extensions of the framework in Section 5. Experimentsin Section 6 show extremely competitive results with equivalent fully-connected autoencoders trainedwith backpropagation.2 O PTIMALLY DECODING AN ENCODED REPRESENTATIONTo address the game of Section 1.2, we first assume EandBare fixed, and derive the optimaldecoding rule given this information. 
We show in this section that the form of this optimal decoder isprecisely the same as in a classical autoencoder: having learned a weight vector wv2RHfor eachv2[V], thevthbit of each reconstruction ~xiis expressed as a logistic function of a wv-weightedcombination of the Hencoded bits ei– a logistic artificial neuron with weights wv. The weightvectors are learned by convex optimization, despite the nonconvexity of the transfer functions.To develop this, we minimize the worst-case reconstruction error, where Xis constrained by our priorknowledge that B=1nXE>, i.e.1nExv=bv8v2[V]. This can be written as a function of E:LB(E) := min~x(1);:::;~x(n)2[1;1]Vmaxx(1);:::;x(n)2[1;1]V;8v2[V]:1nExv=bv1nnXi=1`(x(i);~x(i)) (3)We solve this minimax problem for the optimal reconstructions played by the minimizing player in(3), written as ~x(1);:::; ~x(n).Theorem 1. Define the bitwise slack function E(w;b) :=b>w+1nPni=1(w>e(i)), which isconvex in w. W.r.t. any bv, this has minimizing weights wv:=wv(E;B) := arg minw2RHE(w;bv).Then the minimax value of the game (3)isLB(E) =12VXv=1E(wv;bv). For any example i2[n],the minimax optimal reconstruction can be written for any bit vas~x(i)v:=1ew>ve(i)1+ew>ve(i).This tells us that the optimization problem of finding the minimax optimal reconstructions ~x(i)isextremely convenient in several respects. The learning problem decomposes over the Vbits in thedecoding, reducing to solving for a weight vector wv2RHfor each bitv, by optimizing each bitwiseslack function. Given the weights, the optimal reconstruction of any example ican be specified by alayer of logistic sigmoid artificial neurons of its encoded bits, with w>ve(i)as the bitwise logits.Hereafter, we write W2RVHas the matrix of decoding weights, with rows fwvgVv=1. In particular,the optimal decoding weights W(E;B)are the matrix with rows fwv(E;B)gVv=1.3 L EARNING AN AUTOENCODER3.1 F INDING AN ENCODED REPRESENTATIONHaving computed the optimal decoding function in the previous section given any EandB, wenow switch perspectives to the encoder, which seeks to compress the input data ^Xinto encodedrepresentations E(from which Bis easily calculated to pass to the decoder). We seek to find (E;B)to ensure the lowest worst-case reconstruction loss after decoding; recall that this is LB(E)from (3).Observe that1n^XE>=Bby definition, and that the encoder is given ^X. Therefore, by using Thm. 1and substituting bv=1nE^xv8v2[V],LB(E) =12nnXi=1VXv=1h^x(i)v(w>ve(i)) + ( w>ve(i))i:=L(W;E) (4)3Published as a conference paper at ICLR 2017So it is convenient to define the feature distortion1for anyv2[V]with respect to W, between anyexample xand its encoding e:Wv(e;x) :=xvw>ve+ (w>ve) (5)From the above discussion, the best Egiven any decoding W, written as E(W), solves theminimizationminE2[1;1]HnL(W;E) =12nnXi=1mine(i)2[1;1]HVXv=1Wv(e(i);^x(i))which immediately yields the following result.Proposition 2. Define the optimal encodings for decoding weights WasE(W) :=arg minE2[1;1]HnL(W;E). Then e(i)(W)can be computed separately for each example ^x(i)2[1;1]V, minimizing its total feature distortion over the decoded bits w.r.t. W:ENC(^x(i);W) :=e(i)(W) := arg mine2[1;1]HVXv=1Wv(e;^x(i)) (6)Observe that the encoding function ENC(^x(i);W)can be efficiently computed to any desired pre-cision since the feature distortion Wv(e;^x(i))of each bitvis convex and Lipschitz in e; anL1error ofcan be reached in O(2)linear-time first-order optimization iterations. Note that theencodings need not be bits, and can be e.g. 
unconstrained 2RHinstead; the proof of Thm. 1 assumesno structure on them, and the optimization will proceed as above but without projecting into thehypercube.3.2 A NAUTOENCODER LEARNING ALGORITHMOur ultimate goal is to minimize the worst-case reconstruction loss. As we have seen in (3)and(6),it is convex in the encoding Eand in the decoding parameters W, each of which can be fixed whileminimizing with respect to the other. This suggests a learning algorithm that alternately performs twosteps: finding encodings Ethat minimizeL(W;E)as in (6)with a fixed W, and finding decodingparameters W(E;B), as given in Algorithm 1.Algorithm 1 Pairwise Correlation Autoencoder (PC-AE)Input: Size-ndataset ^X, number of epochs TInitialize W0(e.g. with each element being i.i.d. N(0;1))fort= 1toTdoEncode each example to ensure accurate reconstruction using weights Wt1, and compute theassociated pairwise bit correlations Bt:8i2[n] : [e(i)]t=ENC(^x(i);Wt1); Bt=1n^XE>tUpdate weight vectors [wv]tfor eachv2[V]to minimize slack function, using encodings Et:8v2[V] : [wv]t= arg minw2RH"[bv]>tw+1nnXi=1(w>e(i)t)#end forOutput: Weights WT1Noting that (w>ve)w>ve, we see that Wv(e;^x)w>vesgn(w>ve)^xv. So the optimizer tendsto change eso that w>vematches signs with ^xv, motivating the name.4Published as a conference paper at ICLR 20173.3 E FFICIENT IMPLEMENTATIONOur derivation of the encoding and decoding functions involves no model assumptions at all, onlyusing the minimax structure and pairwise statistics that the algorithm is allowed to remember.Nevertheless, the (en/de)coders can be learned and implemented efficiently.Decoding is a convex optimization in Hdimensions, which can be done in parallel for each bitv2[V]. This is relatively easy to solve in the parameter regime of primary interest when data areabundant, in which H < Vn. Similarly, encoding is also a convex optimization problem inonlyHdimensions. If the data examples are instead sampled in minibatches of size n, they canbe encoded in parallel, with a new minibatch being sampled to start each epoch t. The number ofexamplesn(per batch) is essentially only limited by nH, the number of compressed representationsthat fit in memory.So far in this paper, we have stated our results in the transductive setting, in which all data are giventogether a priori, with no assumptions whatsoever made about the interdependences between theVfeatures. However, PC-AE operates much more efficiently than this might suggest. Crucially,the encoding and decoding tasks both depend on nonly to average a function of x(i)ore(i)overi2[n], so they can both be solved by stochastic optimization methods that use first-order gradientinformation, like variants of stochastic gradient descent (SGD). We find it remarkable that theminimax optimal encoding and decoding can be efficiently learned by such methods, which do notscale computationally in n. Note that the result of each of these steps involves (n)outputs ( Eand~X), which are all coupled together in complex ways.Furthermore, efficient first-order convex optimization methods for both encoding and decoding stepsmanipulate more intermediate gradient-related quantities, with facile interpretations. For details, seeAppendix A.2.3.4 C ONVERGENCE AND WEIGHT REGULARIZATIONAs we noted previously, the objective function of the optimization is biconvex. This means that thealternating minimization algorithm we specify is an instance of alternating convex search , shownin that literature to converge under broad conditions (Gorski et al. (2007)). 
It is not guaranteedto converge to the global optimum, but each iteration will monotonically decrease the objectivefunction. In light of our introductory discussion, the properties and rate of such convergence wouldbe interesting to compare to stochastic optimization algorithms for PCA, which converge efficientlyunder broad conditions (Balsubramani et al. (2013); Shamir (2016)).The basic game used so far has assumed perfect knowledge of the pairwise correlations, leading toequality constraints 8v2[V] :1nExv=bv. This makes sense in PC-AE , where the encodingphase of each epoch gives the exact Btfor the decoding phase. However, in other stochastic settingsas for denoising autoencoders (see Sec. 5.2), it may be necessary to relax this constraint. A relaxedconstraint of1nExvbv1exactly corresponds to an extra additive regularization term ofkwvk1on the corresponding weights in the convex optimization used to find W(Appendix D.1).Such regularization leads to provably better generalization (Bartlett (1998)) and is often practical touse, e.g. to encourage sparsity. But we do not use it for our PC-AE experiments in this paper.4 D ISCUSSION AND RELATED WORKOur approach PC-AE is quite different from existing autoencoding work in several ways.First and foremost, we posit no explicit decision rule, and avoid optimizing the highly non-convexdecision surface traversed by traditional autoencoding algorithms that learn with backpropagation(Rumelhart et al. (1986)). The decoding function, given the encodings, is a single layer of artificialneurons only because of the minimax structure of the problem when minimizing worst-case loss. Thisdiffers from reasoning typically used in neural net work (see Jordan (1995)), in which the loss is thenegative log-likelihood (NLL) of the joint probability, which is assumed to follow a form specifiedby logistic artificial neurons and their weights. We instead interpret the loss in the usual direct way asthe NLL of the predicted probability of the data given the visible bits, and avoid any assumptions onthe decision rule (e.g. not monotonicity in the score w>ve(i), or even dependence on such a score).5Published as a conference paper at ICLR 2017This justification of artificial neurons – as the minimax optimal decision rules given information onpairwise correlations – is one of our more distinctive contributions (see Sec. 5.1).Crucially, we make no assumptions whatsoever on the form of the encoding or decoding, excepton the memory used by the decoding. Some such “regularizing" restriction is necessary to rule outthe autoencoder just memorizing the data, and is typically expressed by assuming a model class ofcompositions of artificial neuron layers. We instead impose it axiomiatically by limiting the amountof information transmitted through B, which does not scale in n; but we do not restrict how thisinformation is used. This confers a clear theoretical advantage, allowing us to attain the strongestrobust loss guarantee among all possible autoencoders that use the correlations B.More importantly in practice, avoiding an explicit model class means that we do not have to optimizethe typically non-convex model, which has long been a central issue for backpropagation-basedlearning methods (e.g. Dauphin et al. (2014)). Prior work related in spirit has attempted to avoidthis through convex relaxations, including for multi-layer optimization under various structuralassumptions (Aslan et al. (2014); Zhang et al. 
(2016)), and when the number of hidden units is variedby the algorithm (Bengio et al. (2005); Bach (2014)).Our approach also isolates the benefit of higher nin dealing with overfitting, as the pairwisecorrelations Bcan be measured progressively more accurately as nincreases. In this respect, wefollow a line of research using such pairwise correlations to model arbitary higher-order structureamong visible units, rooted in early work on (restricted) Boltzmann Machines (Ackley et al. (1985);Smolensky (1986); Rumelhart & McClelland (1987); Freund & Haussler (1992)). More recently,theoretical algorithms have been developed with the perspective of learning from the correlationsbetween units in a network, under various assumptions on the activation function, architecture, andweights, for both deep (Arora et al. (2014)) and shallow networks (using tensor decompositions,e.g. Livni et al. (2014); Janzamin et al. (2015)). Our use of ensemble aggregation techniques (fromBalsubramani & Freund (2015a; 2016)) to study these problems is anticipated in spirit by prior workas well, as discussed at length by Bengio (2009) in the context of distributed representations.4.1 O PTIMALITY , OTHER ARCHITECTURES ,AND DEPTHWe have established that a single layer of logistic artificial neurons is an optimal decoder, givenonly indirect information about the data through pairwise correlations. This is not a claim thatautoencoders need only a single-layer architecture in the worst case. Sec. 3.1 establishes that the bestrepresentations Eare the solution to a convex optimization, with no artificial neurons involved incomputing them from the data. Unlike the decoding function, the optimal encoding function ENCcannot be written explicitly in terms of artificial neurons, and is incomparable to existing architectures(though it is analogous to PCA in prescribing an efficient operation that yields the encodings fromunlabeled data). Also, the encodings are only optimal given the pairwise correlations; trainingalgorithms like backpropagation, which communicate other knowledge of the data through derivativecomposition, can learn final decoding layers that outperform ours, as we see in experiments.In our framework so far, we explore using all the pairwise correlations between hidden and visiblebits to inform learning by constraining the adversary, resulting in a Lagrange parameter – a weight –for each constraint. These VH weights Wconstitute the parameters of the optimal decoding layer,describing a fully connected architecture. If just a select few of these correlations were used, onlythey would constrain the adversary in the minimax problem of Sec. 2, so weights would only beintroduced for them, giving rise to sparser architectures.Our central choices – to store only pairwise correlations and minimize worst-case reconstructionloss – play a similar regularizing role to explicit model assumptions, and other autoencoding methodsmay achieve better performance on data for which these choices are too conservative, by e.g. makingdistributional assumptions on the data. From our perspective, other architectures with more layers– particularly highly successful ones like convolutional, recurrent, residual, and ladder networks(LeCun et al. (2015); He et al. (2015); Rasmus et al. (2015)) – lend the autoencoding algorithm morepower by allowing it to measure more nuanced correlations using more parameters, which decreasesthe worst-case loss. 
Applying our approach with these would be interesting future work.Extending this paper’s convenient minimax characterization to deep representations with empiricalsuccess is a very interesting open problem. Prior work on stacking autoencoders/RBMs (Vincent et al.6Published as a conference paper at ICLR 2017(2010)) and our learning algorithm PC-AE suggest that we could train a deep network in alternatingforward and backward passes. Using this paper’s ideas, the forward pass would learn the weights toeach layer given the previous layer’s activations (and inter-layer pairwise correlations) by minimizingthe slack function, with the backward pass learning the activations for each layer given the weights to/ activations of the next layer by convex optimization (as we learn E). Both passes would consistof successive convex optimizations dictated by our approach, quite distinct from backpropagation,though loosely resembling the wake-sleep algorithm (Hinton et al. (1995)).4.2 G ENERATIVE APPLICATIONSParticularly recently, autoencoders have been of interest largely for their many applications beyondcompression, especially for their generative uses. The most directly relevant to us involve repurposingdenoising autoencoders (Bengio et al. (2013b); see Sec. 5.2); moment matching among hidden andvisible units (Li et al. (2015)); and generative adversarial network ideas (Goodfellow et al. (2014);Makhzani et al. (2015)), the latter particularly since the techniques of this paper have been applied tobinary classification (Balsubramani & Freund (2015a;b)). These are outside this paper’s scope, butsuggest themselves as future extensions of our approach.5 E XTENSIONS5.1 O THER RECONSTRUCTION LOSSESIt may make sense to use another reconstruction loss other than cross-entropy, for instance theexpected Hamming distance between x(i)and~x(i). It turns out that the minimax manipulations weuse work under very broad conditions, for nearly any loss that additively decomposes over the Vbitsas cross-entropy does. In such cases, all that is required is that the partial losses `+(~x(i)v);`(~x(i)v)aremonotonically decreasing and increasing respectively (recall that for cross-entropy loss, this is true as`(~x(i)v) = ln21~x(i)v); they need not even be convex. This monotonicity is a natural condition,because the loss measures the discrepancy to the true label, and holds for all losses in common use.Changing the partial losses only changes the structure of the minimax solution in two respects: byaltering the form of the transfer function on the decoding neurons, and the univariate potential well optimized to learn the decoding weights. Otherwise, the problem remains convex and the algorithmis identical. Formal statements of these general results are in Appendix E.5.2 D ENOISING AUTOENCODINGOur framework can be easily applied to learn a denoising autoencoder (DAE; Vincent et al. (2008;2010)), which uses noise-corrupted data (call it _X) for training, and uncorrupted data for evaluation.From our perspective, this corresponds to leaving the learning of Wunchanged, but using corrupteddata when learning E. Consequently, the minimization problem over encodings must be changed toaccount for the bias on Bintroduced by the noise; so the algorithm plays given the noisy data, but tominimize loss against X. 
This is easiest to see for zero-mean noise, for which our algorithms are completely unchanged because B does not change (in expectation) after the noise is added.

Another common scenario illustrating this technique is to mask a fraction β of the input bits uniformly at random (in our notation, changing +1s to −1s). This masking noise changes each pairwise correlation b_{v,h} by an amount Δ_{v,h} := \frac{1}{n} \sum_{i=1}^n (\dot{x}_v^{(i)} - x_v^{(i)}) e_h^{(i)}. Therefore, the optimand Eq. (4) must be modified by subtracting this factor Δ_{v,h}. This Δ_{v,h} can be estimated (w.h.p.) given \dot{x}_v, e_h, β, x_v. But even with just the noisy data and not x_v, we can estimate Δ_{v,h} w.h.p. by extrapolating the correlation of the bits of \dot{x}_v that are left as +1 (a 1 − β fraction) with the corresponding values in e_h (see Appendix C).

Table 1: Cross-entropy reconstruction losses for PC-AE and a vanilla single-layer autoencoder, with binary and unconstrained real-valued encodings, and significant results in bold. The PC-AE results are significantly better (see Appendix A) than the AE results.

Dataset, H            | PC-AE (bin.) | PC-AE (real) | AE (bin.) | AE (real) | PCA
MNIST, H = 32         |     51.9     |     53.8     |   65.2    |   64.3    |  86.6
MNIST, H = 100        |      9.2     |      9.9     |   26.8    |   25.0    |  52.7
Omniglot, H = 32      |     76.1     |     77.2     |   93.1    |   90.6    | 102.8
Omniglot, H = 100     |     12.1     |     13.2     |   46.6    |   45.4    |  63.6
Caltech-101, H = 32   |     54.5     |     54.9     |   97.5    |   87.6    | 118.7
Caltech-101, H = 100  |      7.1     |      7.1     |   64.3    |   45.4    |  75.2
notMNIST, H = 32      |    121.9     |    122.4     |  149.6    |  141.8    | 174.0
notMNIST, H = 100     |     62.2     |     63.0     |   99.6    |   92.1    | 115.5
Adult, H = 10         |      7.7     |      7.8     |    9.3    |    8.1    |  13.5
Adult, H = 20         |      0.65    |      0.64    |    2.5    |    1.5    |   7.9

6 EXPERIMENTS

In this section we compare our approach² empirically to a standard autoencoder with one hidden layer (termed AE here) trained with backpropagation, and a thresholded PCA baseline. Our goal is simply to verify that our approach, though very different, is competitive in reconstruction performance.

The datasets we use are first normalized to [0,1], and then binarized by sampling each pixel stochastically in proportion to its intensity, following prior work (Salakhutdinov & Murray (2008)). Changing between binary and real-valued encodings in PC-AE requires just a line of code, to project the encodings into [−1,1]^H after convex optimization updates to compute ENC(·). We use Adagrad (Duchi et al. (2011)) for the convex minimizations of our algorithms; we observed that their performance is not very sensitive to the choice of optimization method, explained by our approach's convexity.

We compare to a basic AE with a single hidden layer, trained using the Adam method with default parameters (Kingma & Ba (2014)). Other models like variational autoencoders (Kingma & Welling (2013)) are not shown here because they do not aim to optimize reconstruction loss or are not comparably general autoencoding architectures. We also use a sign-thresholded PCA baseline (essentially a completely linear autoencoder, but with the output layer thresholded to be in [−1,1]); see Appendix A for more details. We vary the number of hidden units H for all algorithms, and try both binary and unconstrained real-valued encodings where appropriate; the respective AE uses logistic sigmoid and ReLU transfer functions for the encoding neurons. The results are in Table 1.

The reconstruction performance of PC-AE indicates that it can encode information very well using pairwise correlations, compared to the directly learned AE and PCA approaches. Loss can become extremely low when H is raised, giving B the capacity to robustly encode almost all the information in the input bits \hat{X}.
The performance is roughly equal between binary hidden units and unconstrained ones, which is expected by our derivations.

We also try learning just the decoding layer of Sec. 2, on the encoded representation of the AE. This is motivated by the fact that Sec. 2 establishes our decoding method to be worst-case optimal given any E and B. We find the results to be significantly worse than the AE alone in all datasets used (e.g. reconstruction loss of 171 / 133 on MNIST, and 211 / 134 on Omniglot, with 32 / 100 hidden units respectively). This reflects the AE's training backpropagating information about the data beyond pairwise correlations, through non-convex function compositions – however, this comes at the cost of being more difficult to optimize. The representations learned by the ENC function of PC-AE are quite different and capture much more of the pairwise correlation information, which is used by the decoding layer in a worst-case optimal fashion. We attempt to visually depict the differences between the representations in Fig. 3.

As discussed in Sec. 4, we do not claim that this paper's method will always achieve the best empirical reconstruction loss, even among single-layer autoencoders. We would like to make the encoding function quicker to compute, as well. But we believe this paper's results, especially when H is high, illustrate the potential of using pairwise correlations for autoencoding as in our approach, learning to encode with alternating convex minimization and extremely strong worst-case robustness guarantees.

² TensorFlow code available at https://github.com/aikanor/pc-autoencoder .

Figure 1: Top row: randomly chosen test images from Caltech-101 silhouettes. Middle and bottom rows: corresponding reconstructions of PC-AE and AE with H = 32 binary hidden units.

Figure 2: As Fig. 1, with H = 100 on Omniglot. Difference in quality is particularly noticeable in the 1st, 5th, 8th, and 11th columns.

ACKNOWLEDGMENTS

I am grateful to Jack Berkowitz, Sanjoy Dasgupta, and Yoav Freund for helpful discussions; Daniel Hsu and Akshay Krishnamurthy for instructive examples; and Gary Cottrell for an enjoyable chat. I acknowledge funding from the NIH (grant R01ES02500902).
HkMl_AzNx
Review
7: Good paper, accept
The author attacks the problem of shallow binary autoencoders using a minimax game approach. The algorithm, though simple, appears to be very effective. The paper is well written and has sound analyses. Although the work does not extend to deep networks immediately, its connections with other popular minimax approaches (e.g. GANs) could be fruitful in the future.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
ryelgY5eg
ICLR.cc/2017/conference
2017
Optimal Binary Autoencoding with Pairwise Correlations
["Akshay Balsubramani"]
We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion.
["Theory", "Unsupervised Learning", "Games"]
ABSTRACTWe formulate learning of a binary autoencoder as a biconvex optimization problemwhich learns from the pairwise correlations between encoded and decoded bits.Among all possible algorithms that use this information, ours finds the autoencoderthat reconstructs its inputs with worst-case optimal loss. The optimal decoderis a single layer of artificial neurons, emerging entirely from the minimax lossminimization, and with weights learned by convex optimization. All this is reflectedin competitive experimental results, demonstrating that binary autoencoding canbe done efficiently by conveying information in pairwise correlations in an optimalfashion.1 I NTRODUCTIONConsider a general autoencoding scenario, in which an algorithm learns a compression scheme forindependently, identically distributed (i.i.d.) V-dimensional bit vector data^x(1);:::; ^x(n). Forsome encoding dimension H, the algorithm encodes each data example ^x(i)= (^x(i)1;:::; ^x(i)V)>into anH-dimensional representation e(i), withH < V . It then decodes each e(i)back into areconstructed example ~x(i)using some small amount of additional memory, and is evaluated on thequality of the reconstruction by the cross-entropy loss commonly used to compare bit vectors. Agood autoencoder learns to compress the data into Hbits so as to reconstruct it with low loss.When the loss is squared reconstruction error and the goal is to compress data in RVtoRH, this isoften accomplished with principal component analysis (PCA), which projects the input data on thetopHeigenvectors of their covariance matrix (Bourlard & Kamp (1988); Baldi & Hornik (1989)).These eigenvectors in RVconstituteVH real values of additional memory needed to decode thecompressed data in RHback to the reconstructions in RV, which are linear combinations of theeigenvectors. Crucially, this total additional memory does not depend on the amount of data n,making it applicable when data are abundant.This paper considers a similar problem, except using bit-vector data and the cross-entropy recon-struction loss. Since we are compressing samples of i.i.d. V-bit data into H-bit encodings, a naturalapproach is to remember the pairwise statistics: the VH average correlations between pairs of bits inthe encoding and decoding, constituting as much additional memory as the eigenvectors used in PCA.The decoder uses these along with the H-bit encoded data, to produce V-bit reconstructions.We show how to efficiently learn the autoencoder with the worst-case optimal loss in this scenario,without any further assumptions, parametric or otherwise. It has some striking properties.The decoding function is identical in form to the one used in a standard binary autoencoder with onehidden layer (Bengio et al. (2013a)) and cross-entropy reconstruction loss. Specifically, each bit vof the decoding is the output of a logistic sigmoid artificial neuron of the encoded bits, with somelearned weights wv2RH. This form emerges as the uniquely optimal decoding function, and is notassumed as part of any explicit model.We show that the worst-case optimal reconstruction loss suffered by the autoencoder is convex inthese decoding weights W=fwvgv2[V], and in the encoded representations E. Though it is notMost of the work was done as a PhD student at UC San Diego.1Published as a conference paper at ICLR 2017jointly convex in both, the situation still admits a natural and efficient optimization algorithm inwhich the loss is alternately minimized in EandWwhile the other is held fixed. 
The algorithm is practical and performs well empirically, learning incrementally from minibatches of data in a stochastic optimization setting.

1.1 NOTATION

The observed data and encodings can be written in matrix form, representing bits as ±1:

\hat{X} = \begin{pmatrix} \hat{x}_1^{(1)} & \cdots & \hat{x}_1^{(n)} \\ \vdots & \ddots & \vdots \\ \hat{x}_V^{(1)} & \cdots & \hat{x}_V^{(n)} \end{pmatrix} \in [-1,1]^{V \times n}, \qquad E = \begin{pmatrix} e_1^{(1)} & \cdots & e_1^{(n)} \\ \vdots & \ddots & \vdots \\ e_H^{(1)} & \cdots & e_H^{(n)} \end{pmatrix} \in [-1,1]^{H \times n} \quad (1)

Here the encodings are allowed to be randomized, represented by values in [−1,1] instead of just the two values {−1, 1}; e.g. e_i^{(1)} = 1/2 is +1 w.p. 3/4 and −1 w.p. 1/4. The data in X are also allowed to be randomized, which we will see essentially loses no generality (Appendix B). We write the columns of \hat{X}, E as \hat{x}^{(i)}, e^{(i)} for i \in [n] (where [s] := {1, ..., s}), representing the data. The rows are written as \hat{x}_v = (x_v^{(1)}, ..., x_v^{(n)})^\top for v \in [V] and e_h = (e_h^{(1)}, ..., e_h^{(n)})^\top for h \in [H].

We also consider the correlation of each bit h of the encoding with each decoded bit v over the data, i.e. b_{v,h} := \frac{1}{n} \sum_{i=1}^n x_v^{(i)} e_h^{(i)}. This too can be written in matrix form as B := \frac{1}{n} \hat{X} E^\top \in \mathbb{R}^{V \times H}, whose rows and columns we respectively write as b_v = (b_{v,1}, ..., b_{v,H})^\top over v \in [V] and b_h = (b_{1,h}, ..., b_{V,h})^\top over h \in [H]; the indexing will be clear from context.

As alluded to earlier, the loss incurred on any example x^{(i)} is the cross-entropy between the example and its reconstruction \tilde{x}^{(i)}, in expectation over the randomness in x^{(i)}. Defining \ell_\pm(\tilde{x}_v^{(i)}) = \ln \frac{2}{1 \pm \tilde{x}_v^{(i)}} (the partial losses to true labels ±1), the loss is written as:

\ell(x^{(i)}, \tilde{x}^{(i)}) := \sum_{v=1}^V \left[ \left( \frac{1 + x_v^{(i)}}{2} \right) \ell_+(\tilde{x}_v^{(i)}) + \left( \frac{1 - x_v^{(i)}}{2} \right) \ell_-(\tilde{x}_v^{(i)}) \right] \quad (2)

In addition, define a potential well \Psi(m) := \ln(1 + e^m) + \ln(1 + e^{-m}) with derivative \Psi'(m) := \frac{1 - e^{-m}}{1 + e^{-m}}. Univariate functions like this are applied componentwise to matrices in this paper.
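As a concrete illustration of this notation, the following is a minimal NumPy sketch (ours, not the paper's released code; array names are our own) of the pairwise correlations B, the cross-entropy loss (2), and the potential well Ψ:

```python
import numpy as np

def correlations(X_hat, E):
    """Pairwise correlations B = (1/n) X_hat E^T for +/-1 data.
    X_hat: (V, n) visible bits; E: (H, n) encodings."""
    n = X_hat.shape[1]
    return (X_hat @ E.T) / n                 # shape (V, H)

def cross_entropy(x, x_tilde):
    """Loss (2) for one example; x, x_tilde in [-1, 1]^V,
    with x_tilde in the open interval (-1, 1)."""
    lp = np.log(2.0 / (1.0 + x_tilde))       # partial loss to true label +1
    lm = np.log(2.0 / (1.0 - x_tilde))       # partial loss to true label -1
    return np.sum((1 + x) / 2 * lp + (1 - x) / 2 * lm)

def psi(m):
    """Potential well Psi, applied componentwise."""
    return np.log1p(np.exp(m)) + np.log1p(np.exp(-m))
```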
1.2 PROBLEM SETUP

With these definitions, the autoencoding problem we address can be precisely stated as two tasks, encoding and decoding. These share only the side information B. Our goal is to perform these steps so as to achieve the best possible guarantee on reconstruction loss, with no further assumptions. This can be written as a zero-sum game of an autoencoding algorithm seeking to minimize loss against an adversary, by playing encodings and reconstructions:

- Using \hat{X}, the algorithm plays (randomized) encodings E, resulting in pairwise correlations B.
- Using E and B, the algorithm plays reconstructions \tilde{X} = (\tilde{x}^{(1)}, ..., \tilde{x}^{(n)}) \in [-1,1]^{V \times n}.
- Given \tilde{X}, E, B, the adversary plays X \in [-1,1]^{V \times n} to maximize reconstruction loss \frac{1}{n} \sum_{i=1}^n \ell(x^{(i)}, \tilde{x}^{(i)}).

To incur low loss, the algorithm must use an E and B such that no adversary playing X can inflict higher loss. The algorithm never sees X, which represents the worst the data could be given the algorithm's incomplete memory of it (E, B) and reconstructions (\tilde{X}).

We find the autoencoding algorithm's best strategy in two parts. First, we find the optimal decoding function of any encodings E given B, in Section 2. Then, we use the resulting optimal reconstruction function to outline the best encoding procedure, i.e. one that finds the E, B that lead to the best reconstruction, in Section 3.1. Combining these ideas yields an autoencoding algorithm in Section 3.2 (Algorithm 1), where its implementation and interpretation are specified. Further discussion and related work in Section 4 are followed by more extensions of the framework in Section 5. Experiments in Section 6 show extremely competitive results with equivalent fully-connected autoencoders trained with backpropagation.

2 OPTIMALLY DECODING AN ENCODED REPRESENTATION

To address the game of Section 1.2, we first assume E and B are fixed, and derive the optimal decoding rule given this information. We show in this section that the form of this optimal decoder is precisely the same as in a classical autoencoder: having learned a weight vector w_v \in \mathbb{R}^H for each v \in [V], the v-th bit of each reconstruction \tilde{x}^{(i)} is expressed as a logistic function of a w_v-weighted combination of the H encoded bits e^{(i)} – a logistic artificial neuron with weights w_v. The weight vectors are learned by convex optimization, despite the nonconvexity of the transfer functions.

To develop this, we minimize the worst-case reconstruction error, where X is constrained by our prior knowledge that B = \frac{1}{n} X E^\top, i.e. \frac{1}{n} E x_v = b_v for all v \in [V]. This can be written as a function of E:

L_B(E) := \min_{\tilde{x}^{(1)},...,\tilde{x}^{(n)} \in [-1,1]^V} \; \max_{x^{(1)},...,x^{(n)} \in [-1,1]^V, \; \forall v \in [V]: \frac{1}{n} E x_v = b_v} \; \frac{1}{n} \sum_{i=1}^n \ell(x^{(i)}, \tilde{x}^{(i)}) \quad (3)

We solve this minimax problem for the optimal reconstructions played by the minimizing player in (3), written as \tilde{x}^{*(1)}, ..., \tilde{x}^{*(n)}.

Theorem 1. Define the bitwise slack function \gamma_E(w, b) := b^\top w + \frac{1}{n} \sum_{i=1}^n \Psi(w^\top e^{(i)}), which is convex in w. W.r.t. any b_v, this has minimizing weights w_v^* := w_v^*(E, B) := \arg\min_{w \in \mathbb{R}^H} \gamma_E(w, b_v). Then the minimax value of the game (3) is L_B(E) = \frac{1}{2} \sum_{v=1}^V \gamma_E(w_v^*, b_v). For any example i \in [n], the minimax optimal reconstruction can be written for any bit v as \tilde{x}_v^{*(i)} := \frac{1 - e^{w_v^{*\top} e^{(i)}}}{1 + e^{w_v^{*\top} e^{(i)}}}.

This tells us that the optimization problem of finding the minimax optimal reconstructions \tilde{x}^{*(i)} is extremely convenient in several respects. The learning problem decomposes over the V bits in the decoding, reducing to solving for a weight vector w_v \in \mathbb{R}^H for each bit v, by optimizing each bitwise slack function. Given the weights, the optimal reconstruction of any example i can be specified by a layer of logistic sigmoid artificial neurons of its encoded bits, with w_v^\top e^{(i)} as the bitwise logits.

Hereafter, we write W \in \mathbb{R}^{V \times H} as the matrix of decoding weights, with rows \{w_v\}_{v=1}^V. In particular, the optimal decoding weights W^*(E, B) are the matrix with rows \{w_v^*(E, B)\}_{v=1}^V.
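As an illustration of Theorem 1, here is a minimal sketch (our own, not the paper's released code; step size and iteration count are arbitrary choices) of learning the decoding weights by gradient descent on the bitwise slack functions, and of the resulting reconstruction layer:

```python
import numpy as np

def decode_weights(E, B, steps=500, lr=0.1):
    """Minimize gamma_E(w, b_v) = b_v^T w + (1/n) sum_i Psi(w^T e^(i))
    for every bit v at once.  E: (H, n), B: (V, H).  Returns W: (V, H)."""
    n = E.shape[1]
    W = np.zeros_like(B)
    for _ in range(steps):
        M = W @ E                            # (V, n) margins w_v^T e^(i)
        # Psi'(m) = (1 - e^{-m}) / (1 + e^{-m}) = tanh(m / 2)
        grad = B + (np.tanh(M / 2) @ E.T) / n
        W -= lr * grad                       # gradient step on each convex slack
    return W

def reconstruct(W, E):
    """Minimax optimal reconstructions of Theorem 1, in [-1, 1];
    -tanh(m/2) equals (1 - e^m) / (1 + e^m)."""
    return -np.tanh((W @ E) / 2)
```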
3 LEARNING AN AUTOENCODER

3.1 FINDING AN ENCODED REPRESENTATION

Having computed the optimal decoding function in the previous section given any E and B, we now switch perspectives to the encoder, which seeks to compress the input data \hat{X} into encoded representations E (from which B is easily calculated to pass to the decoder). We seek to find (E, B) to ensure the lowest worst-case reconstruction loss after decoding; recall that this is L_B(E) from (3). Observe that \frac{1}{n} \hat{X} E^\top = B by definition, and that the encoder is given \hat{X}. Therefore, by using Thm. 1 and substituting b_v = \frac{1}{n} E \hat{x}_v for all v \in [V],

L_B(E) = \frac{1}{2n} \sum_{i=1}^n \sum_{v=1}^V \left[ -\hat{x}_v^{(i)} (w_v^{*\top} e^{(i)}) + \Psi(w_v^{*\top} e^{(i)}) \right] =: L(W^*, E) \quad (4)

So it is convenient to define the feature distortion¹ for any v \in [V] with respect to W, between any example x and its encoding e:

\Phi_v^W(e, x) := -x_v w_v^\top e + \Psi(w_v^\top e) \quad (5)

From the above discussion, the best E given any decoding W, written as E^*(W), solves the minimization

\min_{E \in [-1,1]^{H \times n}} L(W, E) = \frac{1}{2n} \sum_{i=1}^n \min_{e^{(i)} \in [-1,1]^H} \sum_{v=1}^V \Phi_v^W(e^{(i)}, \hat{x}^{(i)})

which immediately yields the following result.

Proposition 2. Define the optimal encodings for decoding weights W as E^*(W) := \arg\min_{E \in [-1,1]^{H \times n}} L(W, E). Then e^{*(i)}(W) can be computed separately for each example \hat{x}^{(i)} \in [-1,1]^V, minimizing its total feature distortion over the decoded bits w.r.t. W:

ENC(\hat{x}^{(i)}; W) := e^{*(i)}(W) := \arg\min_{e \in [-1,1]^H} \sum_{v=1}^V \Phi_v^W(e, \hat{x}^{(i)}) \quad (6)

Observe that the encoding function ENC(\hat{x}^{(i)}; W) can be efficiently computed to any desired precision since the feature distortion \Phi_v^W(e, \hat{x}^{(i)}) of each bit v is convex and Lipschitz in e; an L∞ error of ε can be reached in O(ε^{-2}) linear-time first-order optimization iterations. Note that the encodings need not be bits, and can be e.g. unconstrained ∈ \mathbb{R}^H instead; the proof of Thm. 1 assumes no structure on them, and the optimization will proceed as above but without projecting into the hypercube.

3.2 AN AUTOENCODER LEARNING ALGORITHM

Our ultimate goal is to minimize the worst-case reconstruction loss. As we have seen in (3) and (6), it is convex in the encoding E and in the decoding parameters W, each of which can be fixed while minimizing with respect to the other. This suggests a learning algorithm that alternately performs two steps: finding encodings E that minimize L(W, E) as in (6) with a fixed W, and finding decoding parameters W^*(E, B), as given in Algorithm 1.

Algorithm 1 Pairwise Correlation Autoencoder (PC-AE)
Input: Size-n dataset \hat{X}, number of epochs T
Initialize W_0 (e.g. with each element being i.i.d. N(0, 1))
for t = 1 to T do
  Encode each example to ensure accurate reconstruction using weights W_{t-1}, and compute the associated pairwise bit correlations B_t:
    ∀i ∈ [n]: [e^{(i)}]_t = ENC(\hat{x}^{(i)}; W_{t-1});  B_t = \frac{1}{n} \hat{X} E_t^\top
  Update weight vectors [w_v]_t for each v ∈ [V] to minimize the slack function, using encodings E_t:
    ∀v ∈ [V]: [w_v]_t = \arg\min_{w \in \mathbb{R}^H} \left[ [b_v]_t^\top w + \frac{1}{n} \sum_{i=1}^n \Psi(w^\top e_t^{(i)}) \right]
end for
Output: Weights W_T

¹ Noting that \Psi(w_v^\top e) \approx |w_v^\top e|, we see that \Phi_v^W(e, \hat{x}) \approx |w_v^\top e| - \hat{x}_v w_v^\top e = |w_v^\top e| (1 - \mathrm{sgn}(w_v^\top e) \hat{x}_v). So the optimizer tends to change e so that w_v^\top e matches signs with \hat{x}_v, motivating the name.

3.3 EFFICIENT IMPLEMENTATION

Our derivation of the encoding and decoding functions involves no model assumptions at all, only using the minimax structure and pairwise statistics that the algorithm is allowed to remember. Nevertheless, the (en/de)coders can be learned and implemented efficiently.

Decoding is a convex optimization in H dimensions, which can be done in parallel for each bit v ∈ [V]. This is relatively easy to solve in the parameter regime of primary interest when data are abundant, in which H < V ≪ n. Similarly, encoding is also a convex optimization problem in only H dimensions. If the data examples are instead sampled in minibatches, they can be encoded in parallel, with a new minibatch being sampled to start each epoch t. The number of examples n (per batch) is essentially only limited by nH, the number of compressed representations that fit in memory.

So far in this paper, we have stated our results in the transductive setting, in which all data are given together a priori, with no assumptions whatsoever made about the interdependences between the V features. However, PC-AE operates much more efficiently than this might suggest. Crucially, the encoding and decoding tasks both depend on n only to average a function of x^{(i)} or e^{(i)} over i ∈ [n], so they can both be solved by stochastic optimization methods that use first-order gradient information, like variants of stochastic gradient descent (SGD). We find it remarkable that the minimax optimal encoding and decoding can be efficiently learned by such methods, which do not scale computationally in n. Note that the result of each of these steps involves Ω(n) outputs (E and \tilde{X}), which are all coupled together in complex ways.

Furthermore, efficient first-order convex optimization methods for both encoding and decoding steps manipulate more intermediate gradient-related quantities, with facile interpretations. For details, see Appendix A.2.

3.4 CONVERGENCE AND WEIGHT REGULARIZATION

As we noted previously, the objective function of the optimization is biconvex. This means that the alternating minimization algorithm we specify is an instance of alternating convex search, shown in that literature to converge under broad conditions (Gorski et al. (2007)).
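For concreteness, here is a minimal NumPy sketch of Algorithm 1's alternating scheme (our own illustration under the definitions above, not the paper's released TensorFlow code); it reuses the decode_weights helper sketched after Theorem 1 and adds a projected-gradient encoding step:

```python
import numpy as np

def encode(X_hat, W, steps=200, lr=0.1):
    """Prop. 2: per-example projected gradient descent on the total
    feature distortion sum_v Phi_v^W(e, x).  X_hat: (V, n), W: (V, H)."""
    E = np.zeros((W.shape[1], X_hat.shape[1]))     # (H, n)
    for _ in range(steps):
        M = W @ E                                  # (V, n) margins
        # grad_e of sum_v Phi_v = W^T (Psi'(We) - x), with Psi'(m) = tanh(m/2)
        G = W.T @ (np.tanh(M / 2) - X_hat)         # (H, n)
        E = np.clip(E - lr * G, -1.0, 1.0)         # project into the hypercube
    return E

def pc_ae(X_hat, H, epochs=10):
    """Algorithm 1 (PC-AE): alternate the encoding and decoding steps."""
    V, n = X_hat.shape
    W = np.random.randn(V, H)                      # W_0, i.i.d. N(0, 1)
    for _ in range(epochs):
        E = encode(X_hat, W)                       # encoding step
        B = (X_hat @ E.T) / n                      # pairwise correlations B_t
        W = decode_weights(E, B)                   # decoding step (Thm. 1)
    return W, E
```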
It is not guaranteedto converge to the global optimum, but each iteration will monotonically decrease the objectivefunction. In light of our introductory discussion, the properties and rate of such convergence wouldbe interesting to compare to stochastic optimization algorithms for PCA, which converge efficientlyunder broad conditions (Balsubramani et al. (2013); Shamir (2016)).The basic game used so far has assumed perfect knowledge of the pairwise correlations, leading toequality constraints 8v2[V] :1nExv=bv. This makes sense in PC-AE , where the encodingphase of each epoch gives the exact Btfor the decoding phase. However, in other stochastic settingsas for denoising autoencoders (see Sec. 5.2), it may be necessary to relax this constraint. A relaxedconstraint of1nExvbv1exactly corresponds to an extra additive regularization term ofkwvk1on the corresponding weights in the convex optimization used to find W(Appendix D.1).Such regularization leads to provably better generalization (Bartlett (1998)) and is often practical touse, e.g. to encourage sparsity. But we do not use it for our PC-AE experiments in this paper.4 D ISCUSSION AND RELATED WORKOur approach PC-AE is quite different from existing autoencoding work in several ways.First and foremost, we posit no explicit decision rule, and avoid optimizing the highly non-convexdecision surface traversed by traditional autoencoding algorithms that learn with backpropagation(Rumelhart et al. (1986)). The decoding function, given the encodings, is a single layer of artificialneurons only because of the minimax structure of the problem when minimizing worst-case loss. Thisdiffers from reasoning typically used in neural net work (see Jordan (1995)), in which the loss is thenegative log-likelihood (NLL) of the joint probability, which is assumed to follow a form specifiedby logistic artificial neurons and their weights. We instead interpret the loss in the usual direct way asthe NLL of the predicted probability of the data given the visible bits, and avoid any assumptions onthe decision rule (e.g. not monotonicity in the score w>ve(i), or even dependence on such a score).5Published as a conference paper at ICLR 2017This justification of artificial neurons – as the minimax optimal decision rules given information onpairwise correlations – is one of our more distinctive contributions (see Sec. 5.1).Crucially, we make no assumptions whatsoever on the form of the encoding or decoding, excepton the memory used by the decoding. Some such “regularizing" restriction is necessary to rule outthe autoencoder just memorizing the data, and is typically expressed by assuming a model class ofcompositions of artificial neuron layers. We instead impose it axiomiatically by limiting the amountof information transmitted through B, which does not scale in n; but we do not restrict how thisinformation is used. This confers a clear theoretical advantage, allowing us to attain the strongestrobust loss guarantee among all possible autoencoders that use the correlations B.More importantly in practice, avoiding an explicit model class means that we do not have to optimizethe typically non-convex model, which has long been a central issue for backpropagation-basedlearning methods (e.g. Dauphin et al. (2014)). Prior work related in spirit has attempted to avoidthis through convex relaxations, including for multi-layer optimization under various structuralassumptions (Aslan et al. (2014); Zhang et al. 
(2016)), and when the number of hidden units is variedby the algorithm (Bengio et al. (2005); Bach (2014)).Our approach also isolates the benefit of higher nin dealing with overfitting, as the pairwisecorrelations Bcan be measured progressively more accurately as nincreases. In this respect, wefollow a line of research using such pairwise correlations to model arbitary higher-order structureamong visible units, rooted in early work on (restricted) Boltzmann Machines (Ackley et al. (1985);Smolensky (1986); Rumelhart & McClelland (1987); Freund & Haussler (1992)). More recently,theoretical algorithms have been developed with the perspective of learning from the correlationsbetween units in a network, under various assumptions on the activation function, architecture, andweights, for both deep (Arora et al. (2014)) and shallow networks (using tensor decompositions,e.g. Livni et al. (2014); Janzamin et al. (2015)). Our use of ensemble aggregation techniques (fromBalsubramani & Freund (2015a; 2016)) to study these problems is anticipated in spirit by prior workas well, as discussed at length by Bengio (2009) in the context of distributed representations.4.1 O PTIMALITY , OTHER ARCHITECTURES ,AND DEPTHWe have established that a single layer of logistic artificial neurons is an optimal decoder, givenonly indirect information about the data through pairwise correlations. This is not a claim thatautoencoders need only a single-layer architecture in the worst case. Sec. 3.1 establishes that the bestrepresentations Eare the solution to a convex optimization, with no artificial neurons involved incomputing them from the data. Unlike the decoding function, the optimal encoding function ENCcannot be written explicitly in terms of artificial neurons, and is incomparable to existing architectures(though it is analogous to PCA in prescribing an efficient operation that yields the encodings fromunlabeled data). Also, the encodings are only optimal given the pairwise correlations; trainingalgorithms like backpropagation, which communicate other knowledge of the data through derivativecomposition, can learn final decoding layers that outperform ours, as we see in experiments.In our framework so far, we explore using all the pairwise correlations between hidden and visiblebits to inform learning by constraining the adversary, resulting in a Lagrange parameter – a weight –for each constraint. These VH weights Wconstitute the parameters of the optimal decoding layer,describing a fully connected architecture. If just a select few of these correlations were used, onlythey would constrain the adversary in the minimax problem of Sec. 2, so weights would only beintroduced for them, giving rise to sparser architectures.Our central choices – to store only pairwise correlations and minimize worst-case reconstructionloss – play a similar regularizing role to explicit model assumptions, and other autoencoding methodsmay achieve better performance on data for which these choices are too conservative, by e.g. makingdistributional assumptions on the data. From our perspective, other architectures with more layers– particularly highly successful ones like convolutional, recurrent, residual, and ladder networks(LeCun et al. (2015); He et al. (2015); Rasmus et al. (2015)) – lend the autoencoding algorithm morepower by allowing it to measure more nuanced correlations using more parameters, which decreasesthe worst-case loss. 
Applying our approach with these would be interesting future work.Extending this paper’s convenient minimax characterization to deep representations with empiricalsuccess is a very interesting open problem. Prior work on stacking autoencoders/RBMs (Vincent et al.6Published as a conference paper at ICLR 2017(2010)) and our learning algorithm PC-AE suggest that we could train a deep network in alternatingforward and backward passes. Using this paper’s ideas, the forward pass would learn the weights toeach layer given the previous layer’s activations (and inter-layer pairwise correlations) by minimizingthe slack function, with the backward pass learning the activations for each layer given the weights to/ activations of the next layer by convex optimization (as we learn E). Both passes would consistof successive convex optimizations dictated by our approach, quite distinct from backpropagation,though loosely resembling the wake-sleep algorithm (Hinton et al. (1995)).4.2 G ENERATIVE APPLICATIONSParticularly recently, autoencoders have been of interest largely for their many applications beyondcompression, especially for their generative uses. The most directly relevant to us involve repurposingdenoising autoencoders (Bengio et al. (2013b); see Sec. 5.2); moment matching among hidden andvisible units (Li et al. (2015)); and generative adversarial network ideas (Goodfellow et al. (2014);Makhzani et al. (2015)), the latter particularly since the techniques of this paper have been applied tobinary classification (Balsubramani & Freund (2015a;b)). These are outside this paper’s scope, butsuggest themselves as future extensions of our approach.5 E XTENSIONS5.1 O THER RECONSTRUCTION LOSSESIt may make sense to use another reconstruction loss other than cross-entropy, for instance theexpected Hamming distance between x(i)and~x(i). It turns out that the minimax manipulations weuse work under very broad conditions, for nearly any loss that additively decomposes over the Vbitsas cross-entropy does. In such cases, all that is required is that the partial losses `+(~x(i)v);`(~x(i)v)aremonotonically decreasing and increasing respectively (recall that for cross-entropy loss, this is true as`(~x(i)v) = ln21~x(i)v); they need not even be convex. This monotonicity is a natural condition,because the loss measures the discrepancy to the true label, and holds for all losses in common use.Changing the partial losses only changes the structure of the minimax solution in two respects: byaltering the form of the transfer function on the decoding neurons, and the univariate potential well optimized to learn the decoding weights. Otherwise, the problem remains convex and the algorithmis identical. Formal statements of these general results are in Appendix E.5.2 D ENOISING AUTOENCODINGOur framework can be easily applied to learn a denoising autoencoder (DAE; Vincent et al. (2008;2010)), which uses noise-corrupted data (call it _X) for training, and uncorrupted data for evaluation.From our perspective, this corresponds to leaving the learning of Wunchanged, but using corrupteddata when learning E. Consequently, the minimization problem over encodings must be changed toaccount for the bias on Bintroduced by the noise; so the algorithm plays given the noisy data, but tominimize loss against X. 
This is easiest to see for zero-mean noise, for which our algorithms are completely unchanged because B does not change (in expectation) after the noise is added.

Another common scenario illustrating this technique is to mask a fraction β of the input bits uniformly at random (in our notation, changing +1s to −1s). This masking noise changes each pairwise correlation b_{v,h} by an amount Δ_{v,h} := \frac{1}{n} \sum_{i=1}^n (\dot{x}_v^{(i)} - x_v^{(i)}) e_h^{(i)}. Therefore, the optimand Eq. (4) must be modified by subtracting this factor Δ_{v,h}. This Δ_{v,h} can be estimated (w.h.p.) given \dot{x}_v, e_h, β, x_v. But even with just the noisy data and not x_v, we can estimate Δ_{v,h} w.h.p. by extrapolating the correlation of the bits of \dot{x}_v that are left as +1 (a 1 − β fraction) with the corresponding values in e_h (see Appendix C).
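As a sketch of this correction (our own illustration of the extrapolation idea described above, not the paper's Appendix C procedure; variable names and the independence assumption between masking and encodings are ours), Δ could be estimated from the noisy data alone as:

```python
import numpy as np

def estimate_delta(X_noisy, E, beta):
    """Estimate the masking-noise correction Delta: (V, H) from +/-1
    noisy data.  A masked bit flips +1 -> -1, so the masked +1s are a
    beta/(1-beta) multiple of the surviving +1s, and each contributes
    -2 e_h to x_v e_h."""
    n = X_noisy.shape[1]
    pos = (X_noisy + 1) / 2                       # indicator: bits left as +1
    surviving_corr = (pos @ E.T) / n              # correlation of surviving +1s
    return -2.0 * (beta / (1.0 - beta)) * surviving_corr
```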
Table 1: Cross-entropy reconstruction losses for PC-AE and a vanilla single-layer autoencoder, with binary and unconstrained real-valued encodings, and significant results in bold. The PC-AE results are significantly better (see Appendix A) than the AE results.

Dataset, H            | PC-AE (bin.) | PC-AE (real) | AE (bin.) | AE (real) | PCA
MNIST, H = 32         |     51.9     |     53.8     |   65.2    |   64.3    |  86.6
MNIST, H = 100        |      9.2     |      9.9     |   26.8    |   25.0    |  52.7
Omniglot, H = 32      |     76.1     |     77.2     |   93.1    |   90.6    | 102.8
Omniglot, H = 100     |     12.1     |     13.2     |   46.6    |   45.4    |  63.6
Caltech-101, H = 32   |     54.5     |     54.9     |   97.5    |   87.6    | 118.7
Caltech-101, H = 100  |      7.1     |      7.1     |   64.3    |   45.4    |  75.2
notMNIST, H = 32      |    121.9     |    122.4     |  149.6    |  141.8    | 174.0
notMNIST, H = 100     |     62.2     |     63.0     |   99.6    |   92.1    | 115.5
Adult, H = 10         |      7.7     |      7.8     |    9.3    |    8.1    |  13.5
Adult, H = 20         |      0.65    |      0.64    |    2.5    |    1.5    |   7.9

6 EXPERIMENTS

In this section we compare our approach² empirically to a standard autoencoder with one hidden layer (termed AE here) trained with backpropagation, and a thresholded PCA baseline. Our goal is simply to verify that our approach, though very different, is competitive in reconstruction performance.

The datasets we use are first normalized to [0,1], and then binarized by sampling each pixel stochastically in proportion to its intensity, following prior work (Salakhutdinov & Murray (2008)). Changing between binary and real-valued encodings in PC-AE requires just a line of code, to project the encodings into [−1,1]^H after convex optimization updates to compute ENC(·). We use Adagrad (Duchi et al. (2011)) for the convex minimizations of our algorithms; we observed that their performance is not very sensitive to the choice of optimization method, explained by our approach's convexity.

We compare to a basic AE with a single hidden layer, trained using the Adam method with default parameters (Kingma & Ba (2014)). Other models like variational autoencoders (Kingma & Welling (2013)) are not shown here because they do not aim to optimize reconstruction loss or are not comparably general autoencoding architectures. We also use a sign-thresholded PCA baseline (essentially a completely linear autoencoder, but with the output layer thresholded to be in [−1,1]); see Appendix A for more details. We vary the number of hidden units H for all algorithms, and try both binary and unconstrained real-valued encodings where appropriate; the respective AE uses logistic sigmoid and ReLU transfer functions for the encoding neurons. The results are in Table 1.

The reconstruction performance of PC-AE indicates that it can encode information very well using pairwise correlations, compared to the directly learned AE and PCA approaches. Loss can become extremely low when H is raised, giving B the capacity to robustly encode almost all the information in the input bits \hat{X}. The performance is roughly equal between binary hidden units and unconstrained ones, which is expected by our derivations.

We also try learning just the decoding layer of Sec. 2, on the encoded representation of the AE. This is motivated by the fact that Sec. 2 establishes our decoding method to be worst-case optimal given any E and B. We find the results to be significantly worse than the AE alone in all datasets used (e.g. reconstruction loss of 171 / 133 on MNIST, and 211 / 134 on Omniglot, with 32 / 100 hidden units respectively). This reflects the AE's training backpropagating information about the data beyond pairwise correlations, through non-convex function compositions – however, this comes at the cost of being more difficult to optimize. The representations learned by the ENC function of PC-AE are quite different and capture much more of the pairwise correlation information, which is used by the decoding layer in a worst-case optimal fashion. We attempt to visually depict the differences between the representations in Fig. 3.

As discussed in Sec. 4, we do not claim that this paper's method will always achieve the best empirical reconstruction loss, even among single-layer autoencoders. We would like to make the encoding function quicker to compute, as well. But we believe this paper's results, especially when H is high, illustrate the potential of using pairwise correlations for autoencoding as in our approach, learning to encode with alternating convex minimization and extremely strong worst-case robustness guarantees.

² TensorFlow code available at https://github.com/aikanor/pc-autoencoder .

Figure 1: Top row: randomly chosen test images from Caltech-101 silhouettes. Middle and bottom rows: corresponding reconstructions of PC-AE and AE with H = 32 binary hidden units.

Figure 2: As Fig. 1, with H = 100 on Omniglot. Difference in quality is particularly noticeable in the 1st, 5th, 8th, and 11th columns.

ACKNOWLEDGMENTS

I am grateful to Jack Berkowitz, Sanjoy Dasgupta, and Yoav Freund for helpful discussions; Daniel Hsu and Akshay Krishnamurthy for instructive examples; and Gary Cottrell for an enjoyable chat. I acknowledge funding from the NIH (grant R01ES02500902).
Sy2MfVHNl
Review
7: Good paper, accept
The paper presents a novel look at binary auto-encoders, formulating the objective function as a min-max reconstruction error over a training set given the observed intermediate representations. The author shows that this formulation leads to a bi-convex problem that can be solved by alternating minimisation methods; this part is non-trivial and is the main contribution of the paper. Proof-of-concept experiments are performed, showing improvements for 1-hidden layer auto-encoders with respect to a vanilla approach. The experimental section is fairly weak because the literature on auto-encoders is huge and many variants were shown to perform better than straightforward approaches without being more complicated (e.g., denoising auto-encoders). Yet, the paper presents an analysis that leads to a new learning algorithm for an old problem, and is likely worth discussing.
3: The reviewer is fairly confident that the evaluation is correct
rJY0-Kcll
ICLR.cc/2017/conference
2017
Optimization as a Model for Few-Shot Learning
["Sachin Ravi", "Hugo Larochelle"]
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
["model", "optimization", "learning", "learning optimization", "deep neural networks", "great success", "large data domain", "learning tasks", "examples", "class"]
ABSTRACTThough deep neural networks have shown great success in the large data domain,they generally perform poorly on few-shot learning tasks, where a classifier has toquickly generalize after seeing very few examples from each class. The generalbelief is that gradient-based optimization in high capacity classifiers requires manyiterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to trainanother learner neural network classifier in the few-shot regime. The parametriza-tion of our model allows it to learn appropriate parameter updates specifically forthe scenario where a set amount of updates will be made, while also learning ageneral initialization of the learner (classifier) network that allows for quick con-vergence of training. We demonstrate that this meta-learning model is competitivewith deep metric-learning techniques for few-shot learning.1 I NTRODUCTIONDeep learning has shown great success in a variety of tasks with large amounts of labeled data inimage classification (He et al., 2015), machine translation (Wu et al., 2016), and speech modeling(Oord et al., 2016). These achievements have relied on the fact that optimization of these deep,high-capacity models requires many iterative updates across many labeled examples. This type ofoptimization breaks down in the small data regime where we want to learn from very few labeledexamples. In this setting, rather than have one large dataset, we have a set of datasets, each with fewannotated examples per class. The motivation for this task lies not only in the fact that humans, evenchildren, can usually generalize after just one example of a given object, but also because modelsexcelling at this task would have many useful applications. Firstly, they would help alleviate datacollection as we would not require millions of labeled examples to attain reasonable performance.Furthermore, in many fields, data exhibits the characteristic of having many different classes but fewexamples per class. Models that are able to generalize from few examples would be able to capturethis type of data effectively.There seem to be two main reasons why gradient-based optimization fails in the face of few la-beled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum(Nesterov, 1983), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and ADAM (Kingma &Ba, 2014), weren’t designed specifically to perform well under the constraint of a set number ofupdates. Specifically when applied to non-convex optimization problems, with a reasonable choiceof hyperparameters these algorithms don’t have very strong guarantees of speed of convergence,beyond that they will eventually converge to a good solution after what could be many millions ofiterations. Secondly, for each separate dataset considered, the network would have to start from arandom initialization of its parameters, which considerably hurts its ability to converge to a goodsolution after a few updates. Transfer learning (Caruana, 1995; Bengio et al., 2012; Donahue et al.,2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another taskwhich has more labelled data; however, it has been observed that the benefit of a pre-trained networkgreatly decreases as the task the network was trained on diverges from the target task (Yosinski et al.,2014). 
What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning.

(* Work done as an intern at Twitter. Sachin is a PhD student at Princeton University and can be reached at sachinr@princeton.edu .)

Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks.

We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm's ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks.

2 TASK DESCRIPTION

We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset D and usually split D so that we optimize parameters on a training set D_train and evaluate its generalization on the test set D_test. In meta-learning, however, we are dealing with meta-sets \mathcal{D} containing multiple regular datasets, where each D \in \mathcal{D} has a split of D_train and D_test.

We consider the k-shot, N-class classification task, where for each dataset D, the training set consists of k labelled examples for each of N classes, meaning that D_train consists of k·N examples, and D_test has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set.

In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing (\mathcal{D}_{meta-train}, \mathcal{D}_{meta-validation}, and \mathcal{D}_{meta-test}, respectively). On \mathcal{D}_{meta-train}, we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets D_train and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set D_test. Using \mathcal{D}_{meta-validation} we can perform hyper-parameter selection of the meta-learner and evaluate its generalization performance on \mathcal{D}_{meta-test}.

For this formulation to correspond to the few-shot learning setting, each training set in datasets D \in \mathcal{D} will contain few labeled examples (we consider k = 1 or k = 5), that must be used to generalize to good performance on the corresponding test set.
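As a minimal sketch of this episode structure (our own illustration; the class-to-examples mapping and default sizes are hypothetical, not from the paper), sampling one k-shot, N-class dataset could look like:

```python
import random

def sample_episode(class_to_examples, N=5, k=1, n_test_per_class=15):
    """Sample one k-shot, N-class episode (D_train, D_test) from a meta-set
    given as {class_label: [examples]}."""
    classes = random.sample(sorted(class_to_examples), N)
    d_train, d_test = [], []
    for label, cls in enumerate(classes):       # relabel sampled classes 0..N-1
        exs = random.sample(class_to_examples[cls], k + n_test_per_class)
        d_train += [(x, label) for x in exs[:k]]
        d_test += [(x, label) for x in exs[k:]]
    return d_train, d_test
```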
An example of this formulation is given in Figure 1.

3 MODEL

We now move to the description of our proposed model for meta-learning.

3.1 MODEL DESCRIPTION

Consider a single dataset, or episode, D \in \mathcal{D}_{meta-train}. Suppose we have a learner neural net classifier with parameters θ that we want to train on D_train. The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form

θ_t = θ_{t-1} - α_t ∇_{θ_{t-1}} L_t, \quad (1)

where θ_{t-1} are the parameters of the learner after t-1 updates, α_t is the learning rate at time t, L_t is the loss optimized by the learner for its t-th update, ∇_{θ_{t-1}} L_t is the gradient of that loss with respect to parameters θ_{t-1}, and θ_t is the updated parameters of the learner.

Figure 1: Example of meta-learning setup. The top represents the meta-training set \mathcal{D}_{meta-train}, where inside each gray box is a separate dataset that consists of the training set D_train (left side of dashed line) and the test set D_test (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set \mathcal{D}_{meta-test} is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in \mathcal{D}_{meta-train} (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters).

Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber, 1997)

c_t = f_t ⊙ c_{t-1} + i_t ⊙ \tilde{c}_t, \quad (2)

if f_t = 1, c_{t-1} = θ_{t-1}, i_t = α_t, and \tilde{c}_t = -∇_{θ_{t-1}} L_t.

Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or c_t = θ_t, and the candidate cell state \tilde{c}_t = -∇_{θ_{t-1}} L_t, given how valuable information about the gradient is for optimization. We define parametric forms for i_t and f_t so that the meta-learner can determine optimal values through the course of the updates.

Let us start with i_t, which corresponds to the learning rate for the updates. We let

i_t = σ( W_I · [∇_{θ_{t-1}} L_t, L_t, θ_{t-1}, i_{t-1}] + b_I ),

meaning that the learning rate is a function of the current parameter value θ_{t-1}, the current gradient ∇_{θ_{t-1}} L_t, the current loss L_t, and the previous learning rate i_{t-1}. With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence.

As for f_t, it seems possible that the optimal choice isn't the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optimum and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate:

f_t = σ( W_F · [∇_{θ_{t-1}} L_t, L_t, θ_{t-1}, f_{t-1}] + b_F ).

Additionally, notice that we can also learn the initial value of the cell state c_0 for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training).
Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner's update rule matches the cell state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden state update, with the exception that the forget and input gates aren't tied to sum to one.

3.2 PARAMETER SHARING & PREPROCESSING

Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model and additionally has the nice property that the same update rule is used for each coordinate, but one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs (∇_{θ_{t,i}} L_t, L_t) for each dimension i.

Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step:

x → ( log(|x|)/p , sgn(x) )   if |x| ≥ e^{-p}
    ( -1 , e^p x )            otherwise

This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of p = 10 in the above formula worked well in our experiments.

3.3 TRAINING

The question now is how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learning, for each dataset (episode), D = (D_train, D_test) \in \mathcal{D}_{meta-test}, a good meta-learner model will, given a series of learner gradients and losses on the training set D_train, suggest a series of updates for the classifier that pushes it towards good performance on the test set D_test.

Thus to match test time conditions, when considering each dataset D \in \mathcal{D}_{meta-train}, the training objective we use is the loss L_test of the produced classifier on D's test set D_test. While iterating over the examples in D's training set D_train, at each time step t the LSTM meta-learner receives (∇_{θ_{t-1}} L_t, L_t) from the learner (the classifier) and proposes the new set of parameters θ_t. The process repeats for T steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1 and the corresponding computational graph is shown in Figure 2.

3.3.1 GRADIENT INDEPENDENCE ASSUMPTION

Notice that our formulation would imply that the losses L_t and gradients ∇_{θ_{t-1}} L_t of the learner are dependent on the parameters of the meta-learner.
Gradients on the meta-learner’s parameters shouldnormally take this dependency into account. However, as discussed by Andrychowicz et al. (2016),this complicates the computation of the meta-learner’s gradients. Thus, following Andrychowiczet al. (2016), we make the simplifying assumption that these contributions to the gradients aren’timportant and can be ignored, which allows us to avoid taking second derivatives, a considerablyexpensive operation. We were still able to train the meta-learner effectively in spite of this simplify-ing assumption.4Published as a conference paper at ICLR 2017Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line dividesexamples from the training set Dtrain and test setDtest. Each (Xi;Yi)is theithbatch from thetraining set whereas (X;Y)is all the elements from the test set. The dashed arrows indicate that wedo not back-propagate through that step when training the meta-learner. We refer to the learner asM, whereM(X;)is the output of learner Musing parameters for inputs X. We also usertasa shorthand forrt1Lt.3.3.2 I NITIALIZATION OF META-LEARNER LSTMWhen training LSTMs, it is advised to initialize the LSTM with small random weights and to set theforget gate bias to a large value so that the forget gate is initialized to be close to 1, thus enablinggradient flow (Zaremba, 2015). In addition to the forget gate bias setting, we found that we neededto initialize the input gate bias to be small so that the input gate value (and thus the learning rate)used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stabilityof training.3.4 B ATCH NORMALIZATIONBatch Normalization (Ioffe & Szegedy, 2015) is a recently proposed method to stabilize and thusspeed up learning of deep neural networks by reducing internal covariate shift within the learner’shidden layers. This reduction is achieved by normalizing each layer’s pre-activation, by subtractingby the mean and dividing by the standard deviation. During training, the mean and standard devi-ation are estimated using the current batch being trained on, whereas during evaluation a runningaverage of both statistics calculated on the training set is used. We need to be careful with batchnormalization for the learner network in the meta-learning setting, because we do not want to collectmean and standard deviation statistics during meta-testing in a way that allows information to leakbetween different datasets (episodes), being considered. One easy way to prevent this issue is to notcollect statistics at all during the meta-testing phase, but just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-trainingand meta-testing conditions, causing the meta-learner to learn a method of optimization that relieson batch statistics which it now does not have at meta-testing time. In order to keep the two phasesas similar as possible, we found that a better strategy was to collect statistics for each dataset D2Dduring Dmetatest, but then erase the running statistics when we consider the next dataset. Thus,during meta-training, we use batch statistics for both the training and testing set whereas duringmeta-testing, we use batch statistics for the training set (and to compute our running averages) butthen use the running averages during testing. 
This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values.

Algorithm 1 Train Meta-Learner
Input: Meta-training set \mathcal{D}_{meta-train}, Learner M with parameters θ, Meta-Learner R with parameters Θ.
1:  Θ_0 ← random initialization
2:
3:  for d = 1, n do
4:    D_train, D_test ← random dataset from \mathcal{D}_{meta-train}
5:    θ_0 ← c_0                                  ▷ Initialize learner parameters
6:
7:    for t = 1, T do
8:      X_t, Y_t ← random batch from D_train
9:      L_t ← L(M(X_t; θ_{t-1}), Y_t)            ▷ Get loss of learner on train batch
10:     c_t ← R((∇_{θ_{t-1}} L_t, L_t); Θ_{d-1}) ▷ Get output of meta-learner using Equation 2
11:     θ_t ← c_t                                ▷ Update learner parameters
12:   end for
13:
14:   X, Y ← D_test
15:   L_test ← L(M(X; θ_T), Y)                   ▷ Get loss of learner on test batch
16:   Update Θ_d using ∇_{Θ_{d-1}} L_test        ▷ Update meta-learner parameters
17:
18: end for

4 RELATED WORK

While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning.

4.1 META-LEARNING

Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting.

Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule.

In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc, both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method the classifier network is directly produced rather than being fine-tuned after multiple training steps. Our work also bears similarity to Maclaurin et al.
4 RELATED WORK

While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning.

4.1 META-LEARNING

Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting.

Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule.

In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc, both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method, the classifier network is directly produced rather than being fine-tuned after multiple training steps. Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance.

4.2 FEW-SHOT LEARNING

The best performing methods for few-shot learning have been mainly metric-learning methods. Deep siamese networks (Koch, 2015) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al., 2016) refine this idea so that training and testing conditions match, by defining a differentiable nearest-neighbor loss involving the cosine similarities of embeddings produced by a convolutional network.

5 EVALUATION

In this section, we describe the results of experiments examining the properties of our model and comparing our method's performance against different approaches. Following Vinyals et al. (2016), we consider the k-shot, N-class classification setting where a meta-learner trains on many related but small training sets of k examples for each of N classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a k-shot, N-class task dataset D = (D_train, D_test) ∈ 𝒟, we do the following: we first sample N classes from the list of classes corresponding to the meta-set we consider. We then sample k examples from each of those classes. These k examples per class together compose the training set D_train. Then, an additional fixed amount of the rest of the examples is sampled to yield a test set D_test. We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small.

For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a 3x3 convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a 2x2 max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function L is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first-layer LSTM, and the regular gradient coordinates are also used by the second-layer LSTM to implement the state update rule shown in Equation (1). At each time step, the learner's loss and gradient are computed on a batch consisting of the entire training set D_train, because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25.
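As a concrete illustration, here is a minimal sketch of such a learner in PyTorch. Layer sizes follow the description above; the RGB input and same-padding are assumptions, and the lazily-initialized linear head is a convenience since the input resolution is not fixed here:

```python
import torch.nn as nn

def make_learner(num_classes: int) -> nn.Sequential:
    """4 blocks of (3x3 conv, 32 filters, batch norm, ReLU, 2x2 max-pool),
    then a linear classification head (sketch of the learner described above)."""
    layers = []
    in_channels = 3  # assumed RGB input
    for _ in range(4):
        layers += [
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        ]
        in_channels = 32
    layers += [nn.Flatten(), nn.LazyLinear(num_classes)]
    return nn.Sequential(*layers)

# The loss is the average negative log-probability of the correct class:
loss_fn = nn.CrossEntropyLoss()  # log-softmax + negative log-likelihood
```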
5.1 EXPERIMENT RESULTS

The Mini-ImageNet dataset was proposed by Vinyals et al. (2016) as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-ImageNet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation, and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al., 2016), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1. (Code can be found at https://github.com/twitter/meta-learning-lstm.)

The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset D, we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set. We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on the test set. We use a fixed number of updates for fine-tuning and search over the learning rate and learning-rate decay used during the course of these updates.

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching-net convolutional networks have 4 layers, each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting.

For our meta-learner, we train different models for the 1-shot and 5-shot tasks, which make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing.

We attain results that are much better than the baselines discussed and competitive with Matching Networks.

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold in the original are the best results for each scenario, as well as other results with an overlapping confidence interval.

    Model                       | 5-class 1-shot  | 5-class 5-shot
    ----------------------------+-----------------+----------------
    Baseline-finetune           | 28.86 ± 0.54%   | 49.79 ± 0.79%
    Baseline-nearest-neighbor   | 41.08 ± 0.70%   | 51.04 ± 0.65%
    Matching Network            | 43.40 ± 0.78%   | 51.09 ± 0.71%
    Matching Network FCE        | 43.56 ± 0.84%   | 55.31 ± 0.73%
    Meta-Learner LSTM (OURS)    | 43.44 ± 0.77%   | 60.60 ± 0.71%
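Table 1 reports mean accuracy with a 95% confidence interval over the fixed set of evaluation episodes. As a sketch of how such an interval can be computed (assuming a normal approximation, which the paper does not spell out):

```python
import math

def mean_with_ci95(accuracies: list[float]) -> tuple[float, float]:
    """Mean episode accuracy and its 95% confidence half-width."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var / n)  # 1.96 = z-score for 95% coverage
    return mean, half_width

# e.g. mean, hw = mean_with_ci95(per_episode_accuracy)
#      print(f"{mean:.2%} ± {hw:.2%}")
```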
For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our own version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end, as is done in the meta-learning LSTM.

5.2 VISUALIZATION OF META-LEARNER

We also visualize the optimization strategy learned by the meta-learner in Figure 3. We can look at the i_t and f_t gate values in Equation 2 at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets D_train, to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight-decay strategy that seems consistent across different layers. The input gate values are harder to interpret to glean the meta-learner's strategy. However, there seems to be a lot of variability between different datasets, indicating that the meta-learner isn't simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting.

Figure 3: Visualization of the input and forget gate values output by the meta-learner during the course of its updates, with panels for the forget and input gate values of the 1-shot and 5-shot meta-learners. Layers 1-4 represent the values for a randomly selected parameter from the 4 convolutional layers, and layer 5 represents the values for a random parameter from the fully-connected layer. The different curves represent training steps on different datasets.
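A sketch of how such gate traces might be collected, assuming the meta-learner can expose its per-step gate activations (the `return_gates` flag and `update` signature are hypothetical, as are the `learner_loss_and_grads` helper and the `tracked` coordinate indices):

```python
def collect_gate_traces(meta_learner, train_set, T, tracked):
    """Record input/forget gate values for a few tracked parameter
    coordinates over T updates, for plots like Figure 3."""
    theta = meta_learner.initial_parameters()
    i_trace, f_trace = [], []
    for t in range(T):
        X, Y = train_set.sample_batch()
        loss, grads = learner_loss_and_grads(theta, X, Y)
        theta, i_t, f_t = meta_learner.update(theta, grads.detach(),
                                              loss.detach(), return_gates=True)
        i_trace.append([float(i_t[j]) for j in tracked])  # learning-rate-like values
        f_trace.append([float(f_t[j]) for j in tracked])  # weight-decay-like values
    return i_trace, f_trace
```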
6 CONCLUSION

We described an LSTM-based model for meta-learning, which is inspired by the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner's parameters, as well as a successful mechanism for updating the learner's parameters to a given small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive with the state-of-the-art in metric learning for few-shot learning.

In this work, we focused our study on the few-shot and few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e., for few or lots of training examples and for few or lots of possible classes. Our future work will thus consider moving towards this more challenging scenario.

ACKNOWLEDGMENTS

We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work.
r1bVaaUNx
An interesting work to understand gradient descent as a recurrent process
6: Marginally above acceptance threshold
This paper describes a new approach to meta-learning by interpreting the SGD update rule as a gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points.

Pros:
- An interesting and feasible approach to meta-learning
- Competitive results and a proper comparison to the state-of-the-art
- Good recommendations for practical systems

Cons:
- The analogy would be closer to GRUs than LSTMs
- The description of the data separation into meta-sets is hard to follow and could be visualized
- The experimental evaluation is only partly satisfying; especially the effect of the parameters of i_t and f_t would be of interest
- Fig. 2 doesn't have much value

Remarks:
- Small typo in 3.2: "This means each coordinate has it" -> its

> We plan on releasing the code used in our evaluation experiments.

This would certainly be a major plus.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJY0-Kcll
ICLR.cc/2017/conference
2017
Optimization as a Model for Few-Shot Learning
["Sachin Ravi", "Hugo Larochelle"]
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
["model", "optimization", "learning", "learning optimization", "deep neural networks", "great success", "large data domain", "learning tasks", "examples", "class"]
ABSTRACT

Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high-capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.

1 INTRODUCTION

Deep learning has shown great success in a variety of tasks with large amounts of labeled data in image classification (He et al., 2015), machine translation (Wu et al., 2016), and speech modeling (Oord et al., 2016). These achievements have relied on the fact that optimization of these deep, high-capacity models requires many iterative updates across many labeled examples. This type of optimization breaks down in the small data regime, where we want to learn from very few labeled examples. In this setting, rather than having one large dataset, we have a set of datasets, each with few annotated examples per class. The motivation for this task lies not only in the fact that humans, even children, can usually generalize after just one example of a given object, but also because models excelling at this task would have many useful applications. Firstly, they would help alleviate data collection, as we would not require millions of labeled examples to attain reasonable performance. Furthermore, in many fields, data exhibits the characteristic of having many different classes but few examples per class. Models that are able to generalize from few examples would be able to capture this type of data effectively.

There seem to be two main reasons why gradient-based optimization fails in the face of few labeled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum (Nesterov, 1983), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and ADAM (Kingma & Ba, 2014), weren't designed specifically to perform well under the constraint of a set number of updates. Specifically, when applied to non-convex optimization problems, with a reasonable choice of hyperparameters these algorithms don't have very strong guarantees of speed of convergence, beyond the fact that they will eventually converge to a good solution after what could be many millions of iterations. Secondly, for each separate dataset considered, the network would have to start from a random initialization of its parameters, which considerably hurts its ability to converge to a good solution after a few updates. Transfer learning (Caruana, 1995; Bengio et al., 2012; Donahue et al., 2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another task which has more labelled data; however, it has been observed that the benefit of a pre-trained network greatly decreases as the task the network was trained on diverges from the target task (Yosinski et al., 2014).
What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning.

(* Work done as an intern at Twitter. Sachin is a PhD student at Princeton University and can be reached at sachinr@princeton.edu.)

Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks.

We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm's ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks.

2 TASK DESCRIPTION

We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset D and usually split D so that we optimize parameters θ on a training set D_train and evaluate its generalization on the test set D_test. In meta-learning, however, we are dealing with meta-sets 𝒟 containing multiple regular datasets, where each D ∈ 𝒟 has a split into D_train and D_test.

We consider the k-shot, N-class classification task, where for each dataset D, the training set consists of k labelled examples for each of N classes, meaning that D_train consists of kN examples, and D_test has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set.

In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing (𝒟_meta-train, 𝒟_meta-validation, and 𝒟_meta-test, respectively). On 𝒟_meta-train, we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets D_train and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set D_test. Using 𝒟_meta-validation, we can perform hyper-parameter selection of the meta-learner and evaluate its generalization performance on 𝒟_meta-test.

For this formulation to correspond to the few-shot learning setting, each training set in the datasets D ∈ 𝒟 will contain few labeled examples (we consider k = 1 or k = 5) that must be used to generalize to good performance on the corresponding test set.
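A minimal, self-contained sketch of how a k-shot, N-class episode can be sampled from a pool of labelled examples (the function name and the dictionary-based data layout are assumptions for illustration):

```python
import random

def sample_episode(pool, n_classes=5, k_shot=1, n_test_per_class=15):
    """pool: dict mapping class label -> list of examples for that class.
    Returns (train_set, test_set) as lists of (example, episode_label) pairs."""
    classes = random.sample(list(pool), n_classes)
    train_set, test_set = [], []
    for episode_label, cls in enumerate(classes):
        examples = random.sample(pool[cls], k_shot + n_test_per_class)
        train_set += [(x, episode_label) for x in examples[:k_shot]]
        test_set += [(x, episode_label) for x in examples[k_shot:]]
    return train_set, test_set
```

Note that labels are re-indexed per episode (1 to N in the paper's Figure 1), since class identities are not shared across episodes.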
An example of this formulation is given in Figure 1.

Figure 1: Example of meta-learning setup. The top represents the meta-training set 𝒟_meta-train, where inside each gray box is a separate dataset that consists of the training set D_train (left side of the dashed line) and the test set D_test (right side of the dashed line). In this illustration, we are considering the 1-shot, 5-class classification task, where for each dataset we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set 𝒟_meta-test is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in 𝒟_meta-train (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters).

3 MODEL

We now move to the description of our proposed model for meta-learning.

3.1 MODEL DESCRIPTION

Consider a single dataset, or episode, D ∈ 𝒟_meta-train. Suppose we have a learner neural net classifier with parameters θ that we want to train on D_train. The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form

    θ_t = θ_{t-1} − α_t ∇_{θ_{t-1}} L_t,    (1)

where θ_{t-1} are the parameters of the learner after t − 1 updates, α_t is the learning rate at time t, L_t is the loss optimized by the learner for its t-th update, ∇_{θ_{t-1}} L_t is the gradient of that loss with respect to the parameters θ_{t-1}, and θ_t is the updated parameters of the learner.

Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber, 1997):

    c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t,    (2)

if f_t = 1, c_{t-1} = θ_{t-1}, i_t = α_t, and c̃_t = −∇_{θ_{t-1}} L_t.

Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or c_t = θ_t, and the candidate cell state c̃_t = −∇_{θ_{t-1}} L_t, given how valuable information about the gradient is for optimization. We define parametric forms for i_t and f_t so that the meta-learner can determine optimal values through the course of the updates.

Let us start with i_t, which corresponds to the learning rate for the updates. We let

    i_t = σ( W_I · [∇_{θ_{t-1}} L_t, L_t, θ_{t-1}, i_{t-1}] + b_I ),

meaning that the learning rate is a function of the current parameter value θ_{t-1}, the current gradient ∇_{θ_{t-1}} L_t, the current loss L_t, and the previous learning rate i_{t-1}. With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence.

As for f_t, it seems possible that the optimal choice isn't the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optimum and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate:

    f_t = σ( W_F · [∇_{θ_{t-1}} L_t, L_t, θ_{t-1}, f_{t-1}] + b_F ).

Additionally, notice that we can also learn the initial value of the cell state c_0 for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner's update rule matches the cell-state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden-state update, with the exception that the forget and input gates aren't tied to sum to one.
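A minimal sketch of this per-coordinate update in PyTorch. For clarity it is deliberately simplified relative to the full model: the gates here are computed directly from raw inputs, whereas the paper first preprocesses the gradients and losses and passes them through a regular LSTM layer. The flat-parameter layout and shapes are assumptions:

```python
import torch

def meta_update(theta, grad, loss, i_prev, f_prev, W_I, b_I, W_F, b_F):
    """One meta-learner step: c_t = f_t * c_{t-1} + i_t * (-grad), per coordinate.
    theta, grad, i_prev, f_prev: shape (P,); loss: scalar; W_I, W_F: shape (4,)."""
    loss_vec = loss.expand_as(theta)  # broadcast the scalar loss to all coords
    feats_i = torch.stack([grad, loss_vec, theta, i_prev], dim=-1)  # (P, 4)
    feats_f = torch.stack([grad, loss_vec, theta, f_prev], dim=-1)
    i_t = torch.sigmoid(feats_i @ W_I + b_I)  # input gate: per-coordinate step size
    f_t = torch.sigmoid(feats_f @ W_F + b_F)  # forget gate: weight-decay-like factor
    theta_next = f_t * theta + i_t * (-grad)  # Equation 2 with c_t = theta_t
    return theta_next, i_t, f_t
```

The shared weights (W_I, b_I, W_F, b_F) are the same across all coordinates, which is the parameter-sharing scheme described in Section 3.2 below.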
3.2 PARAMETER SHARING & PREPROCESSING

Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus, as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values, but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model and additionally has the nice property that the same update rule is used for each coordinate, but one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs (∇_{θ_{t,i}} L_t, L_t) for each dimension i.

Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step:

    x → ( log(|x|) / p , sgn(x) )   if |x| ≥ e^{−p}
        ( −1 , e^p · x )            otherwise

This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of p = 10 in the above formula worked well in our experiments.

3.3 TRAINING

The question now is: how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learning, for each dataset (episode) D = (D_train, D_test) ∈ 𝒟_meta-test, a good meta-learner model will, given a series of learner gradients and losses on the training set D_train, suggest a series of updates for the classifier that pushes it towards good performance on the test set D_test.

Thus, to match test-time conditions, when considering each dataset D ∈ 𝒟_meta-train, the training objective we use is the loss L_test of the produced classifier on D's test set D_test. While iterating over the examples in D's training set D_train, at each time step t the LSTM meta-learner receives (∇_{θ_{t-1}} L_t, L_t) from the learner (the classifier) and proposes the new set of parameters θ_t. The process repeats for T steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1 and the corresponding computational graph is shown in Figure 2.

3.3.1 GRADIENT INDEPENDENCE ASSUMPTION

Notice that our formulation would imply that the losses L_t and gradients ∇_{θ_{t-1}} L_t of the learner are dependent on the parameters of the meta-learner.
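A self-contained sketch of the preprocessing rule from Section 3.2 above (the function name is ours; p = 10 follows the text):

```python
import math

def preprocess(x: float, p: float = 10.0) -> tuple[float, float]:
    """Rescale a gradient/loss coordinate and separate its magnitude from
    its sign, following Andrychowicz et al. (2016)."""
    if abs(x) >= math.exp(-p):
        return math.log(abs(x)) / p, math.copysign(1.0, x)
    return -1.0, math.exp(p) * x

# e.g. preprocess(0.5)   -> (-0.0693..., 1.0)
#      preprocess(-1e-7) -> (-1.0, -0.0022...)
```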
SyiRxi7El
Strong paper but presentation unclear at times
8: Top 50% of accepted papers, clear accept
In light of the authors' responsiveness and the updates to the manuscript -- in particular to clarify the meta-learning task -- I am updating my score to an 8.

-----

This manuscript proposes to tackle few-shot learning with neural networks by leveraging meta-learning, a classic idea that has seen a renaissance in the last 12 months. The authors formulate few-shot learning as a sequential meta-learning problem: each "example" includes a sequence of batches of "training" pairs, followed by a final "test" batch. The inputs at each "step" include the outputs of a "base learner" (e.g., training loss and gradients), as well as the base learner's current state (parameters). The paper applies an LSTM to this meta-learning problem, using the inner memory cells in the *second* layer to directly model the updated parameters of the base learner. In doing this, they note similarities between the respective update rules of LSTM memory cells and gradient descent. Updates to the LSTM meta-learner are computed based on the base learner's prediction loss for the final "test" batch. The authors make several simplifying assumptions, such as sharing weights across all second-layer cells (analogous to using the same learning rate for all parameters). The paper recreates the Mini-ImageNet data set proposed in Vinyals et al. 2016, and shows that the meta-learner LSTM is competitive with the current state-of-the-art (Matching Networks, Vinyals 2016) on 1- and 5-shot learning.

Strengths:
- It is intriguing -- and in hindsight, natural -- to cast the few-shot learning problem as a sequential (meta-)learning problem. While the authors did not originate the general idea of persisting learning across a series of learning problems, I think it is fair to say that they have advanced the state of the art, though I cannot confidently assert its novelty as I am not deeply familiar with recent work on meta-learning.
- The proposed approach is competitive with and outperforms Vinyals 2016 in 1-shot and 5-shot Mini-ImageNet experiments.
- The base learner in this setting (a simple ConvNet classifier) is quite different from the nearest-neighbor-on-top-of-learned-embedding approach used in Vinyals 2016. It is always exciting when state-of-the-art results can be reported using very different approaches, rather than incremental follow-up work.
- As far as I know, the insight about the relationship between the memory cell and gradient descent updates is novel here. It is interesting regardless.
- The paper offers several practical insights about how to design and train an LSTM meta-learner, which should make it easier for others to replicate this work and apply these ideas to new problems. These include proper initialization, weight sharing across coordinates, and the importance of normalizing/rescaling the loss, gradient, and parameter inputs. Some of the insights have been previously described (the importance of simulating test conditions during meta-training; assuming independence between meta-learner and base learner parameters when taking gradients with respect to the meta-learner parameters), but the discussion here is useful nonetheless.

Weaknesses:
- The writing is at times quite opaque. While it describes very interesting work, I would not call the paper an enjoyable read. It took me multiple passes (as well as consulting related work) to understand the general learning problem. The task description in Section 2 (Page 2) is very abstract and uses notation and language that is not common outside of this sub-area. The paper could benefit from a brief concrete example (based on MNIST is fine), perhaps paired with a diagram illustrating a sequence of few-shot learning tasks. This would definitely make it accessible to a wider audience.
- Following up on that note, the precise nature of the N-class, few-shot learning problem here is unclear to me. Specifically, the Mini-ImageNet data set has 100 labels, of which 64/16/20 are used during meta-training/validation/testing. Does this mean that only 64/100 classes are observed through meta-training? Or does it mean that only 64/100 are observed in each batch, but on average all 100 are observed during meta-training? If it's the former, how many outputs does the softmax layer of the ConvNet base learner have during meta-training? 64 (only those observed in training) or 100 (of which 36 are never observed)? Many other details like these are unclear (see question).
- The plots in Figure 2 are pretty uninformative in and of themselves, and the discussion section offers very little insight around them.

This is an interesting paper with convincing results. It seems like a fairly clear accept, but the presentation of the ideas and work therein could be improved. I will definitely raise my score if the writing is improved.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJY0-Kcll
ICLR.cc/2017/conference
2017
Optimization as a Model for Few-Shot Learning
["Sachin Ravi", "Hugo Larochelle"]
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
["model", "optimization", "learning", "learning optimization", "deep neural networks", "great success", "large data domain", "learning tasks", "examples", "class"]
ABSTRACTThough deep neural networks have shown great success in the large data domain,they generally perform poorly on few-shot learning tasks, where a classifier has toquickly generalize after seeing very few examples from each class. The generalbelief is that gradient-based optimization in high capacity classifiers requires manyiterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to trainanother learner neural network classifier in the few-shot regime. The parametriza-tion of our model allows it to learn appropriate parameter updates specifically forthe scenario where a set amount of updates will be made, while also learning ageneral initialization of the learner (classifier) network that allows for quick con-vergence of training. We demonstrate that this meta-learning model is competitivewith deep metric-learning techniques for few-shot learning.1 I NTRODUCTIONDeep learning has shown great success in a variety of tasks with large amounts of labeled data inimage classification (He et al., 2015), machine translation (Wu et al., 2016), and speech modeling(Oord et al., 2016). These achievements have relied on the fact that optimization of these deep,high-capacity models requires many iterative updates across many labeled examples. This type ofoptimization breaks down in the small data regime where we want to learn from very few labeledexamples. In this setting, rather than have one large dataset, we have a set of datasets, each with fewannotated examples per class. The motivation for this task lies not only in the fact that humans, evenchildren, can usually generalize after just one example of a given object, but also because modelsexcelling at this task would have many useful applications. Firstly, they would help alleviate datacollection as we would not require millions of labeled examples to attain reasonable performance.Furthermore, in many fields, data exhibits the characteristic of having many different classes but fewexamples per class. Models that are able to generalize from few examples would be able to capturethis type of data effectively.There seem to be two main reasons why gradient-based optimization fails in the face of few la-beled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum(Nesterov, 1983), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and ADAM (Kingma &Ba, 2014), weren’t designed specifically to perform well under the constraint of a set number ofupdates. Specifically when applied to non-convex optimization problems, with a reasonable choiceof hyperparameters these algorithms don’t have very strong guarantees of speed of convergence,beyond that they will eventually converge to a good solution after what could be many millions ofiterations. Secondly, for each separate dataset considered, the network would have to start from arandom initialization of its parameters, which considerably hurts its ability to converge to a goodsolution after a few updates. Transfer learning (Caruana, 1995; Bengio et al., 2012; Donahue et al.,2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another taskwhich has more labelled data; however, it has been observed that the benefit of a pre-trained networkgreatly decreases as the task the network was trained on diverges from the target task (Yosinski et al.,2014). 
What is needed is a systematic way to learn a beneficial common initialization that wouldWork done as an intern at Twitter. Sachin is a PhD student at Princeton University and can be reached atsachinr@princeton.edu .1Published as a conference paper at ICLR 2017serve as a good point to start training for the set of datasets being considered. This would provide thesame benefits as transfer learning, but with the guarantee that the initialization is an optimal startingpoint for fine-tuning.Previous work has suggested one manner in which to acquire quick knowledge from few examples,through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggestsframing the learning problem at two levels. The first is quick acquisition of knowledge within eachseparate task presented. This process is guided by the second, which involves slower extraction ofinformation learned across all the tasks.We present a method here that addresses the weakness of neutral networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learningsetting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learnerneural network classifier. The meta-learner captures both short-term knowledge within a task andlong-term knowledge common among all the tasks. By using an objective that directly captures anoptimization algorithm’s ability to have good generalization performance given only a set numberof updates, the meta-learner model is trained to converge a learner classifier to a good solutionquickly on each task. Additionally, the formulation of our meta-learner model allows it to learn atask-common initialization for the learner classifier, which captures fundamental knowledge sharedamong all the tasks.2 T ASK DESCRIPTIONWe first begin by detailing the meta-learning formulation we use. In the typical machine learningsetting, we are interested in a dataset Dand usually split Dso that we optimize parameters on atraining setDtrain and evaluate its generalization on the test set Dtest. In meta-learning, however,we are dealing with meta-sets Dcontaining multiple regular datasets, where each D2Dhas a splitofDtrain andDtest.We consider the k-shot,N-class classification task, where for each dataset D, the training set con-sists ofklabelled examples for each of Nclasses, meaning that Dtrain consists ofkNexamples,andDtesthas a set number of examples for evaluation. We note that previous work (Vinyals et al.,2016) has used the term episode to describe each dataset consisting of a training and test set.In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing ( Dmetatrain ,Dmetavalidation , andDmetatest, respectively). On Dmetatrain , we areinterested in training a learning procedure (the meta-learner) that can take as input one of its train-ing setsDtrain and produce a classifier (the learner) that achieves high average classification perfor-mance on its corresponding test set Dtest. Using Dmetavalidation we can perform hyper-parameterselection of the meta-learner and evaluate its generalization performance on Dmetatest.For this formulation to correspond to the few-shot learning setting, each training set in datasetsD2Dwill contain few labeled examples (we consider k= 1 ork= 5), that must be used togeneralize to good performance on the corresponding test set. 
3 MODEL
We now move to the description of our proposed model for meta-learning.

3.1 MODEL DESCRIPTION
Consider a single dataset, or episode, $D \in \mathcal{D}_{\text{meta-train}}$. Suppose we have a learner neural net classifier with parameters $\theta$ that we want to train on $D_{\text{train}}$. The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form
$$\theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t, \qquad (1)$$
where $\theta_{t-1}$ are the parameters of the learner after $t-1$ updates, $\alpha_t$ is the learning rate at time $t$, $\mathcal{L}_t$ is the loss optimized by the learner for its $t$-th update, $\nabla_{\theta_{t-1}} \mathcal{L}_t$ is the gradient of that loss with respect to parameters $\theta_{t-1}$, and $\theta_t$ is the updated parameters of the learner.

Figure 1: Example of meta-learning setup. The top represents the meta-training set $\mathcal{D}_{\text{meta-train}}$, where inside each gray box is a separate dataset that consists of the training set $D_{\text{train}}$ (left side of dashed line) and the test set $D_{\text{test}}$ (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set $\mathcal{D}_{\text{meta-test}}$ is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in $\mathcal{D}_{\text{meta-train}}$ (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters).

Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber, 1997):
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad (2)$$
if $f_t = 1$, $c_{t-1} = \theta_{t-1}$, $i_t = \alpha_t$, and $\tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t$.

Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or $c_t = \theta_t$, and the candidate cell state $\tilde{c}_t = \nabla_{\theta_{t-1}} \mathcal{L}_t$, given how valuable information about the gradient is for optimization. We define parametric forms for $i_t$ and $f_t$ so that the meta-learner can determine optimal values through the course of the updates.

Let us start with $i_t$, which corresponds to the learning rate for the updates. We let
$$i_t = \sigma\left(\mathbf{W}_I \cdot \left[\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, i_{t-1}\right] + \mathbf{b}_I\right),$$
meaning that the learning rate is a function of the current parameter value $\theta_{t-1}$, the current gradient $\nabla_{\theta_{t-1}} \mathcal{L}_t$, the current loss $\mathcal{L}_t$, and the previous learning rate $i_{t-1}$. With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence.

As for $f_t$, it seems possible that the optimal choice isn't the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optimum and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate:
$$f_t = \sigma\left(\mathbf{W}_F \cdot \left[\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, f_{t-1}\right] + \mathbf{b}_F\right).$$

Additionally, notice that we can also learn the initial value of the cell state $c_0$ for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner's update rule matches the cell state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden state update, with the exception that the forget and input gates aren't tied to sum to one.
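A minimal sketch of this update rule follows, assuming coordinate-wise gates with the inputs listed above; preprocessing and the additional LSTM layer used in the full model are omitted, and the parameter names `W_I`, `b_I`, `W_F`, `b_F` simply mirror the equations.

```python
import torch

def meta_learner_step(grad, loss, theta_prev, i_prev, f_prev, params):
    """One coordinate-wise meta-learner update: Eq. (2) with learned gates.

    grad, theta_prev, i_prev, f_prev: tensors of shape [n_coords, 1];
    the scalar loss is broadcast to every coordinate. `params` holds the
    learned weights W_I ([4, 1]), b_I, W_F ([4, 1]), b_F.
    """
    loss_col = loss.expand_as(grad)
    feats_i = torch.cat([grad, loss_col, theta_prev, i_prev], dim=1)
    feats_f = torch.cat([grad, loss_col, theta_prev, f_prev], dim=1)
    i_t = torch.sigmoid(feats_i @ params["W_I"] + params["b_I"])  # learned learning rate
    f_t = torch.sigmoid(feats_f @ params["W_F"] + params["b_F"])  # learned weight decay
    theta_t = f_t * theta_prev - i_t * grad  # cell-state update = parameter update
    return theta_t, i_t, f_t
```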
3.2 PARAMETER SHARING & PREPROCESSING
Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus, as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model and additionally has the nice property that the same update rule is used for each coordinate, but one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs $(\nabla_{\theta_{t,i}} \mathcal{L}_t, \mathcal{L}_t)$ for each dimension $i$.

Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step:
$$x \rightarrow \begin{cases} \left(\frac{\log(|x|)}{p},\ \operatorname{sgn}(x)\right) & \text{if } |x| \geq e^{-p} \\ \left(-1,\ e^{p} x\right) & \text{otherwise} \end{cases}$$
This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of $p = 10$ in the above formula worked well in our experiments.
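A possible elementwise implementation of this preprocessing, our own rendering of the formula above applied to a tensor of gradient coordinates or losses:

```python
import torch

def preprocess(x, p=10.0):
    """Preprocessing of Andrychowicz et al. (2016): map each scalar input
    (gradient coordinate or loss) to a 2-vector separating magnitude and sign."""
    big = x.abs() >= torch.exp(torch.tensor(-p))
    # clamp only guards the branch that torch.where discards when |x| < e^{-p}
    mag = torch.where(big, x.abs().clamp(min=1e-45).log() / p, -torch.ones_like(x))
    sgn = torch.where(big, x.sign(), torch.exp(torch.tensor(p)) * x)
    return torch.stack([mag, sgn], dim=-1)
```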
3.3 TRAINING
The question now is how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learning, for each dataset (episode) $D = (D_{\text{train}}, D_{\text{test}}) \in \mathcal{D}_{\text{meta-test}}$, a good meta-learner model will, given a series of learner gradients and losses on the training set $D_{\text{train}}$, suggest a series of updates for the classifier that pushes it towards good performance on the test set $D_{\text{test}}$.

Thus, to match test time conditions, when considering each dataset $D \in \mathcal{D}_{\text{meta-train}}$, the training objective we use is the loss $\mathcal{L}_{\text{test}}$ of the produced classifier on $D$'s test set $D_{\text{test}}$. While iterating over the examples in $D$'s training set $D_{\text{train}}$, at each time step $t$ the LSTM meta-learner receives $(\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t)$ from the learner (the classifier) and proposes the new set of parameters $\theta_t$. The process repeats for $T$ steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1 and the corresponding computational graph is shown in Figure 2.

3.3.1 GRADIENT INDEPENDENCE ASSUMPTION
Notice that our formulation would imply that the losses $\mathcal{L}_t$ and gradients $\nabla_{\theta_{t-1}} \mathcal{L}_t$ of the learner are dependent on the parameters of the meta-learner. Gradients on the meta-learner's parameters should normally take this dependency into account. However, as discussed by Andrychowicz et al. (2016), this complicates the computation of the meta-learner's gradients. Thus, following Andrychowicz et al. (2016), we make the simplifying assumption that these contributions to the gradients aren't important and can be ignored, which allows us to avoid taking second derivatives, a considerably expensive operation. We were still able to train the meta-learner effectively in spite of this simplifying assumption.

Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line divides examples from the training set $D_{\text{train}}$ and test set $D_{\text{test}}$. Each $(\mathbf{X}_i, \mathbf{Y}_i)$ is the $i$-th batch from the training set whereas $(\mathbf{X}, \mathbf{Y})$ is all the elements from the test set. The dashed arrows indicate that we do not back-propagate through that step when training the meta-learner. We refer to the learner as $M$, where $M(\mathbf{X}; \theta)$ is the output of learner $M$ using parameters $\theta$ for inputs $\mathbf{X}$. We also use $\nabla_t$ as a shorthand for $\nabla_{\theta_{t-1}} \mathcal{L}_t$.

3.3.2 INITIALIZATION OF META-LEARNER LSTM
When training LSTMs, it is advised to initialize the LSTM with small random weights and to set the forget gate bias to a large value so that the forget gate is initialized to be close to 1, thus enabling gradient flow (Zaremba, 2015). In addition to the forget gate bias setting, we found that we needed to initialize the input gate bias to be small so that the input gate value (and thus the learning rate) used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stability of training.
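A sketch of this gate-bias initialization; the paper only says "large" and "small", so the particular magnitudes below are our assumption:

```python
import torch

def init_meta_learner_gate_biases(params, forget_bias=5.0, input_bias=-5.0):
    """Start the forget gate near 1 and the input gate (learning rate) near 0,
    so the first meta-learner updates resemble SGD with a small step size.
    Exact bias values are illustrative, not taken from the paper."""
    with torch.no_grad():
        params["b_F"].fill_(forget_bias)   # sigmoid(5)  ~ 0.993
        params["b_I"].fill_(input_bias)    # sigmoid(-5) ~ 0.007
```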
3.4 BATCH NORMALIZATION
Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed method to stabilize and thus speed up learning of deep neural networks by reducing internal covariate shift within the learner's hidden layers. This reduction is achieved by normalizing each layer's pre-activation, by subtracting the mean and dividing by the standard deviation. During training, the mean and standard deviation are estimated using the current batch being trained on, whereas during evaluation a running average of both statistics calculated on the training set is used. We need to be careful with batch normalization for the learner network in the meta-learning setting, because we do not want to collect mean and standard deviation statistics during meta-testing in a way that allows information to leak between the different datasets (episodes) being considered. One easy way to prevent this issue is to not collect statistics at all during the meta-testing phase, but just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-training and meta-testing conditions, causing the meta-learner to learn a method of optimization that relies on batch statistics which it now does not have at meta-testing time. In order to keep the two phases as similar as possible, we found that a better strategy was to collect statistics for each dataset $D \in \mathcal{D}$ during $\mathcal{D}_{\text{meta-test}}$, but then erase the running statistics when we consider the next dataset. Thus, during meta-training, we use batch statistics for both the training and testing set, whereas during meta-testing, we use batch statistics for the training set (and to compute our running averages) but then use the running averages during testing. This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values.

Algorithm 1: Train Meta-Learner
Input: Meta-training set $\mathcal{D}_{\text{meta-train}}$, learner $M$ with parameters $\theta$, meta-learner $R$ with parameters $\Theta$.
1:  $\Theta_0 \leftarrow$ random initialization
2:  for $d = 1, \dots, n$ do
3:      $D_{\text{train}}, D_{\text{test}} \leftarrow$ random dataset from $\mathcal{D}_{\text{meta-train}}$
4:      $\theta_0 \leftarrow c_0$  ▷ Initialize learner parameters
5:      for $t = 1, \dots, T$ do
6:          $\mathbf{X}_t, \mathbf{Y}_t \leftarrow$ random batch from $D_{\text{train}}$
7:          $\mathcal{L}_t \leftarrow \mathcal{L}(M(\mathbf{X}_t; \theta_{t-1}), \mathbf{Y}_t)$  ▷ Get loss of learner on train batch
8:          $c_t \leftarrow R((\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t); \Theta_{d-1})$  ▷ Get output of meta-learner using Equation 2
9:          $\theta_t \leftarrow c_t$  ▷ Update learner parameters
10:     end for
11:     $\mathbf{X}, \mathbf{Y} \leftarrow D_{\text{test}}$
12:     $\mathcal{L}_{\text{test}} \leftarrow \mathcal{L}(M(\mathbf{X}; \theta_T), \mathbf{Y})$  ▷ Get loss of learner on test batch
13:     Update $\Theta_d$ using $\nabla_{\Theta_{d-1}} \mathcal{L}_{\text{test}}$  ▷ Update meta-learner parameters
14: end for
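The outer loop of Algorithm 1 might look as follows. This is a schematic rather than the authors' implementation: `make_learner`, `next_batch`, and the `meta_learner` interface (with learned initialization `c0`) are assumed helpers standing in for the components above. Note the `detach()` implementing the gradient independence assumption of Section 3.3.1.

```python
import torch

def train_meta_learner(meta_learner, make_learner, episodes, T, n_epochs, loss_fn, opt):
    """Sketch of Algorithm 1: train the meta-learner across sampled episodes."""
    for _ in range(n_epochs):
        for d_train, d_test in episodes():              # one episode per iteration
            theta = meta_learner.c0.clone()             # learned initialization c_0
            state = meta_learner.initial_state()
            for t in range(T):
                x, y = next_batch(d_train)
                loss = loss_fn(make_learner(theta)(x), y)
                # detach: ignore dependency of the gradient on meta-learner params
                grad = torch.autograd.grad(loss, theta)[0].detach()
                theta, state = meta_learner(grad, loss.detach(), theta, state)
            x, y = d_test
            test_loss = loss_fn(make_learner(theta)(x), y)
            opt.zero_grad()
            test_loss.backward()                        # backprop through the T updates
            opt.step()
```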
4 RELATED WORK
While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning.

4.1 META-LEARNING
Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting.

Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule.

In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc, both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method, the classifier network is directly produced rather than being fine-tuned after multiple training steps. Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance.

4.2 FEW-SHOT LEARNING
The best performing methods for few-shot learning have been mainly metric learning methods. Deep siamese networks (Koch, 2015) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al., 2016) refine this idea so that training and testing conditions match, by defining a differentiable nearest neighbor loss involving the cosine similarities of embeddings produced by a convolutional network.

5 EVALUATION
In this section, we describe the results of experiments, examining the properties of our model and comparing our method's performance against different approaches. Following Vinyals et al. (2016), we consider the $k$-shot, $N$-class classification setting where a meta-learner trains on many related but small training sets of $k$ examples for each of $N$ classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a $k$-shot, $N$-class task dataset $D = (D_{\text{train}}, D_{\text{test}}) \in \mathcal{D}$, we do the following: we first sample $N$ classes from the list of classes corresponding to the meta-set we consider. We then sample $k$ examples from each of those classes. These $k \cdot N$ examples together compose the training set $D_{\text{train}}$. Then, an additional fixed amount of the rest of the examples are sampled to yield a test set $D_{\text{test}}$. We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small.

For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a 3×3 convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a 2×2 max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function $\mathcal{L}$ is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first layer LSTM, and the regular gradient coordinates are also used by the second layer LSTM to implement the state update rule shown in (1). At each time step, the learner's loss and gradient is computed on a batch consisting of the entire training set $D_{\text{train}}$, because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25.
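The learner just described can be written down directly. The input resolution (84×84 Mini-ImageNet crops, giving 5×5 final feature maps) is our assumption, as it is not stated here:

```python
import torch.nn as nn

def make_conv_learner(n_classes, in_channels=3):
    """The 4-block conv learner of Section 5: each block is a 3x3 conv with
    32 filters, batch norm, ReLU, and 2x2 max-pooling, then a linear classifier."""
    blocks = []
    c = in_channels
    for _ in range(4):
        blocks += [nn.Conv2d(c, 32, kernel_size=3, padding=1),
                   nn.BatchNorm2d(32),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        c = 32
    # 84x84 input -> 42 -> 21 -> 10 -> 5 after four poolings (assumed resolution)
    return nn.Sequential(*blocks, nn.Flatten(), nn.Linear(32 * 5 * 5, n_classes))
```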
5.1 EXPERIMENT RESULTS
The Mini-ImageNet dataset was proposed by Vinyals et al. (2016) as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-ImageNet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al., 2016), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1. (Code can be found at https://github.com/twitter/meta-learning-lstm.)

The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset $D$, we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set. We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on the test set. We use a fixed number of updates for fine tuning and search over the learning rate and learning rate decay used during the course of these updates.

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked with * are the best results for each scenario, as well as other results with an overlapping confidence interval.

Model                       | 1-shot (5-class)  | 5-shot (5-class)
Baseline-finetune           | 28.86 ± 0.54%     | 49.79 ± 0.79%
Baseline-nearest-neighbor   | 41.08 ± 0.70%     | 51.04 ± 0.65%
Matching Network            | 43.40 ± 0.78% *   | 51.09 ± 0.71%
Matching Network FCE        | 43.56 ± 0.84% *   | 55.31 ± 0.73%
Meta-Learner LSTM (OURS)    | 43.44 ± 0.77% *   | 60.60 ± 0.71% *

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching net convolutional networks have 4 layers, each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting.

For our meta-learner, we train different models for the 1-shot and 5-shot tasks, that make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing.

We attain results that are much better than the baselines discussed and competitive with Matching Networks.
For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end as is done in the meta-learning LSTM.

5.2 VISUALIZATION OF META-LEARNER
We also visualize the optimization strategy learned by the meta-learner, in Figure 3. We can look at the $i_t$ and $f_t$ gate values in Equation 2 at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets $D_{\text{train}}$, to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight decay strategy that seems consistent across different layers. The input gate values are harder to interpret to glean the meta-learner's strategy. However, there seems to be a lot of variability between different datasets, indicating that the meta-learner isn't simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting.

Figure 3: Visualization of the input and forget values output by the meta-learner during the course of its updates. Panels (a) and (b) show the forget and input gate values for the 1-shot meta-learner; panels (c) and (d) show the same for the 5-shot meta-learner. Layers 1-4 represent the values for a randomly selected parameter from the 4 convolutional layers and layer 5 represents the values for a random parameter from the fully-connected layer. The different curves represent training steps on different datasets.

6 CONCLUSION
We described an LSTM-based model for meta-learning, which is inspired from the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner's parameters, as well as a successful mechanism for updating the learner's parameters to a given small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive to the state-of-the-art in metric learning for few-shot learning.

In this work, we focused our study on the few-shot and few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e. for few or lots of training examples and for few or lots of possible classes.
Our future work will thus consider moving towards this more challenging scenario.

ACKNOWLEDGMENTS
We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work.
BJPokH_Vg
nice paper
9: Top 15% of accepted papers, strong accept
This work presents an LSTM-based meta-learning framework to learn the optimization algorithm of another learning algorithm (here a NN). The paper is globally well written and the presentation of the main material is clear. The crux of the paper, drawing the parallel between the Robbins-Monro update rule and the LSTM update rule and exploiting it to satisfy the two main desiderata of few-shot learning (1: quick acquisition of new knowledge, 2: slower extraction of general transferable knowledge), is intriguing. Several tricks re-used from Andrychowicz et al. (2016), such as parameter sharing and normalization, and novel design choices (specific implementation of batch normalization) are well motivated. The experiments are convincing. This is a strong paper. My only concerns/questions are the following:
1. Can it be redundant to use the loss, gradient and parameters as input to the meta-learner? Did you do ablative studies to make sure simpler combinations are not enough?
2. It would be great if other architectural components of the network could be learned in a similar fashion (number of neurons, type of units, etc.). Do you have an opinion about this?
3. The related work section (mainly focused on meta-learning) is a bit shallow. Meta-learning is a rather old topic and similar approaches have been tried to solve the same problem even if they were not using LSTMs:
- Samy Bengio's PhD thesis (1989) is all about this ;-)
- Use of genetic programming for the search of a new learning rule for neural networks (S. Bengio, Y. Bengio, and J. Cloutier, 1994)
- I am convinced Schmidhuber has done something; make sure you find it and update the related work section.
Overall, I like the paper. I believe the discussed material is relevant to a wide audience at ICLR.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
r1PRvK9el
ICLR.cc/2017/conference
2017
Implicit ReasoNet: Modeling Large-Scale Structured Relationships with Shared Memory
["Yelong Shen*", "Po-Sen Huang*", "Ming-Wei Chang", "Jianfeng Gao"]
Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.
["Deep learning", "Reinforcement Learning"]
ABSTRACT
Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.

1 INTRODUCTION
Knowledge bases such as WordNet (Fellbaum, 1998), Freebase (Bollacker et al., 2008), or Yago (Suchanek et al., 2007) contain many real-world facts expressed as triples, e.g., (Bill Gates, FounderOf, Microsoft). These knowledge bases are useful for many downstream applications such as question answering (Berant et al., 2013; Yih et al., 2015) and information extraction (Mintz et al., 2009). However, despite the formidable size of knowledge bases, many important facts are still missing. For example, West et al. (2014) showed that 21% of the 100K most frequent PERSON entities have no recorded nationality in a recent version of Freebase. We seek to infer unknown relations based on the observed triples. Thus, the knowledge base completion (KBC) task has emerged as an important open research problem (Nickel et al., 2011).

Neural-network based methods have been very popular for solving the KBC task. Following Bordes et al. (2013), one of the most popular approaches for KBC is to learn vector-space representations of entities and relations during training, and then apply linear or bi-linear operations to infer the missing relations at test time. However, several recent papers demonstrate limitations of prior approaches relying upon vector-space models alone. By themselves, there is no straightforward way to capture the structured relationships between multiple triples adequately (Guu et al., 2015; Toutanova et al., 2016; Lin et al., 2015a). For example, assume that we want to fill in the missing relation for the triple (Obama, NATIONALITY, ?); a multi-step search procedure might be needed to discover the evidence in the observed triples such as (Obama, BORNIN, Hawaii) and (Hawaii, PARTOF, U.S.A.). To address this issue, Guu et al. (2015), Toutanova et al. (2016), and Lin et al. (2015a) propose different approaches of injecting structured information by directly operating on the observed triplets. Unfortunately, due to the size of knowledge bases, these newly proposed approaches suffer from some limitations, as most paths are not informative for inferring missing relations, and it is prohibitive to consider all possible paths during training time with expressive models.

In this paper, we take a different approach from prior work on KBC by addressing the challenges of performing large-scale inference through the design of a search controller and shared memory. Our inference procedure centers around the search controller, which only operates on the shared memory instead of directly manipulating the observed triples in the knowledge base.
IRNs use training data to learn to perform multi-step inference through the shared memory. First, the input module generates a representation of the query. Then, the search controller repeatedly interacts with the shared memory and checks the termination gate. After each iteration, if the termination condition is met, the model stops the search process and calls the output module to generate a prediction. The shared memory is designed to store key information about the overall structures it learned during training, and hence the search controller only needs to access the shared memory instead of operating on the observed triples.

(* Equal contribution.)

Figure 1: An IRN Architecture. (The diagram shows the input module, the search controller with its attention over the shared memory M and its termination gate, and the output module.)

There are several advantages of using IRNs. First, the cost of inference can be controlled because the search controller only needs to access the shared memory. Second, all the modules, including the search controller and memory, are jointly trained, and hence alleviate the need to inject structured relationships between instances manually. Finally, we can easily extend IRNs to other tasks that require modeling structured relationships between instances by switching the input and output modules.

The main contributions of our paper are as follows:
- We propose Implicit ReasoNets (IRNs), which use a shared memory guided by a search controller to model large-scale structured relationships implicitly.
- We evaluate IRNs and demonstrate that our proposed model achieves the state-of-the-art results on the popular FB15k benchmark, surpassing prior approaches by more than 5.7%.
- We analyze the behavior of IRNs for shortest path synthesis. We show that IRNs outperform a standard sequence-to-sequence model and execute meaningful multi-step inference.

2 REASONET FOR IMPLICIT INFERENCE
In this section, we describe the general architecture of IRNs in a way that is agnostic to KBC. IRNs are composed of four main components: an input component, an output component, a shared memory, and a search controller, as shown in Figure 1. In this section, we briefly describe each component.

Input/Output Modules: These two modules are task-dependent. The input module takes a query and converts the query into a vector representation $q$. The output module is a function $f_o$, which converts the hidden state received from the search controller ($s$) into an output $O$. We optimize the whole model using the output prediction $O$ with respect to a ground-truth target using a task-specified loss function.

Shared Memory: The shared memory is denoted as $M$. It consists of a list of memory vectors, $M = \{m_i\}_{i=1,\dots,I}$, where $m_i$ is a fixed-dimensional vector. The memory vectors are randomly initialized and automatically updated through back-propagation. The shared memory component is shared across all instances.

Search Controller: The search controller is a recurrent neural network that controls the search process by keeping internal state sequences to track the current search process and history.
The search controller uses an attention mechanism to fetch information from relevant memory vectors in $M$, and decides whether the model should output the prediction or continue to generate the next possible output.

- Internal State: The internal state of the search controller is denoted as $S$, which is a vector representation of the search process. The initial state $s_1$ is usually the vector representation of the input vector $q$. The internal state at the $t$-th time step is represented by $s_t$. The sequence of internal states is modeled by an RNN: $s_{t+1} = \text{RNN}(s_t, x_t; \theta_s)$.

- Attention to memory: The attention vector $x_t$ at the $t$-th time step is generated based on the current internal state $s_t$ and the shared memory $M$: $x_t = f_{att}(s_t, M; \theta_x)$. Specifically, the attention score $a_{t,i}$ on a memory vector $m_i$ given a state $s_t$ is computed as $a_{t,i} = \text{softmax}_{i=1,\dots,|M|}\, \lambda \cos(W_1 m_i, W_2 s_t)$, where $\lambda$ is set to 10 in our experiments and the weight matrices $W_1$ and $W_2$ are learned during training. The attention vector $x_t$ can be written as $x_t = f_{att}(s_t, M; \theta_x) = \sum_i^{|M|} a_{t,i} m_i$.

- Termination Control: The termination gate produces a stochastic random variable according to the current internal state, $t_t \sim p(\cdot \mid f_{tc}(s_t; \theta_{tc}))$. $t_t$ is a binary random variable. If $t_t$ is true, the IRN will finish the search process, and the output module will execute at time step $t$; otherwise the IRN will generate the next attention vector $x_{t+1}$ and feed it into the state network to update the next internal state $s_{t+1}$. In our experiments, the termination variable is modeled by a logistic regression: $f_{tc}(s_t; \theta_{tc}) = \text{sigmoid}(W_{tc} s_t + b_{tc})$, where the weight matrix $W_{tc}$ and bias vector $b_{tc}$ are learned during training.

Comparing IRNs to Memory Networks (MemNN) (Weston et al., 2014; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014; 2016), the biggest difference between our model and the existing frameworks is the search controller and the use of the shared memory. We build upon our previous work (Shen et al., 2016) for using a search controller module to dynamically perform multi-step inference depending on the complexity of the instance. MemNN and NTM explicitly store inputs (such as graph definitions or supporting facts) in the memory. In contrast, in IRNs, we do not explicitly store all the observed inputs in the shared memory. Instead, we directly operate on the shared memory, which models the structured relationships implicitly. We randomly initialize the memory and update it with respect to task-specific objectives. The idea of exploiting shared memory was proposed by Munkhdalai & Yu (2016) independently. Despite using the same term, the goal and the operations used by IRNs are different from those in Munkhdalai & Yu (2016), as IRNs allow the model to perform multi-step inference for each instance dynamically.
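The attention and termination computations above can be sketched as follows; tensor shapes and parameter names are our conventions:

```python
import torch
import torch.nn.functional as F

def attend_memory(s_t, memory, W1, W2, lam=10.0):
    """Attention over the shared memory M: cosine similarity between projected
    memory vectors ([I, d_m] @ W1.T) and the projected controller state,
    sharpened by lam and normalized with a softmax."""
    scores = lam * F.cosine_similarity(memory @ W1.T,
                                       (s_t @ W2.T).unsqueeze(0), dim=-1)
    a_t = F.softmax(scores, dim=0)      # attention weights over |M| vectors
    return a_t @ memory                 # x_t = sum_i a_{t,i} m_i

def termination_prob(s_t, W_tc, b_tc):
    """Logistic-regression termination gate f_tc(s_t)."""
    return torch.sigmoid(s_t @ W_tc + b_tc)
```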
2.1 STOCHASTIC INFERENCE PROCESS
The inference process of an IRN is as follows. First, the model converts a task-dependent input to a vector representation through the input module. Then, the model uses the input representation to initialize the search controller. At every time step, the search controller determines whether the process is finished by sampling from the distribution according to the termination gate. If the outcome is termination, the output module will generate a task-dependent prediction given the search controller states. If the outcome is continuation, the search controller will move on to the next time step, and create an attention vector based on the current search controller state and the shared memory. Intuitively, we design the whole process by mimicking a search procedure that iteratively finds its target through a structure and outputs its prediction when a satisfying answer is found. The detailed inference process is described in Algorithm 1.

Algorithm 1: Stochastic Inference Process in an IRN
Input: Randomly initialized shared memory $M$; input vector $q$; maximum step $T_{\max}$
Output: Output vector $o$
1: Define $s_1 = q$; $t = 1$;
2: Sample $t_t$ from the distribution $p(\cdot \mid f_{tc}(s_t; \theta_{tc}))$;
3: If $t_t$ is false, go to Step 4; otherwise go to Step 7;
4: Generate an attention vector $x_t = f_{att}(s_t, M; \theta_x)$;
5: Update the internal state $s_{t+1} = \text{RNN}(s_t, x_t; \theta_s)$;
6: Set $t = t + 1$; if $t < T_{\max}$ go to Step 2; otherwise go to Step 7;
7: Generate output $o_t = f_o(s_t; \theta_o)$;
8: Return $o = o_t$.

The inference process of an IRN is considered a Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998) in the reinforcement learning (RL) literature. The IRN produces the output vector $o_T$ at the $T$-th step, which implies termination gate variables $t_{1:T} = (t_1 = 0, t_2 = 0, \dots, t_{T-1} = 0, t_T = 1)$, and then takes prediction action $p_T$ according to the probability distribution given $o_T$. Therefore, the IRN learns a stochastic policy $\pi((t_{1:T}, p_T) \mid q; \theta)$ with parameters $\theta$ to get a distribution over termination actions and over prediction actions. The termination step $T$ varies from instance to instance. The parameters $\theta$ of the IRN are given by the parameters of the embedding matrices $W$ for the input/output module, the shared memory $M$, the attention network $\theta_x$, the search controller RNN network $\theta_s$, the output generation network $\theta_o$, and the termination gate network $\theta_{tc}$. The parameters $\theta = \{W, M, \theta_x, \theta_s, \theta_o, \theta_{tc}\}$ are trained to maximize the total expected reward that the IRN receives when interacting with the environment. The expected reward for an instance is defined as:
$$J(\theta) = \mathbb{E}_{\pi(t_{1:T}, p_T; \theta)}\left[\sum_{t=1}^{T} r_t\right]$$
The reward can only be received at the final termination step, when a prediction action $p_T$ is performed. The rewards on intermediate steps are zeros, $\{r_t = 0\}_{t=1,\dots,T-1}$.

We employ the approach from our previous work (Shen et al., 2016), the REINFORCE (Williams, 1992) based Contrastive Reward method, to maximize the expected reward. The gradient of $J$ can be written as:
$$\nabla_\theta J(\theta) = \sum_{(t_{1:T}, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta) \left[\nabla_\theta \log \pi(t_{1:T}, p_T; \theta) \left(\frac{r_T}{b_i} - 1\right)\right]$$
where $A^\dagger$ is the set of all possible episodes, and the baseline $b_i = \sum_{(t_{1:T}, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta)\, r_T$ is the expected reward on the $|A^\dagger|$ episodes for the $i$-th training instance.
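Putting the pieces together, the inference loop of Algorithm 1 might be sketched as below, reusing `attend_memory` and `termination_prob` from the previous snippet; the greedy termination rule is a deterministic stand-in for sampling at evaluation time, and the `controller`/`params` interfaces are our assumptions:

```python
import torch

def irn_inference(q, memory, controller, params, t_max=5, greedy=True):
    """Sketch of Algorithm 1: the stochastic inference loop of an IRN."""
    s_t = q
    for t in range(1, t_max + 1):
        p_term = termination_prob(s_t, params["W_tc"], params["b_tc"])
        stop = p_term > 0.5 if greedy else torch.bernoulli(p_term).bool()
        if stop or t == t_max:
            break                                   # terminate and emit output
        x_t = attend_memory(s_t, memory, params["W1"], params["W2"])
        s_t = controller(s_t, x_t)                  # RNN state update
    return params["f_o"](s_t)                       # task-dependent output module
```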
3 APPLYING IRNS TO KNOWLEDGE BASE COMPLETION
The goal of KBC tasks (Bordes et al., 2013) is to predict a head or a tail entity given the relation type and the other entity, i.e., predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$, where $?$ denotes the missing entity. For a KBC task, the input to our model is a subject entity (a head or tail entity) and a relation. The task-dependent input module first extracts the embedding vectors for the entity and relation from an embedding matrix. We then represent the query vector $q$ for an IRN as the concatenation of the two vectors. We randomly initialize the shared memory component. At each step, a training triplet is processed through the model by Algorithm 1, where no explicit path information is given. The IRN updates the shared memory implicitly with respect to the objective function.

For the task-dependent output module, we use a nonlinear projection to project the search controller state into an output vector $o$: $f_o(s_t; \theta_o) = \tanh(W_o s_t + b_o)$, where $W_o$ and $b_o$ are the weight matrix and bias vector, respectively. We define the ground-truth target (object) entity embedding as $y$, and use the $L_1$ distance measure between the output $o$ and target entity $y$, namely $d(o, y) = |o - y|_1$. We sample a set of incorrect entity embeddings $N = \{y_i^-\}_{i=1}^{|N|}$ as negative examples. The probability of selecting a prediction $\hat{y} \in D$ can be approximated as
$$p(\hat{y} \mid o) = \frac{\exp(-\gamma d(o, \hat{y}))}{\sum_{y_k \in D} \exp(-\gamma d(o, y_k))}$$
where $D = N \cup \{y\}$. We set $|N|$ and $\gamma$ to 20 and 5, respectively, for the experiments on the FB15k and WN18 datasets. The IRN performs a prediction action $p_T$ by selecting $\hat{y}$ with probability $p(\hat{y} \mid o)$. We define the reward of the prediction action as one if the ground truth entity is selected, and zero otherwise.
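The sampled softmax over $L_1$ distances can be written compactly; by our convention, index 0 of the returned distribution corresponds to the ground-truth embedding and the remaining rows are the $|N|$ sampled negatives:

```python
import torch
import torch.nn.functional as F

def prediction_distribution(o, target_emb, negative_embs, gamma=5.0):
    """Sampled softmax over L1 distances used for the KBC prediction action.

    o: output vector [d]; target_emb: [d]; negative_embs: [|N|, d].
    """
    candidates = torch.cat([target_emb.unsqueeze(0), negative_embs], dim=0)
    d = (candidates - o.unsqueeze(0)).abs().sum(dim=-1)   # L1 distances
    return F.softmax(-gamma * d, dim=0)
```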
4 EXPERIMENTAL RESULTS
In this section, we evaluate the performance of our model on the benchmark FB15k and WN18 datasets for KBC tasks (Bordes et al., 2013). These datasets contain multiple relations between head and tail entities. Given a head entity and a relation, the model produces a ranked list of the entities according to the score of the entity being the tail entity of this triple. To evaluate the ranking, we report mean rank (MR), the mean rank of the correct entity across the test examples, and hits@10, the proportion of correct entities ranked in the top-10 predictions. Lower MR or higher hits@10 indicates better prediction performance. We follow the evaluation protocol in Bordes et al. (2013) to report filtered results, where negative examples $N$ are removed from the dataset. In this case, we can avoid some negative examples being valid and ranked above the target triplet.

We use the same hyper-parameters of our model for both the FB15k and WN18 datasets. Entity embeddings (which are not shared between input and output modules) and relation embeddings are both 100-dimensional. We use the input module and output module to encode subject and object entities, respectively. There are 64 memory vectors with 200 dimensions each, initialized by random vectors with unit $L_2$-norm. We use a single-layer GRU with 200 cells as the search controller. We set the maximum inference step of the IRN to 5. We randomly initialize all model parameters, and use SGD as the training algorithm with a mini-batch size of 64. We set the learning rate to a constant number, 0.01. To prevent the model from learning a trivial solution by increasing entity embedding norms, we follow Bordes et al. (2013) to enforce the $L_2$-norm of the entity embeddings to be 1. We use hits@10 as the validation metric for the IRN. Following the work of Lin et al. (2015a), we add reverse relations into the training triplet set to increase the training data.

Following Nguyen et al. (2016), we divide the results of previous work into two groups. The first group contains the models that directly optimize a scoring function for the triples in a knowledge base without using extra information. The second group of models makes use of additional information from multi-step relations. For example, the RTransE (García-Durán et al., 2015) and PTransE (Lin et al., 2015a) models are extensions of the TransE (Bordes et al., 2013) model that explicitly explore multi-step relations in the knowledge base to regularize the trained embeddings. The NLFeat model (Toutanova et al., 2015) is a log-linear model that makes use of simple node and link features.

Table 1 presents the experimental results. According to the table, our model significantly outperforms previous baselines, regardless of whether previous approaches use additional information or not. Specifically, on FB15k, the MR of our model surpasses all previous results by 12, and our hits@10 outperforms others by 5.7%. On WN18, the IRN obtains the highest hits@10 while maintaining similar MR results compared to previous work. (Nguyen et al. (2016) reported two results on WN18, where the first one is obtained by choosing to optimize hits@10 on the validation set, and the second one is obtained by choosing to optimize MR on the validation set. We list both of them in Table 1.)

Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k.

Model                                | Additional Information | WN18 Hits@10 (%) | WN18 MR   | FB15k Hits@10 (%) | FB15k MR
SE (Bordes et al., 2011)             | NO                     | 80.5             | 985       | 39.8              | 162
Unstructured (Bordes et al., 2014)   | NO                     | 38.2             | 304       | 6.3               | 979
TransE (Bordes et al., 2013)         | NO                     | 89.2             | 251       | 47.1              | 125
TransH (Wang et al., 2014)           | NO                     | 86.7             | 303       | 64.4              | 87
TransR (Lin et al., 2015b)           | NO                     | 92.0             | 225       | 68.7              | 77
CTransR (Lin et al., 2015b)          | NO                     | 92.3             | 218       | 70.2              | 75
KG2E (He et al., 2015)               | NO                     | 93.2             | 348       | 74.0              | 59
TransD (Ji et al., 2015)             | NO                     | 92.2             | 212       | 77.3              | 91
TATEC (García-Durán et al., 2015)    | NO                     | -                | -         | 76.7              | 58
NTN (Socher et al., 2013)            | NO                     | 66.1             | -         | 41.4              | -
DISTMULT (Yang et al., 2014)         | NO                     | 94.2             | -         | 57.7              | -
STransE (Nguyen et al., 2016)        | NO                     | 94.7 (93)        | 244 (206) | 79.7              | 69
RTransE (García-Durán et al., 2015)  | Path                   | -                | -         | 76.2              | 50
PTransE (Lin et al., 2015a)          | Path                   | -                | -         | 84.6              | 58
NLFeat (Toutanova et al., 2015)      | Node + Link Features   | 94.3             | -         | 87.0              | -
Random Walk (Wei et al., 2016)       | Path                   | 94.8             | -         | 74.7              | -
IRN                                  | NO                     | 95.3             | 249       | 92.7              | 38

To better understand the behavior of IRNs, we report the results of IRNs with different memory sizes and different $T_{\max}$ on FB15k in Table 2. We find the performance of IRNs increases significantly as the number of inference steps increases. Note that an IRN with $T_{\max} = 1$ is an IRN without the shared memory. Interestingly, given $T_{\max} = 5$, IRNs are not sensitive to memory sizes. In particular, larger memory always improves the MR score, but the best hits@10 is obtained with $|M| = 64$ memory vectors. A possible reason is that the best memory size is determined by the complexity of the task.

Table 2: The performance of IRNs with different memory sizes and inference steps on FB15k.

Number of memory vectors | Maximum inference step | Hits@10 (%) | MR
|M| = 64                 | T_max = 1              | 80.7        | 55.7
|M| = 64                 | T_max = 2              | 87.4        | 49.2
|M| = 64                 | T_max = 5              | 92.7        | 38.0
|M| = 64                 | T_max = 8              | 88.8        | 32.9
|M| = 32                 | T_max = 5              | 90.1        | 38.7
|M| = 64                 | T_max = 5              | 92.7        | 38.0
|M| = 128                | T_max = 5              | 92.2        | 36.1
|M| = 512                | T_max = 5              | 90.0        | 35.3
|M| = 4096               | T_max = 5              | 88.7        | 34.7

We analyze hits@10 results on FB15k with respect to the relation categories. Following the evaluation in Bordes et al. (2013), we evaluate the performance on four types of relations: 1-1 if a head entity can appear with at most one tail entity, 1-Many if a head entity can appear with many tail entities, Many-1 if multiple heads can appear with the same tail entity, and Many-Many if multiple head entities can appear with multiple tail entities. The detailed results are shown in Table 3.
The IRN significantly improves the hits@10 results in the Many-1 category on predicting the head entity (18.8%), the 1-Many category on predicting the tail entity (16.5%), and the Many-Many category (over 8% on average).

Table 3: Hits@10 (%) in the relation category on FB15k. (M stands for Many.)

                                   | Predicting head h         | Predicting tail t
Model                              | 1-1  | 1-M  | M-1  | M-M  | 1-1  | 1-M  | M-1  | M-M
SE (Bordes et al., 2011)           | 35.6 | 62.6 | 17.2 | 37.5 | 34.9 | 14.6 | 68.3 | 41.3
Unstructured (Bordes et al., 2014) | 34.5 | 2.5  | 6.1  | 6.6  | 34.3 | 4.2  | 1.9  | 6.6
TransE (Bordes et al., 2013)       | 43.7 | 65.7 | 18.2 | 47.2 | 43.7 | 19.7 | 66.7 | 50.0
TransH (Wang et al., 2014)         | 66.8 | 87.6 | 28.7 | 64.5 | 65.5 | 39.8 | 83.3 | 67.2
TransR (Lin et al., 2015b)         | 78.8 | 89.2 | 34.1 | 69.2 | 79.2 | 37.4 | 90.4 | 72.1
CTransR (Lin et al., 2015b)        | 81.5 | 89.0 | 34.7 | 71.2 | 80.8 | 38.6 | 90.1 | 73.8
KG2E (He et al., 2015)             | 92.3 | 94.6 | 66.0 | 69.6 | 92.6 | 67.9 | 94.4 | 73.4
TransD (Ji et al., 2015)           | 86.1 | 95.5 | 39.8 | 78.5 | 85.4 | 50.6 | 94.4 | 81.2
TATEC (García-Durán et al., 2015)  | 79.3 | 93.2 | 42.3 | 77.2 | 78.5 | 51.5 | 92.7 | 80.7
STransE (Nguyen et al., 2016)      | 82.8 | 94.2 | 50.4 | 80.1 | 82.4 | 56.9 | 93.4 | 83.1
PTransE (Lin et al., 2015a)        | 91.0 | 92.8 | 60.9 | 83.8 | 91.2 | 74.0 | 88.9 | 86.4
IRN                                | 87.2 | 96.1 | 84.8 | 92.9 | 86.9 | 90.5 | 95.3 | 94.1

To analyze the behavior of IRNs, we pick some examples for the tail entity prediction in Table 4. Interestingly, we observed that the model can gradually increase the ranking score of the correct tail entity during the inference process.

Table 4: Test examples in the FB15k dataset; given a head entity and a relation, the IRN predicts the tail entity with multiple search steps.

Input: (Dean Koontz, /PEOPLE/PERSON/PROFESSION)    Target: Film Producer
Step | Termination Prob. | Rank | Predicted top-3 entities
1    | 0.018             | 9    | Author, TV Director, Songwriter
2    | 0.052             | 7    | Actor, Singer, Songwriter
3    | 0.095             | 4    | Actor, Singer, Songwriter
4    | 0.132             | 4    | Actor, Singer, Songwriter
5    | 0.702             | 3    | Actor, Singer, Film Producer

Input: (War and Peace, /FILM/FILM/PRODUCED_BY)    Target: Carlo Ponti
Step | Termination Prob. | Rank | Predicted top-3 entities
1    | 0.001             | 13   | Scott Rudin, Stephen Woolley, Hal B. Wallis
2    | 5.8E-13           | 7    | Billy Wilder, William Wyler, Elia Kazan
3    | 0.997             | 1    | Carlo Ponti, King Vidor, Hal B. Wallis

5 ANALYSIS: APPLYING IRNS TO A SHORTEST PATH SYNTHESIS TASK
We construct a synthetic task, shortest path synthesis, to evaluate the inference capability over a shared memory. The motivations for applying our model to this task are as follows. First, we want to evaluate IRNs on another task requiring multi-step inference. Second, we select the sequence generation task so that we are able to analyze the inference capability of IRNs in detail.

In the shortest path synthesis task, as illustrated in Figure 2, a training instance consists of a start node and an end node (e.g., 215 ⇝ 493) of an underlying weighted directed graph that is unknown to models. The output of each instance is the shortest path between the given start and end nodes of the underlying graph (e.g., 215 → 101 → 493). Specifically, models can only observe the start-end node pairs as input and their shortest path as output. The whole graph is unknown to the models and the edge weights are not revealed in the training data. At test time, a path sequence is considered correct if it connects the start node and the end node of the underlying graph, and the cost of the predicted path is the same as that of the optimal path.

Note that the task is very difficult and cannot be solved by dynamic programming algorithms, since the weights on the edges are not revealed to the algorithms or the models.
To recover some of the shortest paths at test time, the model needs to infer the correct path from the observed instances. For example, assume that we observe two instances in the training data, "A ⇝ D: A → B → G → D" and "B ⇝ E: B → C → E". In order to answer the shortest path between A and E, the model needs to infer that "A → B → C → E" is a possible path between A and E. If there are multiple possible paths, the model has to decide which one is the shortest using statistical information.

In the experiments, we construct a graph with 500 nodes and we randomly assign two nodes to form an edge. We split 20,000 instances for training, 10,000 instances for validation, and 10,000 instances for testing. We create the training and testing instances carefully so that the model needs to perform inference to recover the correct path. We describe the details of the graph and data construction in the appendix. A sub-graph of the data is shown in Figure 2.

For the settings of the IRN, we switch the output module to a GRU decoder for a sequence generation task. We assign reward $r_T = 1$ if all the prediction symbols are correct and 0 otherwise. We use a 64-dimensional embedding vector for input symbols, a GRU controller with 128 cells, and a GRU decoder with 128 cells. We set the maximum inference step $T_{\max}$ to 5.

Figure 2: An example of the shortest path synthesis dataset, given an input "215 ⇝ 493" (answer: 215 → 101 → 493). Note that we only show the nodes that are related to this example here. The corresponding termination probabilities and prediction results are shown in the table below; the model terminates at step 5.

Step | Termination Probability | Distance | Predictions
1    | 0.001                   | N/A      | 215 → 158 → 89 → 458 → 493
2    | 0                       | N/A      | 215 → 479 → 277 → 353 → 493
3    | 0                       | N/A      | 215 → 49 → 493
4    | 0                       | 0.77     | 215 → 140 → 493
5    | 0.999                   | 0.70     | 215 → 101 → 493

We compare the IRN with two baseline approaches: dynamic programming without edge-weight information and a standard sequence-to-sequence model (Sutskever et al., 2014) using a similar parameter size to our model. Without knowing the edge weights, dynamic programming only recovers 589 correct paths at test time. The sequence-to-sequence model recovers 904 correct paths. The IRN outperforms both baselines, recovering 1,319 paths. Furthermore, 76.9% of the predicted paths from the IRN are valid paths, where a path is valid if it connects the start and end nodes of the underlying graph. In contrast, only 69.1% of the predicted paths from the sequence-to-sequence model are valid.

To further understand the inference process of the IRN, Figure 2 shows the inference process of a test instance. Interestingly, to make the correct prediction on this instance, the model has to perform a fairly complicated inference. (In this example, to find the right path, the model needs to search over the observed instances "215 ⇝ 448: 215 → 101 → 448" and "76 ⇝ 493: 76 → 308 → 101 → 493", and to figure out that the distance of "140 → 493" is longer than that of "101 → 493"; there are four shortest paths between 101 → 493 and three shortest paths between 140 → 493 in the training set.) We observe that the model cannot find a connected path in the first three steps. Finally, the model finds a valid path at the fourth step and predicts the correct shortest path sequence at the fifth step.
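The correctness criterion used at test time (a predicted path must connect the endpoints in the hidden graph and match the optimal cost) can be checked as follows; the graph representation as a dict of weighted edges is our assumption:

```python
def path_is_correct(path, graph, start, end, optimal_cost):
    """A predicted path counts as correct if it actually connects start and
    end in the hidden graph and its total cost equals the optimal cost.
    `graph` maps a directed edge (u, v) to its weight."""
    if not path or path[0] != start or path[-1] != end:
        return False
    cost = 0.0
    for u, v in zip(path, path[1:]):
        if (u, v) not in graph:       # invalid path: uses a non-existent edge
            return False
        cost += graph[(u, v)]
    return abs(cost - optimal_cost) < 1e-9
```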
6 RELATED WORK
Link Prediction and Knowledge Base Completion. Given that $r$ is a relation, $h$ is the head entity, and $t$ is the tail entity, most of the embedding models for link prediction focus on finding a scoring function $f_r(h, t)$ that represents the implausibility of a triple (Bordes et al., 2011; 2014; 2013; Wang et al., 2014; Ji et al., 2015; Nguyen et al., 2016). In many studies, the scoring function $f_r(h, t)$ is linear or bi-linear. For example, in TransE (Bordes et al., 2013), the function is implemented as $f_r(h, t) = \|h + r - t\|$, where $h$, $r$ and $t$ are the corresponding vector representations.

Recently, different studies (Guu et al., 2015; Lin et al., 2015a; Toutanova et al., 2016) demonstrate the importance for models to also learn from multi-step relations. Learning from multi-step relations injects the structured relationships between triples into the model. However, this also poses a technical challenge of considering exponential numbers of multi-step relationships. Prior approaches address this issue by designing path-mining algorithms (Lin et al., 2015a) or considering all possible paths using a dynamic programming algorithm with the restriction of using linear or bi-linear models only (Toutanova et al., 2016). Toutanova & Chen (2015) show the effectiveness of using simple node and link features that encode structured information on FB15k and WN18. In our work, the IRN outperforms prior results and shows that similar information can be captured by the model without explicitly designing features.

Studies such as Riedel et al. (2013) show that incorporating textual information can further improve knowledge base completion tasks. It would be interesting to incorporate information from outside the knowledge bases into our model in the future.

Neural Frameworks. Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have been shown to be successful in many applications such as machine translation and conversation modeling (Sordoni et al., 2015). While sequence-to-sequence models are powerful, recent work has shown the necessity of incorporating an external memory to perform inference in simple algorithmic tasks (Graves et al., 2014; 2016).

7 CONCLUSION
In this paper, we propose Implicit ReasoNets (IRNs), which perform inference over a shared memory that models large-scale structured relationships implicitly. The inference process is guided by a search controller that accesses the memory shared across instances. We demonstrate and analyze the multi-step inference capability of IRNs on knowledge base completion tasks and a shortest path synthesis task. Our model, without using any explicit knowledge base information in the inference procedure, outperforms all prior approaches on the popular FB15k benchmark by more than 5.7%.

For future work, we aim to further extend IRNs in two ways. First, inspired by Ribeiro et al. (2016), we would like to develop techniques to generate human-understandable reasoning interpretations from the shared memory. Second, we plan to apply IRNs to infer relationships in unstructured data such as natural language. For example, given a natural language query such as "are rabbits animals?", the model can infer a natural language answer implicitly in the shared memory without performing inference directly on top of a huge amount of observed sentences such as "all mammals are animals" and "rabbits are animals".
We believe the ability to perform inference implicitly is crucial for modeling large-scale structured relationships.

ACKNOWLEDGMENTS

We thank Scott Wen-Tau Yih, Kristina Toutanova, Jian Tang and Zachary Lipton for their thoughtful feedback and discussions.
BJF_A9INl
Interesting paper
6: Marginally above acceptance threshold
In this paper, the authors propose an Implicit ReasoNet model for knowledge base completion. The proposed model performs inference implicitly via a search controller and shared memory. The approach demonstrates promising results on the FB15k benchmark dataset.

Pros:
- The proposed approach demonstrates strong performance on the FB15k dataset.
- The idea of using a shared memory for knowledge base completion is new and interesting.
- The proposed approach is general and can be applied to various tasks.

Cons:
- There is no qualitative analysis of the results, and it is hard to see why the proposed approach works on the knowledge base completion task.
- The introduction section can be improved. Specifically, the authors should motivate the "shared memory" more in the introduction and explain how it differs from existing methods that use "unshared memory" for knowledge base completion. Similarly, the function of the search controller is unclear in the introduction, as it is unclear what "search" means in the context of knowledge base completion. The concepts of shared memory and search controller only made sense to me after reading through Section 2.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
r1PRvK9el
ICLR.cc/2017/conference
2017
Implicit ReasoNet: Modeling Large-Scale Structured Relationships with Shared Memory
["Yelong Shen*", "Po-Sen Huang*", "Ming-Wei Chang", "Jianfeng Gao"]
Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure does not operate on top of observed instances, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.
["Deep learning", "Reinforcement Learning"]
ABSTRACT

Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure does not operate on top of observed instances, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.

1 INTRODUCTION

Knowledge bases such as WordNet (Fellbaum, 1998), Freebase (Bollacker et al., 2008), or Yago (Suchanek et al., 2007) contain many real-world facts expressed as triples, e.g., (Bill Gates, FounderOf, Microsoft). These knowledge bases are useful for many downstream applications such as question answering (Berant et al., 2013; Yih et al., 2015) and information extraction (Mintz et al., 2009). However, despite the formidable size of knowledge bases, many important facts are still missing. For example, West et al. (2014) showed that 21% of the 100K most frequent PERSON entities have no recorded nationality in a recent version of Freebase. We seek to infer unknown relations based on the observed triples. Thus, knowledge base completion (KBC) has emerged as an important open research problem (Nickel et al., 2011).

Neural-network based methods have been very popular for solving the KBC task. Following Bordes et al. (2013), one of the most popular approaches for KBC is to learn vector-space representations of entities and relations during training, and then apply linear or bi-linear operations to infer the missing relations at test time. However, several recent papers demonstrate the limitations of prior approaches relying on vector-space models alone: by themselves, they provide no straightforward way to capture the structured relationships between multiple triples adequately (Guu et al., 2015; Toutanova et al., 2016; Lin et al., 2015a). For example, assume that we want to fill in the missing relation for the triple (Obama, NATIONALITY, ?); a multi-step search procedure might be needed to discover the evidence in observed triples such as (Obama, BORNIN, Hawaii) and (Hawaii, PARTOF, U.S.A). To address this issue, Guu et al. (2015), Toutanova et al. (2016), and Lin et al. (2015a) propose different approaches for injecting structured information by directly operating on the observed triples. Unfortunately, due to the size of knowledge bases, these newly proposed approaches suffer from some limitations, as most paths are not informative for inferring missing relations, and it is prohibitive to consider all possible paths during training with expressive models.

In this paper, we take a different approach from prior work on KBC by addressing the challenges of performing large-scale inference through the design of a search controller and a shared memory. Our inference procedure centers around the search controller, which only operates on the shared memory instead of directly manipulating the observed triples in the knowledge base. 
IRNs use training data to learn to perform multi-step inference through the shared memory. (*Equal contribution.)

[Figure 1: An IRN architecture: the input module produces the query representation q; the search controller maintains internal states, attends over the shared memory M, and checks the termination gate at each step; upon termination, the output module produces the prediction.]

First, the input module generates a representation of the query. Then, the search controller repeatedly interacts with the shared memory and checks the termination gate. After each iteration, if the termination condition is met, the model stops the search process and calls the output module to generate a prediction. The shared memory is designed to store key information about the overall structures learned during training, and hence the search controller only needs to access the shared memory instead of operating on the observed triples.

There are several advantages to using IRNs. First, the cost of inference can be controlled, because the search controller only needs to access the shared memory. Second, all the modules, including the search controller and memory, are jointly trained, which alleviates the need to inject structured relationships between instances manually. Finally, we can easily extend IRNs to other tasks that require modeling structured relationships between instances by switching the input and output modules.

The main contributions of our paper are as follows:
- We propose Implicit ReasoNets (IRNs), which use a shared memory guided by a search controller to model large-scale structured relationships implicitly.
- We evaluate IRNs and demonstrate that our proposed model achieves state-of-the-art results on the popular FB15k benchmark, surpassing prior approaches by more than 5.7%.
- We analyze the behavior of IRNs for shortest path synthesis. We show that IRNs outperform a standard sequence-to-sequence model and execute meaningful multi-step inference.

2 REASONET FOR IMPLICIT INFERENCE

In this section, we describe the general architecture of IRNs in a way that is agnostic to KBC. IRNs are composed of four main components: an input component, an output component, a shared memory, and a search controller, as shown in Figure 1. We briefly describe each component below.

Input/Output Modules: These two modules are task-dependent. The input module takes a query and converts it into a vector representation $q$. The output module is a function $f_o$ that converts the hidden state received from the search controller ($s$) into an output $O$. We optimize the whole model using the output prediction $O$ with respect to a ground-truth target, using a task-specific loss function.

Shared Memory: The shared memory is denoted as $M$. It consists of a list of memory vectors, $M = \{m_i\}_{i=1,\dots,I}$, where each $m_i$ is a fixed-dimensional vector. The memory vectors are randomly initialized and automatically updated through back-propagation. The shared memory component is shared across all instances.

Search Controller: The search controller is a recurrent neural network that controls the search process by keeping internal state sequences to track the current search process and history. 
The search controller uses an attention mechanism to fetch information from relevant memory vectors in $M$, and decides whether the model should output the prediction or continue to generate the next possible output.

Internal State: The internal state of the search controller is denoted as $S$, a vector representation of the search process. The initial state $s_1$ is usually the vector representation of the input $q$. The internal state at the $t$-th time step is represented by $s_t$. The sequence of internal states is modeled by an RNN: $s_{t+1} = \mathrm{RNN}(s_t, x_t; \theta_s)$.

Attention to memory: The attention vector $x_t$ at the $t$-th time step is generated based on the current internal state $s_t$ and the shared memory $M$: $x_t = f_{att}(s_t, M; \theta_x)$. Specifically, the attention score $a_{t,i}$ on a memory vector $m_i$ given a state $s_t$ is computed as $a_{t,i} = \mathrm{softmax}_{i=1,\dots,|M|}\,\lambda \cos(W_1 m_i, W_2 s_t)$, where $\lambda$ is set to 10 in our experiments and the weight matrices $W_1$ and $W_2$ are learned during training. The attention vector can then be written as $x_t = f_{att}(s_t, M; \theta_x) = \sum_i^{|M|} a_{t,i} m_i$.

Termination Control: The termination gate produces a stochastic random variable according to the current internal state, $t_t \sim p(\cdot\,|\,f_{tc}(s_t; \theta_{tc}))$. $t_t$ is a binary random variable: if $t_t$ is true, the IRN finishes the search process and the output module executes at time step $t$; otherwise the IRN generates the next attention vector $x_{t+1}$ and feeds it into the state network to update the next internal state $s_{t+1}$. In our experiments, the termination variable is modeled by a logistic regression: $f_{tc}(s_t; \theta_{tc}) = \mathrm{sigmoid}(W_{tc} s_t + b_{tc})$, where the weight matrix $W_{tc}$ and bias vector $b_{tc}$ are learned during training.
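As an illustration of the controller mechanics above, the following is a minimal NumPy sketch of a single search-controller step: cosine attention over the shared memory, sharpened by $\lambda$, followed by the logistic termination gate. The shapes and random parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def controller_step(s_t, M, W1, W2, W_tc, b_tc, lam=10.0):
    """One IRN search-controller step: attention read over the shared
    memory M, plus the termination probability from the current state."""
    pm = M @ W1.T            # projected memory vectors, shape (|M|, d)
    ps = W2 @ s_t            # projected state, shape (d,)
    cos = (pm @ ps) / (np.linalg.norm(pm, axis=1) * np.linalg.norm(ps) + 1e-8)
    a_t = softmax(lam * cos)                 # attention weights a_{t,i}
    x_t = a_t @ M                            # x_t = sum_i a_{t,i} m_i
    p_term = 1.0 / (1.0 + np.exp(-(W_tc @ s_t + b_tc)))   # sigmoid gate
    return x_t, a_t, p_term

# Toy usage with random (untrained) parameters; shapes only.
rng = np.random.default_rng(0)
n_mem, d_m, d_s, d_p = 64, 200, 200, 64
s_t, M = rng.normal(size=d_s), rng.normal(size=(n_mem, d_m))
x_t, a_t, p_term = controller_step(
    s_t, M,
    W1=rng.normal(size=(d_p, d_m)), W2=rng.normal(size=(d_p, d_s)),
    W_tc=rng.normal(size=d_s), b_tc=0.0)
```

With $\lambda = 10$ the softmax is sharply peaked, so the read vector $x_t$ is typically dominated by a small number of memory slots.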
Comparing IRNs to Memory Networks (MemNN) (Weston et al., 2014; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014; 2016), the biggest difference between our model and these existing frameworks is the search controller and the use of the shared memory. We build upon our previous work (Shen et al., 2016) in using a search controller module to dynamically perform multi-step inference depending on the complexity of the instance. MemNN and NTM explicitly store inputs (such as graph definitions or supporting facts) in memory. In contrast, IRNs do not explicitly store the observed inputs in the shared memory; instead, we operate directly on the shared memory, which models the structured relationships implicitly. We randomly initialize the memory and update it with respect to task-specific objectives. The idea of exploiting a shared memory was proposed independently by Munkhdalai & Yu (2016). Despite using the same term, the goal and operations of IRNs differ, as IRNs allow the model to perform multi-step inference dynamically for each instance.

2.1 STOCHASTIC INFERENCE PROCESS

The inference process of an IRN is as follows. First, the model converts a task-dependent input into a vector representation through the input module. Then, the model uses the input representation to initialize the search controller. At every time step, the search controller determines whether the process is finished by sampling from the distribution given by the termination gate. If the outcome is termination, the output module generates a task-dependent prediction given the search controller state. If the outcome is continuation, the search controller moves on to the next time step and creates an attention vector based on the current search controller state and the shared memory. Intuitively, we design the whole process to mimic a search procedure that iteratively finds its target through a structure and outputs its prediction when a satisfying answer is found. The detailed inference process is described in Algorithm 1.

Algorithm 1: Stochastic Inference Process in an IRN
Input: Randomly initialized shared memory $M$; input vector $q$; maximum step $T_{max}$
Output: Output vector $o$
1. Define $s_1 = q$; $t = 1$.
2. Sample $t_t$ from the distribution $p(\cdot\,|\,f_{tc}(s_t; \theta_{tc}))$.
3. If $t_t$ is false, go to Step 4; otherwise go to Step 7.
4. Generate an attention vector $x_t = f_{att}(s_t, M; \theta_x)$.
5. Update the internal state $s_{t+1} = \mathrm{RNN}(s_t, x_t; \theta_s)$.
6. Set $t = t + 1$; if $t < T_{max}$, go to Step 2; otherwise go to Step 7.
7. Generate the output $o_t = f_o(s_t; \theta_o)$.
8. Return $o = o_t$.

The inference process of an IRN can be considered a Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998) in the reinforcement learning (RL) literature. The IRN produces the output vector $o_T$ at the $T$-th step, which implies the termination gate variables $t_{1:T} = (t_1 = 0, t_2 = 0, \dots, t_{T-1} = 0, t_T = 1)$, and then takes a prediction action $p_T$ according to the probability distribution given $o_T$. Therefore, the IRN learns a stochastic policy $\pi((t_{1:T}, p_T)\,|\,q; \theta)$ with parameters $\theta$, giving a distribution over termination actions and over prediction actions. The termination step $T$ varies from instance to instance. The parameters $\theta$ of the IRN are given by the parameters of the embedding matrices $W$ for the input/output modules, the shared memory $M$, the attention network $\theta_x$, the search controller RNN network $\theta_s$, the output generation network $\theta_o$, and the termination gate network $\theta_{tc}$. The parameters $\theta = \{W, M, \theta_x, \theta_s, \theta_o, \theta_{tc}\}$ are trained to maximize the total expected reward that the IRN receives when interacting with the environment. The expected reward for an instance is defined as:

$$J(\theta) = \mathbb{E}_{\pi(t_{1:T}, p_T; \theta)}\left[\sum_{t=1}^{T} r_t\right]$$

The reward can only be received at the final termination step, when a prediction action $p_T$ is performed; the rewards at intermediate steps are zero, $\{r_t = 0\}_{t=1,\dots,T-1}$.

We employ the approach from our previous work (Shen et al., 2016), a REINFORCE (Williams, 1992) based Contrastive Reward method, to maximize the expected reward. The gradient of $J$ can be written as:

$$\nabla_\theta J(\theta) = \sum_{(t_{1:T}, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta)\left[\nabla_\theta \log \pi(t_{1:T}, p_T; \theta)\left(\frac{r_T}{b_i} - 1\right)\right]$$

where $A^\dagger$ is the set of all possible episodes, and the baseline $b_i = \sum_{(t_{1:T}, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta)\, r_T$ is the expected reward over the $|A^\dagger|$ episodes for the $i$-th training instance.
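To complement Algorithm 1, here is a hedged Python sketch of the stochastic inference loop. The callbacks `terminate_prob`, `attend`, `rnn_update`, and `f_o` are assumed to close over the learned parameters and the shared memory $M$; the REINFORCE-based training step is not shown.

```python
import numpy as np

def irn_inference(q, terminate_prob, attend, rnn_update, f_o, T_max=5,
                  rng=np.random.default_rng(0)):
    """Sketch of Algorithm 1: terminate_prob(s) -> float in [0, 1],
    attend(s) -> attention read x, rnn_update(s, x) -> next state,
    f_o(s) -> output vector."""
    s_t = q                                     # Step 1: s_1 = q
    for _ in range(T_max - 1):                  # at most T_max internal states
        if rng.random() < terminate_prob(s_t):  # Steps 2-3: sample the gate
            break                               # terminated: produce the output
        x_t = attend(s_t)                       # Step 4: attention over memory
        s_t = rnn_update(s_t, x_t)              # Step 5: next internal state
    return f_o(s_t)                             # Steps 7-8: o = f_o(s_t)
```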
3 APPLYING IRNs TO KNOWLEDGE BASE COMPLETION

The goal of KBC tasks (Bordes et al., 2013) is to predict a head or a tail entity given the relation type and the other entity, i.e., predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$, where $?$ denotes the missing entity. For a KBC task, the input to our model is a subject entity (a head or tail entity) and a relation. The task-dependent input module first extracts the embedding vectors for the entity and relation from an embedding matrix. We then represent the query vector $q$ for the IRN as the concatenation of the two vectors. We randomly initialize the shared memory component. At each step, a training triple is processed through the model by Algorithm 1, where no explicit path information is given. The IRN updates the shared memory implicitly with respect to the objective function.

For the task-dependent output module, we use a nonlinear projection to map the search controller state to an output vector $o$: $f_o(s_t; \theta_o) = \tanh(W_o s_t + b_o)$, where $W_o$ and $b_o$ are the weight matrix and bias vector, respectively. We define the ground-truth target (object) entity embedding as $y$, and use the $L_1$ distance between the output $o$ and the target entity $y$, namely $d(o, y) = \|o - y\|_1$. We sample a set of incorrect entity embeddings $N = \{y_i\}_{i=1}^{|N|}$ as negative examples. The probability of selecting a prediction $\hat{y} \in D$ can then be approximated as

$$p(\hat{y}\,|\,o) = \frac{\exp(-\gamma\, d(o, \hat{y}))}{\sum_{y_k \in D} \exp(-\gamma\, d(o, y_k))}$$

where $D = N \cup \{y\}$. We set $|N|$ and $\gamma$ to 20 and 5, respectively, for the experiments on the FB15k and WN18 datasets. The IRN performs a prediction action $p_T$, selecting $\hat{y}$ with probability $p(\hat{y}\,|\,o)$. We define the reward of the prediction action as one if the ground-truth entity is selected, and zero otherwise.
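The scoring rule above is easy to state in code; below is a small sketch of $p(\hat{y}\,|\,o)$ with the paper's settings $\gamma = 5$ and $|N| = 20$, using random embeddings as stand-ins for trained ones.

```python
import numpy as np

def prediction_probs(o, candidates, gamma=5.0):
    """Softmax over negated, scaled L1 distances between the output o and
    each candidate entity embedding (one row per candidate)."""
    d = np.abs(candidates - o).sum(axis=1)   # d(o, y_k) = ||o - y_k||_1
    z = -gamma * d
    z -= z.max()                             # numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
o = rng.normal(size=100)                     # output of f_o
y = rng.normal(size=(1, 100))                # ground-truth embedding
negatives = rng.normal(size=(20, 100))       # |N| = 20 sampled negatives
D = np.vstack([y, negatives])                # D = N ∪ {y}
probs = prediction_probs(o, D)               # probs[0] approximates p(y | o)
```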
4 EXPERIMENTAL RESULTS

In this section, we evaluate the performance of our model on the benchmark FB15k and WN18 datasets for KBC tasks (Bordes et al., 2013). These datasets contain multiple relations between head and tail entities. Given a head entity and a relation, the model produces a ranked list of entities according to the score of each entity being the tail entity of the triple. To evaluate the ranking, we report the mean rank (MR), the mean rank of the correct entity across the test examples, and hits@10, the proportion of correct entities ranked in the top-10 predictions. A lower MR or a higher hits@10 indicates better prediction performance. We follow the evaluation protocol in Bordes et al. (2013) and report filtered results, where negative examples $N$ are removed from the dataset; in this way, we avoid negative examples that are themselves valid triples being ranked above the target triple.

We use the same hyper-parameters for both the FB15k and WN18 datasets. Entity embeddings (which are not shared between the input and output modules) and relation embeddings are both 100-dimensional. We use the input module and output module to encode subject and object entities, respectively. There are 64 memory vectors with 200 dimensions each, initialized by random vectors with unit $L_2$-norm. We use a single-layer GRU with 200 cells as the search controller. We set the maximum inference step of the IRN to 5. We randomly initialize all model parameters and use SGD as the training algorithm with a mini-batch size of 64. We set the learning rate to a constant, 0.01. To prevent the model from learning a trivial solution by increasing the entity embedding norms, we follow Bordes et al. (2013) and constrain the $L_2$-norm of the entity embeddings to 1. We use hits@10 as the validation metric for the IRN. Following Lin et al. (2015a), we add reverse relations to the training triple set to increase the training data.

Following Nguyen et al. (2016), we divide the results of previous work into two groups. The first group contains models that directly optimize a scoring function for the triples in a knowledge base without using extra information. The second group of models makes use of additional information from multi-step relations. For example, the RTransE (García-Durán et al., 2015) and PTransE (Lin et al., 2015a) models extend the TransE (Bordes et al., 2013) model by explicitly exploring multi-step relations in the knowledge base to regularize the trained embeddings. The NLFeat model (Toutanova et al., 2015) is a log-linear model that makes use of simple node and link features.

Table 1 presents the experimental results. According to the table, our model significantly outperforms previous baselines, regardless of whether they use additional information. Specifically, on FB15k, the MR of our model surpasses all previous results by 12, and our hits@10 outperforms others by 5.7%. On WN18, the IRN obtains the highest hits@10 while maintaining an MR similar to previous work.¹

¹ Nguyen et al. (2016) reported two results on WN18: the first is obtained by optimizing hits@10 on the validation set, and the second (in parentheses) by optimizing MR on the validation set. We list both in Table 1.

Table 1: Knowledge base completion (link prediction) results on WN18 and FB15k.

Model | Additional Information | WN18 Hits@10 (%) | WN18 MR | FB15k Hits@10 (%) | FB15k MR
SE (Bordes et al., 2011) | NO | 80.5 | 985 | 39.8 | 162
Unstructured (Bordes et al., 2014) | NO | 38.2 | 304 | 6.3 | 979
TransE (Bordes et al., 2013) | NO | 89.2 | 251 | 47.1 | 125
TransH (Wang et al., 2014) | NO | 86.7 | 303 | 64.4 | 87
TransR (Lin et al., 2015b) | NO | 92.0 | 225 | 68.7 | 77
CTransR (Lin et al., 2015b) | NO | 92.3 | 218 | 70.2 | 75
KG2E (He et al., 2015) | NO | 93.2 | 348 | 74.0 | 59
TransD (Ji et al., 2015) | NO | 92.2 | 212 | 77.3 | 91
TATEC (García-Durán et al., 2015) | NO | - | - | 76.7 | 58
NTN (Socher et al., 2013) | NO | 66.1 | - | 41.4 | -
DISTMULT (Yang et al., 2014) | NO | 94.2 | - | 57.7 | -
STransE (Nguyen et al., 2016) | NO | 94.7 (93) | 244 (206) | 79.7 | 69
RTransE (García-Durán et al., 2015) | Path | - | - | 76.2 | 50
PTransE (Lin et al., 2015a) | Path | - | - | 84.6 | 58
NLFeat (Toutanova et al., 2015) | Node + Link Features | 94.3 | - | 87.0 | -
Random Walk (Wei et al., 2016) | Path | 94.8 | - | 74.7 | -
IRN | NO | 95.3 | 249 | 92.7 | 38

To better understand the behavior of IRNs, we report results with different memory sizes and different $T_{max}$ on FB15k in Table 2. We find that the performance of IRNs increases significantly as the number of inference steps increases. Note that an IRN with $T_{max} = 1$ corresponds to an IRN without the shared memory. Interestingly, given $T_{max} = 5$, IRNs are not sensitive to the memory size. In particular, a larger memory always improves the MR score, but the best hits@10 is obtained with $|M| = 64$ memory vectors. A possible reason is that the best memory size is determined by the complexity of the task.

Table 2: The performance of IRNs with different memory sizes and inference steps on FB15k.

Number of memory vectors | Maximum inference step | Hits@10 (%) | MR
|M| = 64 | T_max = 1 | 80.7 | 55.7
|M| = 64 | T_max = 2 | 87.4 | 49.2
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 64 | T_max = 8 | 88.8 | 32.9
|M| = 32 | T_max = 5 | 90.1 | 38.7
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 128 | T_max = 5 | 92.2 | 36.1
|M| = 512 | T_max = 5 | 90.0 | 35.3
|M| = 4096 | T_max = 5 | 88.7 | 34.7
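For reference, the two ranking metrics used throughout this section can be sketched as follows, assuming `ranks` holds the 1-based filtered rank of the correct entity for each test triple.

```python
import numpy as np

def mean_rank_and_hits(ranks, k=10):
    """MR (lower is better) and hits@k in percent (higher is better)."""
    ranks = np.asarray(ranks, dtype=float)
    return ranks.mean(), (ranks <= k).mean() * 100.0

# Toy usage: five test triples with filtered ranks of the correct entity.
mr, hits10 = mean_rank_and_hits([1, 3, 12, 2, 7])   # -> (5.0, 80.0)
```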
We analyze the hits@10 results on FB15k with respect to the relation categories. Following the evaluation in Bordes et al. (2013), we evaluate the performance on four types of relations: 1-1 if a head entity can appear with at most one tail entity, 1-Many if a head entity can appear with many tail entities, Many-1 if multiple heads can appear with the same tail entity, and Many-Many if multiple head entities can appear with multiple tail entities. The detailed results are shown in Table 3. The IRN significantly improves the hits@10 results in the Many-1 category when predicting the head entity (+18.8%), in the 1-Many category when predicting the tail entity (+16.5%), and in the Many-Many category (over 8% on average).

Table 3: Hits@10 (%) per relation category on FB15k. (M stands for Many.)

Model | Head 1-1 | Head 1-M | Head M-1 | Head M-M | Tail 1-1 | Tail 1-M | Tail M-1 | Tail M-M
SE (Bordes et al., 2011) | 35.6 | 62.6 | 17.2 | 37.5 | 34.9 | 14.6 | 68.3 | 41.3
Unstructured (Bordes et al., 2014) | 34.5 | 2.5 | 6.1 | 6.6 | 34.3 | 4.2 | 1.9 | 6.6
TransE (Bordes et al., 2013) | 43.7 | 65.7 | 18.2 | 47.2 | 43.7 | 19.7 | 66.7 | 50.0
TransH (Wang et al., 2014) | 66.8 | 87.6 | 28.7 | 64.5 | 65.5 | 39.8 | 83.3 | 67.2
TransR (Lin et al., 2015b) | 78.8 | 89.2 | 34.1 | 69.2 | 79.2 | 37.4 | 90.4 | 72.1
CTransR (Lin et al., 2015b) | 81.5 | 89.0 | 34.7 | 71.2 | 80.8 | 38.6 | 90.1 | 73.8
KG2E (He et al., 2015) | 92.3 | 94.6 | 66.0 | 69.6 | 92.6 | 67.9 | 94.4 | 73.4
TransD (Ji et al., 2015) | 86.1 | 95.5 | 39.8 | 78.5 | 85.4 | 50.6 | 94.4 | 81.2
TATEC (García-Durán et al., 2015) | 79.3 | 93.2 | 42.3 | 77.2 | 78.5 | 51.5 | 92.7 | 80.7
STransE (Nguyen et al., 2016) | 82.8 | 94.2 | 50.4 | 80.1 | 82.4 | 56.9 | 93.4 | 83.1
PTransE (Lin et al., 2015a) | 91.0 | 92.8 | 60.9 | 83.8 | 91.2 | 74.0 | 88.9 | 86.4
IRN | 87.2 | 96.1 | 84.8 | 92.9 | 86.9 | 90.5 | 95.3 | 94.1

To analyze the behavior of IRNs further, we pick some examples of tail entity prediction, shown in Table 4. Interestingly, we observe that the model gradually improves the rank of the correct tail entity during the inference process.

Table 4: Test examples from the FB15k dataset; given a head entity and a relation, the IRN predicts the tail entity over multiple search steps.

Input: (Dean Koontz, /PEOPLE/PERSON/PROFESSION); Target: Film Producer
Step | Termination Prob. | Rank | Top-3 predicted entities
1 | 0.018 | 9 | Author, TV Director, Songwriter
2 | 0.052 | 7 | Actor, Singer, Songwriter
3 | 0.095 | 4 | Actor, Singer, Songwriter
4 | 0.132 | 4 | Actor, Singer, Songwriter
5 | 0.702 | 3 | Actor, Singer, Film Producer

Input: (War and Peace, /FILM/FILM/PRODUCED_BY); Target: Carlo Ponti
Step | Termination Prob. | Rank | Top-3 predicted entities
1 | 0.001 | 13 | Scott Rudin, Stephen Woolley, Hal B. Wallis
2 | 5.8E-13 | 7 | Billy Wilder, William Wyler, Elia Kazan
3 | 0.997 | 1 | Carlo Ponti, King Vidor, Hal B. Wallis

5 ANALYSIS: APPLYING IRNs TO A SHORTEST PATH SYNTHESIS TASK

We construct a synthetic task, shortest path synthesis, to evaluate the inference capability over a shared memory. The motivations for applying our model to this task are as follows. First, we want to evaluate IRNs on another task requiring multi-step inference. Second, we select a sequence generation task so that we can analyze the inference capability of IRNs in detail.

In the shortest path synthesis task, as illustrated in Figure 2, a training instance consists of a start node and an end node (e.g., 215 ⇝ 493) of an underlying weighted directed graph that is unknown to the models. The output of each instance is the shortest path between the given start and end nodes of the underlying graph (e.g., 215 → 101 → 493). Specifically, models can only observe the start-end node pairs as input and their shortest paths as output. The whole graph is unknown to the models, and the edge weights are not revealed in the training data. At test time, a path sequence is considered correct if it connects the start node and the end node of the underlying graph and the cost of the predicted path is the same as that of the optimal path.

Note that the task is very difficult and cannot be solved by dynamic programming algorithms, since the weights on the edges are not revealed to the algorithms or the models. 
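The exact graph and data construction are described in the paper's appendix; purely as a hypothetical sketch matching the stated statistics (500 nodes, randomly assigned edges), such a dataset could be generated by sampling a random weighted digraph and letting Dijkstra's algorithm supply the ground-truth paths. The edge count and weight range below are assumptions.

```python
import heapq
import random

def build_graph(n_nodes=500, n_edges=2000, max_w=10, seed=0):
    """Random weighted digraph as an adjacency dict: node -> [(neighbor, weight)]."""
    rng = random.Random(seed)
    adj = {u: [] for u in range(n_nodes)}
    for _ in range(n_edges):
        u, v = rng.sample(range(n_nodes), 2)   # two distinct nodes form an edge
        adj[u].append((v, rng.randint(1, max_w)))
    return adj

def shortest_path(adj, start, goal):
    """Dijkstra with lazy deletion; returns (cost, path) or (None, None)."""
    pq, settled = [(0, start, [start])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == goal:
            return cost, path
        if u in settled:
            continue
        settled.add(u)
        for v, w in adj[u]:
            if v not in settled:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return None, None

adj = build_graph()
# Models would see only the (start, goal) pair and the ground-truth path,
# never the edge weights.
cost, path = shortest_path(adj, 215, 493)
```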
To recover some of the shortest paths at test time, the model needs to infer the correct path from the observed instances. For example, assume that we observe two instances in the training data, "A ⇝ D: A → B → G → D" and "B ⇝ E: B → C → E". In order to answer the shortest path between A and E, the model needs to infer that "A → B → C → E" is a possible path between A and E. If there are multiple possible paths, the model has to decide which one is the shortest using statistical information.

In the experiments, we construct a graph with 500 nodes and randomly assign pairs of nodes to form edges. We split 20,000 instances for training, 10,000 instances for validation, and 10,000 instances for testing. We create the training and testing instances carefully so that the model needs to perform inference to recover the correct path. We describe the details of the graph and data construction in the appendix. A sub-graph of the data is shown in Figure 2.

For the settings of the IRN, we switch the output module to a GRU decoder for a sequence generation task. We assign reward $r_T = 1$ if all the predicted symbols are correct and $0$ otherwise. We use a 64-dimensional embedding vector for input symbols, a GRU controller with 128 cells, and a GRU decoder with 128 cells. We set the maximum inference step $T_{max}$ to 5.

Step | Termination Probability | Distance | Predictions
1 | 0.001 | N/A | 215 → 158 → 89 → 458 → 493
2 | 0 | N/A | 215 → 479 → 277 → 353 → 493
3 | 0 | N/A | 215 → 49 → 493
4 | 0 | 0.77 | 215 → 140 → 493
5 | 0.999 | 0.70 | 215 → 101 → 493

Figure 2: An example from the shortest path synthesis dataset, given the input "215 ⇝ 493" (answer: 215 → 101 → 493). Note that we only show the nodes related to this example. The corresponding termination probabilities and prediction results are shown in the table. The model terminates at step 5.

We compare the IRN with two baseline approaches: dynamic programming without edge-weight information and a standard sequence-to-sequence model (Sutskever et al., 2014) with a parameter size similar to our model's. Without knowing the edge weights, dynamic programming recovers only 589 correct paths at test time. The sequence-to-sequence model recovers 904 correct paths. The IRN outperforms both baselines, recovering 1,319 paths. Furthermore, 76.9% of the predicted paths from the IRN are valid paths, where a path is valid if it connects the start and end nodes of the underlying graph. In contrast, only 69.1% of the predicted paths from the sequence-to-sequence model are valid.

To further understand the inference process of the IRN, Figure 2 shows the inference process on a test instance. Interestingly, to make the correct prediction on this instance, the model has to perform fairly complicated inference.² We observe that the model cannot find a connected path in the first three steps. Finally, the model finds a valid path at the fourth step and predicts the correct shortest path sequence at the fifth step.

6 RELATED WORK

Link Prediction and Knowledge Base Completion. Given that $r$ is a relation, $h$ is the head entity, and $t$ is the tail entity, most embedding models for link prediction focus on finding a scoring function $f_r(h, t)$ that represents the implausibility of a triple (Bordes et al., 2011; 2014; 2013; Wang et al., 2014; Ji et al., 2015; Nguyen et al., 2016). In many studies, the scoring function $f_r(h, t)$ is linear or bi-linear. 
For example, in TransE (Bordes et al., 2013), the function is implemented as $f_r(h, t) = \|h + r - t\|$, where $h$, $r$ and $t$ are the corresponding vector representations.

Recently, different studies (Guu et al., 2015; Lin et al., 2015a; Toutanova et al., 2016) have demonstrated the importance for models to also learn from multi-step relations. Learning from multi-step relations injects structured relationships between triples into the model. However, this also poses the technical challenge of considering an exponential number of multi-step relationships. Prior approaches address this issue by designing path-mining algorithms (Lin et al., 2015a) or by considering all possible paths with a dynamic programming algorithm, under the restriction of using linear or bi-linear models only (Toutanova et al., 2016). Toutanova & Chen (2015) show the effectiveness of simple node and link features that encode structured information on FB15k and WN18. In our work, the IRN outperforms prior results and shows that similar information can be captured by the model without explicitly designed features.

² In the example, to find the right path, the model needs to search over the observed instances "215 ⇝ 448: 215 → 101 → 448" and "76 ⇝ 493: 76 → 308 → 101 → 493", and to figure out that the distance of "140 → 493" is longer than that of "101 → 493" (there are four shortest paths containing 101 → 493 and three shortest paths containing 140 → 493 in the training set).

Studies such as Riedel et al. (2013) show that incorporating textual information can further improve knowledge base completion. It would be interesting to incorporate information from outside the knowledge bases into our model in the future.

Neural Frameworks. Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have been shown to be successful in many applications such as machine translation and conversation modeling (Sordoni et al., 2015). While sequence-to-sequence models are powerful, recent work has shown the necessity of incorporating an external memory to perform inference on simple algorithmic tasks (Graves et al., 2014; 2016).

7 CONCLUSION

In this paper, we propose Implicit ReasoNets (IRNs), which perform inference over a shared memory that models large-scale structured relationships implicitly. The inference process is guided by a search controller that accesses the memory, which is shared across instances. We demonstrate and analyze the multi-step inference capability of IRNs on knowledge base completion tasks and a shortest path synthesis task. Our model, without using any explicit knowledge base information in the inference procedure, outperforms all prior approaches on the popular FB15k benchmark by more than 5.7%.

For future work, we aim to extend IRNs in two ways. First, inspired by Ribeiro et al. (2016), we would like to develop techniques to generate human-understandable reasoning interpretations from the shared memory. Second, we plan to apply IRNs to infer relationships in unstructured data such as natural language. For example, given a natural language query such as "are rabbits animals?", the model could infer a natural language answer implicitly in the shared memory, without performing inference directly on top of a huge number of observed sentences such as "all mammals are animals" and "rabbits are animals". 
We believe the ability to perform inference implicitly is crucial for modeling large-scale structured relationships.

ACKNOWLEDGMENTS

We thank Scott Wen-Tau Yih, Kristina Toutanova, Jian Tang and Zachary Lipton for their thoughtful feedback and discussions.
SyHbpXIVl
Review
6: Marginally above acceptance threshold
This paper proposes a method for link prediction on knowledge bases. The method contains 2 main innovations: (1) an iterative inference process that allows the model to refine its predictions and (2) a shared memory component. Thanks to these 2 elements, the model introduced in the paper achieves remarkable results on two benchmarks.

The paper is fairly well written. The model is interesting and the experimental results are strikingly good. Still, I only rate it a weak accept, for the following reasons.

* The main problem with this paper is that there is little explanation of how and why the two new elements mentioned above lead to such better results. For instance:
- What is the performance without the shared memory? And when its size is grown?
- How is the performance impacted when one varies Tmax from 1 to 5 (which is the chosen value for the experiments, I assume)? This gives an indication of how often the termination gate fires.
- It would also be interesting to give the proportion of examples for which the inference terminates before hitting Tmax.
- What is the proportion of examples for which the prediction changes across inference iterations?
* A value of \lambda set to 10 (Section 2) seems to indicate a low temperature for the softmax. Is the attention finally attending mostly to a single cell? How do the softmax activations change with the relationship type? The entity type?
* FB15k and WN18 are quite old, overused benchmarks now. It would be interesting to test in larger-scale conditions.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
r1PRvK9el
ICLR.cc/2017/conference
2017
Implicit ReasoNet: Modeling Large-Scale Structured Relationships with Shared Memory
["Yelong Shen*", "Po-Sen Huang*", "Ming-Wei Chang", "Jianfeng Gao"]
Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure does not operate on top of observed instances, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.
["Deep learning", "Reinforcement Learning"]
ABSTRACT

Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure does not operate on top of observed instances, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.

1 INTRODUCTION

Knowledge bases such as WordNet (Fellbaum, 1998), Freebase (Bollacker et al., 2008), or Yago (Suchanek et al., 2007) contain many real-world facts expressed as triples, e.g., (Bill Gates, FounderOf, Microsoft). These knowledge bases are useful for many downstream applications such as question answering (Berant et al., 2013; Yih et al., 2015) and information extraction (Mintz et al., 2009). However, despite the formidable size of knowledge bases, many important facts are still missing. For example, West et al. (2014) showed that 21% of the 100K most frequent PERSON entities have no recorded nationality in a recent version of Freebase. We seek to infer unknown relations based on the observed triples. Thus, knowledge base completion (KBC) has emerged as an important open research problem (Nickel et al., 2011).

Neural-network based methods have been very popular for solving the KBC task. Following Bordes et al. (2013), one of the most popular approaches for KBC is to learn vector-space representations of entities and relations during training, and then apply linear or bi-linear operations to infer the missing relations at test time. However, several recent papers demonstrate the limitations of prior approaches relying on vector-space models alone: by themselves, they provide no straightforward way to capture the structured relationships between multiple triples adequately (Guu et al., 2015; Toutanova et al., 2016; Lin et al., 2015a). For example, assume that we want to fill in the missing relation for the triple (Obama, NATIONALITY, ?); a multi-step search procedure might be needed to discover the evidence in observed triples such as (Obama, BORNIN, Hawaii) and (Hawaii, PARTOF, U.S.A). To address this issue, Guu et al. (2015), Toutanova et al. (2016), and Lin et al. (2015a) propose different approaches for injecting structured information by directly operating on the observed triples. Unfortunately, due to the size of knowledge bases, these newly proposed approaches suffer from some limitations, as most paths are not informative for inferring missing relations, and it is prohibitive to consider all possible paths during training with expressive models.

In this paper, we take a different approach from prior work on KBC by addressing the challenges of performing large-scale inference through the design of a search controller and a shared memory. Our inference procedure centers around the search controller, which only operates on the shared memory instead of directly manipulating the observed triples in the knowledge base. 
IRNs use training data to learn to perform multi-step inference through the shared memory. (*Equal contribution.)

[Figure 1: An IRN architecture: the input module produces the query representation q; the search controller maintains internal states, attends over the shared memory M, and checks the termination gate at each step; upon termination, the output module produces the prediction.]

First, the input module generates a representation of the query. Then, the search controller repeatedly interacts with the shared memory and checks the termination gate. After each iteration, if the termination condition is met, the model stops the search process and calls the output module to generate a prediction. The shared memory is designed to store key information about the overall structures learned during training, and hence the search controller only needs to access the shared memory instead of operating on the observed triples.

There are several advantages to using IRNs. First, the cost of inference can be controlled, because the search controller only needs to access the shared memory. Second, all the modules, including the search controller and memory, are jointly trained, which alleviates the need to inject structured relationships between instances manually. Finally, we can easily extend IRNs to other tasks that require modeling structured relationships between instances by switching the input and output modules.

The main contributions of our paper are as follows:
- We propose Implicit ReasoNets (IRNs), which use a shared memory guided by a search controller to model large-scale structured relationships implicitly.
- We evaluate IRNs and demonstrate that our proposed model achieves state-of-the-art results on the popular FB15k benchmark, surpassing prior approaches by more than 5.7%.
- We analyze the behavior of IRNs for shortest path synthesis. We show that IRNs outperform a standard sequence-to-sequence model and execute meaningful multi-step inference.

2 REASONET FOR IMPLICIT INFERENCE

In this section, we describe the general architecture of IRNs in a way that is agnostic to KBC. IRNs are composed of four main components: an input component, an output component, a shared memory, and a search controller, as shown in Figure 1. We briefly describe each component below.

Input/Output Modules: These two modules are task-dependent. The input module takes a query and converts it into a vector representation $q$. The output module is a function $f_o$ that converts the hidden state received from the search controller ($s$) into an output $O$. We optimize the whole model using the output prediction $O$ with respect to a ground-truth target, using a task-specific loss function.

Shared Memory: The shared memory is denoted as $M$. It consists of a list of memory vectors, $M = \{m_i\}_{i=1,\dots,I}$, where each $m_i$ is a fixed-dimensional vector. The memory vectors are randomly initialized and automatically updated through back-propagation. The shared memory component is shared across all instances.

Search Controller: The search controller is a recurrent neural network that controls the search process by keeping internal state sequences to track the current search process and history. 
The search controller uses an attention mechanism to fetch information from relevant memory vectors in $M$, and decides whether the model should output the prediction or continue to generate the next possible output.

Internal State: The internal state of the search controller is denoted as $S$, a vector representation of the search process. The initial state $s_1$ is usually the vector representation of the input $q$. The internal state at the $t$-th time step is represented by $s_t$. The sequence of internal states is modeled by an RNN: $s_{t+1} = \mathrm{RNN}(s_t, x_t; \theta_s)$.

Attention to memory: The attention vector $x_t$ at the $t$-th time step is generated based on the current internal state $s_t$ and the shared memory $M$: $x_t = f_{att}(s_t, M; \theta_x)$. Specifically, the attention score $a_{t,i}$ on a memory vector $m_i$ given a state $s_t$ is computed as $a_{t,i} = \mathrm{softmax}_{i=1,\dots,|M|}\,\lambda \cos(W_1 m_i, W_2 s_t)$, where $\lambda$ is set to 10 in our experiments and the weight matrices $W_1$ and $W_2$ are learned during training. The attention vector can then be written as $x_t = f_{att}(s_t, M; \theta_x) = \sum_i^{|M|} a_{t,i} m_i$.

Termination Control: The termination gate produces a stochastic random variable according to the current internal state, $t_t \sim p(\cdot\,|\,f_{tc}(s_t; \theta_{tc}))$. $t_t$ is a binary random variable: if $t_t$ is true, the IRN finishes the search process and the output module executes at time step $t$; otherwise the IRN generates the next attention vector $x_{t+1}$ and feeds it into the state network to update the next internal state $s_{t+1}$. In our experiments, the termination variable is modeled by a logistic regression: $f_{tc}(s_t; \theta_{tc}) = \mathrm{sigmoid}(W_{tc} s_t + b_{tc})$, where the weight matrix $W_{tc}$ and bias vector $b_{tc}$ are learned during training.
Comparing IRNs to Memory Networks (MemNN) (Weston et al., 2014; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014; 2016), the biggest difference between our model and these existing frameworks is the search controller and the use of the shared memory. We build upon our previous work (Shen et al., 2016) in using a search controller module to dynamically perform multi-step inference depending on the complexity of the instance. MemNN and NTM explicitly store inputs (such as graph definitions or supporting facts) in memory. In contrast, IRNs do not explicitly store the observed inputs in the shared memory; instead, we operate directly on the shared memory, which models the structured relationships implicitly. We randomly initialize the memory and update it with respect to task-specific objectives. The idea of exploiting a shared memory was proposed independently by Munkhdalai & Yu (2016). Despite using the same term, the goal and operations of IRNs differ, as IRNs allow the model to perform multi-step inference dynamically for each instance.

2.1 STOCHASTIC INFERENCE PROCESS

The inference process of an IRN is as follows. First, the model converts a task-dependent input into a vector representation through the input module. Then, the model uses the input representation to initialize the search controller. At every time step, the search controller determines whether the process is finished by sampling from the distribution given by the termination gate. If the outcome is termination, the output module generates a task-dependent prediction given the search controller state. If the outcome is continuation, the search controller moves on to the next time step and creates an attention vector based on the current search controller state and the shared memory. Intuitively, we design the whole process to mimic a search procedure that iteratively finds its target through a structure and outputs its prediction when a satisfying answer is found. The detailed inference process is described in Algorithm 1.

Algorithm 1: Stochastic Inference Process in an IRN
Input: Randomly initialized shared memory $M$; input vector $q$; maximum step $T_{max}$
Output: Output vector $o$
1. Define $s_1 = q$; $t = 1$.
2. Sample $t_t$ from the distribution $p(\cdot\,|\,f_{tc}(s_t; \theta_{tc}))$.
3. If $t_t$ is false, go to Step 4; otherwise go to Step 7.
4. Generate an attention vector $x_t = f_{att}(s_t, M; \theta_x)$.
5. Update the internal state $s_{t+1} = \mathrm{RNN}(s_t, x_t; \theta_s)$.
6. Set $t = t + 1$; if $t < T_{max}$, go to Step 2; otherwise go to Step 7.
7. Generate the output $o_t = f_o(s_t; \theta_o)$.
8. Return $o = o_t$.

The inference process of an IRN can be considered a Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998) in the reinforcement learning (RL) literature. The IRN produces the output vector $o_T$ at the $T$-th step, which implies the termination gate variables $t_{1:T} = (t_1 = 0, t_2 = 0, \dots, t_{T-1} = 0, t_T = 1)$, and then takes a prediction action $p_T$ according to the probability distribution given $o_T$. Therefore, the IRN learns a stochastic policy $\pi((t_{1:T}, p_T)\,|\,q; \theta)$ with parameters $\theta$, giving a distribution over termination actions and over prediction actions. The termination step $T$ varies from instance to instance. The parameters $\theta$ of the IRN are given by the parameters of the embedding matrices $W$ for the input/output modules, the shared memory $M$, the attention network $\theta_x$, the search controller RNN network $\theta_s$, the output generation network $\theta_o$, and the termination gate network $\theta_{tc}$. The parameters $\theta = \{W, M, \theta_x, \theta_s, \theta_o, \theta_{tc}\}$ are trained to maximize the total expected reward that the IRN receives when interacting with the environment. The expected reward for an instance is defined as:

$$J(\theta) = \mathbb{E}_{\pi(t_{1:T}, p_T; \theta)}\left[\sum_{t=1}^{T} r_t\right]$$

The reward can only be received at the final termination step, when a prediction action $p_T$ is performed; the rewards at intermediate steps are zero, $\{r_t = 0\}_{t=1,\dots,T-1}$.

We employ the approach from our previous work (Shen et al., 2016), a REINFORCE (Williams, 1992) based Contrastive Reward method, to maximize the expected reward. The gradient of $J$ can be written as:

$$\nabla_\theta J(\theta) = \sum_{(t_{1:T}, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta)\left[\nabla_\theta \log \pi(t_{1:T}, p_T; \theta)\left(\frac{r_T}{b_i} - 1\right)\right]$$

where $A^\dagger$ is the set of all possible episodes, and the baseline $b_i = \sum_{(t_{1:T}, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta)\, r_T$ is the expected reward over the $|A^\dagger|$ episodes for the $i$-th training instance.
3 APPLYING IRNs TO KNOWLEDGE BASE COMPLETION

The goal of KBC tasks (Bordes et al., 2013) is to predict a head or a tail entity given the relation type and the other entity, i.e., predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$, where $?$ denotes the missing entity. For a KBC task, the input to our model is a subject entity (a head or tail entity) and a relation. The task-dependent input module first extracts the embedding vectors for the entity and relation from an embedding matrix. We then represent the query vector $q$ for the IRN as the concatenation of the two vectors. We randomly initialize the shared memory component. At each step, a training triple is processed through the model by Algorithm 1, where no explicit path information is given. The IRN updates the shared memory implicitly with respect to the objective function.

For the task-dependent output module, we use a nonlinear projection to map the search controller state to an output vector $o$: $f_o(s_t; \theta_o) = \tanh(W_o s_t + b_o)$, where $W_o$ and $b_o$ are the weight matrix and bias vector, respectively. We define the ground-truth target (object) entity embedding as $y$, and use the $L_1$ distance between the output $o$ and the target entity $y$, namely $d(o, y) = \|o - y\|_1$. We sample a set of incorrect entity embeddings $N = \{y_i\}_{i=1}^{|N|}$ as negative examples. The probability of selecting a prediction $\hat{y} \in D$ can then be approximated as

$$p(\hat{y}\,|\,o) = \frac{\exp(-\gamma\, d(o, \hat{y}))}{\sum_{y_k \in D} \exp(-\gamma\, d(o, y_k))}$$

where $D = N \cup \{y\}$. We set $|N|$ and $\gamma$ to 20 and 5, respectively, for the experiments on the FB15k and WN18 datasets. The IRN performs a prediction action $p_T$, selecting $\hat{y}$ with probability $p(\hat{y}\,|\,o)$. We define the reward of the prediction action as one if the ground-truth entity is selected, and zero otherwise.
4 EXPERIMENTAL RESULTS

In this section, we evaluate the performance of our model on the benchmark FB15k and WN18 datasets for KBC tasks (Bordes et al., 2013). These datasets contain multiple relations between head and tail entities. Given a head entity and a relation, the model produces a ranked list of entities according to the score of each entity being the tail entity of the triple. To evaluate the ranking, we report the mean rank (MR), the mean rank of the correct entity across the test examples, and hits@10, the proportion of correct entities ranked in the top-10 predictions. A lower MR or a higher hits@10 indicates better prediction performance. We follow the evaluation protocol in Bordes et al. (2013) and report filtered results, where negative examples $N$ are removed from the dataset; in this way, we avoid negative examples that are themselves valid triples being ranked above the target triple.

We use the same hyper-parameters for both the FB15k and WN18 datasets. Entity embeddings (which are not shared between the input and output modules) and relation embeddings are both 100-dimensional. We use the input module and output module to encode subject and object entities, respectively. There are 64 memory vectors with 200 dimensions each, initialized by random vectors with unit $L_2$-norm. We use a single-layer GRU with 200 cells as the search controller. We set the maximum inference step of the IRN to 5. We randomly initialize all model parameters and use SGD as the training algorithm with a mini-batch size of 64. We set the learning rate to a constant, 0.01. To prevent the model from learning a trivial solution by increasing the entity embedding norms, we follow Bordes et al. (2013) and constrain the $L_2$-norm of the entity embeddings to 1. We use hits@10 as the validation metric for the IRN. Following Lin et al. (2015a), we add reverse relations to the training triple set to increase the training data.

Following Nguyen et al. (2016), we divide the results of previous work into two groups. The first group contains models that directly optimize a scoring function for the triples in a knowledge base without using extra information. The second group of models makes use of additional information from multi-step relations. For example, the RTransE (García-Durán et al., 2015) and PTransE (Lin et al., 2015a) models extend the TransE (Bordes et al., 2013) model by explicitly exploring multi-step relations in the knowledge base to regularize the trained embeddings. The NLFeat model (Toutanova et al., 2015) is a log-linear model that makes use of simple node and link features.

Table 1 presents the experimental results. According to the table, our model significantly outperforms previous baselines, regardless of whether they use additional information. Specifically, on FB15k, the MR of our model surpasses all previous results by 12, and our hits@10 outperforms others by 5.7%. On WN18, the IRN obtains the highest hits@10 while maintaining an MR similar to previous work.¹

¹ Nguyen et al. (2016) reported two results on WN18: the first is obtained by optimizing hits@10 on the validation set, and the second (in parentheses) by optimizing MR on the validation set. We list both in Table 1.

Table 1: Knowledge base completion (link prediction) results on WN18 and FB15k.

Model | Additional Information | WN18 Hits@10 (%) | WN18 MR | FB15k Hits@10 (%) | FB15k MR
SE (Bordes et al., 2011) | NO | 80.5 | 985 | 39.8 | 162
Unstructured (Bordes et al., 2014) | NO | 38.2 | 304 | 6.3 | 979
TransE (Bordes et al., 2013) | NO | 89.2 | 251 | 47.1 | 125
TransH (Wang et al., 2014) | NO | 86.7 | 303 | 64.4 | 87
TransR (Lin et al., 2015b) | NO | 92.0 | 225 | 68.7 | 77
CTransR (Lin et al., 2015b) | NO | 92.3 | 218 | 70.2 | 75
KG2E (He et al., 2015) | NO | 93.2 | 348 | 74.0 | 59
TransD (Ji et al., 2015) | NO | 92.2 | 212 | 77.3 | 91
TATEC (García-Durán et al., 2015) | NO | - | - | 76.7 | 58
NTN (Socher et al., 2013) | NO | 66.1 | - | 41.4 | -
DISTMULT (Yang et al., 2014) | NO | 94.2 | - | 57.7 | -
STransE (Nguyen et al., 2016) | NO | 94.7 (93) | 244 (206) | 79.7 | 69
RTransE (García-Durán et al., 2015) | Path | - | - | 76.2 | 50
PTransE (Lin et al., 2015a) | Path | - | - | 84.6 | 58
NLFeat (Toutanova et al., 2015) | Node + Link Features | 94.3 | - | 87.0 | -
Random Walk (Wei et al., 2016) | Path | 94.8 | - | 74.7 | -
IRN | NO | 95.3 | 249 | 92.7 | 38

To better understand the behavior of IRNs, we report results with different memory sizes and different $T_{max}$ on FB15k in Table 2. We find that the performance of IRNs increases significantly as the number of inference steps increases. Note that an IRN with $T_{max} = 1$ corresponds to an IRN without the shared memory. Interestingly, given $T_{max} = 5$, IRNs are not sensitive to the memory size. In particular, a larger memory always improves the MR score, but the best hits@10 is obtained with $|M| = 64$ memory vectors. A possible reason is that the best memory size is determined by the complexity of the task.

Table 2: The performance of IRNs with different memory sizes and inference steps on FB15k.

Number of memory vectors | Maximum inference step | Hits@10 (%) | MR
|M| = 64 | T_max = 1 | 80.7 | 55.7
|M| = 64 | T_max = 2 | 87.4 | 49.2
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 64 | T_max = 8 | 88.8 | 32.9
|M| = 32 | T_max = 5 | 90.1 | 38.7
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 128 | T_max = 5 | 92.2 | 36.1
|M| = 512 | T_max = 5 | 90.0 | 35.3
|M| = 4096 | T_max = 5 | 88.7 | 34.7
We analyze the hits@10 results on FB15k with respect to the relation categories. Following the evaluation in Bordes et al. (2013), we evaluate the performance on four types of relations: 1-1 if a head entity can appear with at most one tail entity, 1-Many if a head entity can appear with many tail entities, Many-1 if multiple heads can appear with the same tail entity, and Many-Many if multiple head entities can appear with multiple tail entities. The detailed results are shown in Table 3. The IRN significantly improves the hits@10 results in the Many-1 category when predicting the head entity (+18.8%), in the 1-Many category when predicting the tail entity (+16.5%), and in the Many-Many category (over 8% on average).

Table 3: Hits@10 (%) per relation category on FB15k. (M stands for Many.)

Model | Head 1-1 | Head 1-M | Head M-1 | Head M-M | Tail 1-1 | Tail 1-M | Tail M-1 | Tail M-M
SE (Bordes et al., 2011) | 35.6 | 62.6 | 17.2 | 37.5 | 34.9 | 14.6 | 68.3 | 41.3
Unstructured (Bordes et al., 2014) | 34.5 | 2.5 | 6.1 | 6.6 | 34.3 | 4.2 | 1.9 | 6.6
TransE (Bordes et al., 2013) | 43.7 | 65.7 | 18.2 | 47.2 | 43.7 | 19.7 | 66.7 | 50.0
TransH (Wang et al., 2014) | 66.8 | 87.6 | 28.7 | 64.5 | 65.5 | 39.8 | 83.3 | 67.2
TransR (Lin et al., 2015b) | 78.8 | 89.2 | 34.1 | 69.2 | 79.2 | 37.4 | 90.4 | 72.1
CTransR (Lin et al., 2015b) | 81.5 | 89.0 | 34.7 | 71.2 | 80.8 | 38.6 | 90.1 | 73.8
KG2E (He et al., 2015) | 92.3 | 94.6 | 66.0 | 69.6 | 92.6 | 67.9 | 94.4 | 73.4
TransD (Ji et al., 2015) | 86.1 | 95.5 | 39.8 | 78.5 | 85.4 | 50.6 | 94.4 | 81.2
TATEC (García-Durán et al., 2015) | 79.3 | 93.2 | 42.3 | 77.2 | 78.5 | 51.5 | 92.7 | 80.7
STransE (Nguyen et al., 2016) | 82.8 | 94.2 | 50.4 | 80.1 | 82.4 | 56.9 | 93.4 | 83.1
PTransE (Lin et al., 2015a) | 91.0 | 92.8 | 60.9 | 83.8 | 91.2 | 74.0 | 88.9 | 86.4
IRN | 87.2 | 96.1 | 84.8 | 92.9 | 86.9 | 90.5 | 95.3 | 94.1

To analyze the behavior of IRNs further, we pick some examples of tail entity prediction, shown in Table 4. Interestingly, we observe that the model gradually improves the rank of the correct tail entity during the inference process.

Table 4: Test examples from the FB15k dataset; given a head entity and a relation, the IRN predicts the tail entity over multiple search steps.

Input: (Dean Koontz, /PEOPLE/PERSON/PROFESSION); Target: Film Producer
Step | Termination Prob. | Rank | Top-3 predicted entities
1 | 0.018 | 9 | Author, TV Director, Songwriter
2 | 0.052 | 7 | Actor, Singer, Songwriter
3 | 0.095 | 4 | Actor, Singer, Songwriter
4 | 0.132 | 4 | Actor, Singer, Songwriter
5 | 0.702 | 3 | Actor, Singer, Film Producer

Input: (War and Peace, /FILM/FILM/PRODUCED_BY); Target: Carlo Ponti
Step | Termination Prob. | Rank | Top-3 predicted entities
1 | 0.001 | 13 | Scott Rudin, Stephen Woolley, Hal B. Wallis
2 | 5.8E-13 | 7 | Billy Wilder, William Wyler, Elia Kazan
3 | 0.997 | 1 | Carlo Ponti, King Vidor, Hal B. Wallis

5 ANALYSIS: APPLYING IRNs TO A SHORTEST PATH SYNTHESIS TASK

We construct a synthetic task, shortest path synthesis, to evaluate the inference capability over a shared memory. The motivations for applying our model to this task are as follows. First, we want to evaluate IRNs on another task requiring multi-step inference. Second, we select a sequence generation task so that we can analyze the inference capability of IRNs in detail.

In the shortest path synthesis task, as illustrated in Figure 2, a training instance consists of a start node and an end node (e.g., 215 ⇝ 493) of an underlying weighted directed graph that is unknown to the models. The output of each instance is the shortest path between the given start and end nodes of the underlying graph (e.g., 215 → 101 → 493). Specifically, models can only observe the start-end node pairs as input and their shortest paths as output. The whole graph is unknown to the models, and the edge weights are not revealed in the training data. At test time, a path sequence is considered correct if it connects the start node and the end node of the underlying graph and the cost of the predicted path is the same as that of the optimal path.

Note that the task is very difficult and cannot be solved by dynamic programming algorithms, since the weights on the edges are not revealed to the algorithms or the models. 
To recover some of the shortest paths at test time, the model needs to infer the correct path from the observed instances. For example, assume that we observe two instances in the training data, "A ⇝ D: A → B → G → D" and "B ⇝ E: B → C → E". In order to answer the shortest path between A and E, the model needs to infer that "A → B → C → E" is a possible path between A and E. If there are multiple possible paths, the model has to decide which one is the shortest using statistical information.

In the experiments, we construct a graph with 500 nodes and randomly assign pairs of nodes to form edges. We split 20,000 instances for training, 10,000 instances for validation, and 10,000 instances for testing. We create the training and testing instances carefully so that the model needs to perform inference to recover the correct path. We describe the details of the graph and data construction in the appendix. A sub-graph of the data is shown in Figure 2.

For the settings of the IRN, we switch the output module to a GRU decoder for the sequence generation task. We assign reward rT = 1 if all the predicted symbols are correct and 0 otherwise. We use a 64-dimensional embedding vector for input symbols, a GRU controller with 128 cells, and a GRU decoder with 128 cells. We set the maximum inference step Tmax to 5.

Figure 2: An example from the shortest path synthesis dataset, given the input "215 ⇝ 493" (answer: 215 → 101 → 493). Note that only the nodes related to this example are shown. The corresponding termination probabilities and prediction results are listed below; the model terminates at step 5.

Step | Termination Prob. | Distance | Predictions
1 | 0.001 | N/A | 215 → 158 → 89 → 458 → 493
2 | 0 | N/A | 215 → 479 → 277 → 353 → 493
3 | 0 | N/A | 215 → 49 → 493
4 | 0 | 0.77 | 215 → 140 → 493
5 | 0.999 | 0.70 | 215 → 101 → 493

We compare the IRN with two baseline approaches: dynamic programming without edge-weight information and a standard sequence-to-sequence model (Sutskever et al., 2014) with a similar parameter count. Without knowing the edge weights, dynamic programming recovers only 589 correct paths at test time. The sequence-to-sequence model recovers 904 correct paths. The IRN outperforms both baselines, recovering 1,319 paths. Furthermore, 76.9% of the paths predicted by the IRN are valid, where a path is valid if it connects the start and end nodes of the underlying graph. In contrast, only 69.1% of the paths predicted by the sequence-to-sequence model are valid.

To further understand the inference process of the IRN, Figure 2 shows the inference process on a test instance. Interestingly, to make the correct prediction on this instance, the model has to perform fairly complicated inference.[2] We observe that the model cannot find a connected path in the first three steps. It finally finds a valid path at the fourth step and predicts the correct shortest path sequence at the fifth step.

[2] In the example, to find the right path, the model needs to search over the observed instances "215 ⇝ 448: 215 → 101 → 448" and "76 ⇝ 493: 76 → 308 → 101 → 493", and to figure out that the distance of "140 → 493" is longer than that of "101 → 493" (there are four shortest paths through 101 → 493 and three shortest paths through 140 → 493 in the training set).

6 RELATED WORK

Link Prediction and Knowledge Base Completion. Given that r is a relation, h is the head entity, and t is the tail entity, most embedding models for link prediction focus on finding a scoring function fr(h, t) that represents the implausibility of a triple (Bordes et al., 2011; 2013; 2014; Wang et al., 2014; Ji et al., 2015; Nguyen et al., 2016). In many studies, the scoring function fr(h, t) is linear or bi-linear.
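As a quick illustration of this family of scoring functions, here is a minimal numpy sketch of TransE-style scoring and tail ranking, the example described next. The embeddings here are toy random vectors rather than learned ones, and the function names are ours.

import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 1000, 50, 32
E = rng.standard_normal((n_entities, dim))    # entity embeddings (toy)
R = rng.standard_normal((n_relations, dim))   # relation embeddings (toy)

def transe_score(h, r, t):
    """Implausibility f_r(h, t) = ||h + r - t||; lower means more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def rank_tails(h, r):
    """Rank all candidate tail entities for the query (h, r, ?)."""
    scores = np.linalg.norm(E[h] + R[r] - E, axis=1)
    return np.argsort(scores)                 # most plausible candidates first

print(rank_tails(0, 0)[:10])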
For example, in TransE (Bordes et al., 2013), the function is implemented as fr(h, t) = ||h + r − t||, where h, r, and t are the corresponding vector representations.

Recently, different studies (Guu et al., 2015; Lin et al., 2015a; Toutanova et al., 2016) have demonstrated the importance of models also learning from multi-step relations. Learning from multi-step relations injects the structured relationships between triples into the model. However, it also poses the technical challenge of considering exponential numbers of multi-step relationships. Prior approaches address this issue by designing path-mining algorithms (Lin et al., 2015a) or by considering all possible paths using a dynamic programming algorithm, with the restriction of using linear or bi-linear models only (Toutanova et al., 2016). Toutanova & Chen (2015) show the effectiveness of using simple node and link features that encode structured information on FB15k and WN18. In our work, the IRN outperforms prior results and shows that similar information can be captured by the model without explicitly designing features.

Studies such as Riedel et al. (2013) show that incorporating textual information can further improve knowledge base completion. It would be interesting to incorporate information from outside the knowledge bases into our model in the future.

Neural Frameworks. Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have been shown to be successful in many applications such as machine translation and conversation modeling (Sordoni et al., 2015). While sequence-to-sequence models are powerful, recent work has shown the necessity of incorporating an external memory to perform inference in simple algorithmic tasks (Graves et al., 2014; 2016).

7 CONCLUSION

In this paper, we propose Implicit ReasoNets (IRNs), which perform inference over a shared memory that models large-scale structured relationships implicitly. The inference process is guided by a search controller that accesses the memory, which is shared across instances. We demonstrate and analyze the multi-step inference capability of IRNs on knowledge base completion tasks and a shortest path synthesis task. Our model, without using any explicit knowledge base information in the inference procedure, outperforms all prior approaches on the popular FB15k benchmark by more than 5.7%.

For future work, we aim to extend IRNs in two ways. First, inspired by Ribeiro et al. (2016), we would like to develop techniques that generate human-understandable reasoning interpretations from the shared memory. Second, we plan to apply IRNs to infer relationships in unstructured data such as natural language. For example, given a natural language query such as "are rabbits animals?", the model could infer a natural language answer implicitly in the shared memory, without performing inference directly on top of a huge amount of observed sentences such as "all mammals are animals" and "rabbits are animals".
We believe the ability to perform inference implicitly is crucial for modeling large-scale structured relationships.

ACKNOWLEDGMENTS

We thank Scott Wen-Tau Yih, Kristina Toutanova, Jian Tang and Zachary Lipton for their thoughtful feedback and discussions.
B1g02CM4g
review
6: Marginally above acceptance threshold
[Summary]
This paper proposes a new way for knowledge base completion with two highlights: 1) adopting an implicit shared memory, which makes no assumption about its structure and is completely learned during training; 2) modeling a multi-step search process that can decide when to terminate. The experimental results on WN18 and FB15k seem pretty good. The authors also perform an analysis on a shortest path synthetic task, and demonstrate that this model is better than standard seq2seq. The paper is well-written and easy to follow.

[Major comments]
I actually do like the idea and am also impressed that this model can work well. The main concern is that this paper presents too little analysis about how it works and whether it is sensitive to the hyper-parameters, beyond reporting a final model on WN18 and FB15k. One key hyper-parameter, I believe, is the size of the shared memory (64 in the experiments). I don’t think that this number should be fixed for all tasks; at the least, it should depend on the KB scale. Could you verify this in your experiments? Would it even be possible to make a memory structure with dynamic size?

The RL setting (stochastic search process) is also one highlight of the paper, but could you demonstrate how much it really helps? I think it is necessary to compare to the following: remove the termination gate, fix the number of inference steps, and see how well the model does. Also show how the performance varies with the number of steps.

I appreciate your attempts on the shortest path synthetic task. However, I think it would be much better if you could demonstrate it under a real KB setting. You can still perform the shortest path analysis, but using KB (e.g., Freebase) entities and relations.

[Minor comments]
I am afraid that the output gate illustrated in Figure 1 is a bit confusing. There should be only one output, depending on when the search process is terminated.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJXTf9Bxg
ICLR.cc/2017/conference
2017
Conditional Image Synthesis With Auxiliary Classifier GANs
["Augustus Odena", "Christopher Olah", "Jonathon Shlens"]
Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
["Deep learning"]
ABSTRACT

Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128×128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128×128 samples are more than twice as discriminable as artificially resized 32×32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.

1 INTRODUCTION

Characterizing the structure of natural images has been a rich research endeavor. Natural images obey intrinsic invariances and exhibit multi-scale statistical structures that have historically been difficult to quantify (Simoncelli & Olshausen, 2001). Recent advances in machine learning offer an opportunity to substantially improve the quality of image models. Improved image models advance the state-of-the-art in image denoising (Ballé et al., 2015), compression (Toderici et al., 2016), in-painting (van den Oord et al., 2016a), and super-resolution (Ledig et al., 2016). Better models of natural images also improve performance in semi-supervised learning tasks (Kingma et al., 2014; Springenberg, 2015; Odena, 2016; Salimans et al., 2016) and reinforcement learning problems (Blundell et al., 2016).

One method for understanding natural image statistics is to build a system that synthesizes images de novo. There are several promising approaches for building image synthesis models. Variational autoencoders (VAEs) maximize a variational lower bound on the log-likelihood of the training data (Kingma & Welling, 2013; Rezende et al., 2014). VAEs are straightforward to train but introduce potentially restrictive assumptions about the approximate posterior distribution (but see Rezende & Mohamed (2015); Kingma et al. (2016)). Autoregressive models dispense with latent variables and directly model the conditional distribution over pixels (van den Oord et al., 2016a;b). These models produce convincing samples but are costly to sample from and do not provide a latent representation. Invertible density estimators transform latent variables directly using a series of parameterized functions constrained to be invertible (Dinh et al., 2016). This technique allows for exact log-likelihood computation and exact inference, but the invertibility constraint is restrictive.

Generative adversarial networks (GANs) offer a distinct and promising approach that focuses on a game-theoretic formulation for training an image synthesis model (Goodfellow et al., 2014). Recent work has shown that GANs can produce convincing image samples on datasets with low variability and low resolution (Denton et al., 2015; Radford et al., 2015).
However, GANs struggle to generate globally coherent, high resolution samples, particularly from datasets with high variability. Moreover, a theoretical understanding of GANs is an on-going research topic (Uehara et al., 2016; Mohamed & Lakshminarayanan, 2016).

* Work completed as a participant in the 2016-2017 Google Brain Residency program.

Figure 1: 128×128 resolution samples from 5 classes (monarch butterfly, goldfinch, daisy, grey whale, redshank) taken from an AC-GAN trained on the ImageNet dataset. Note that the classes shown have been selected to highlight the success of the model and are not representative. Samples from all ImageNet classes are in the Appendix.

In this work we demonstrate that adding more structure to the GAN latent space along with a specialized cost function results in higher quality samples. We exhibit 128×128 pixel samples from all classes of the ImageNet dataset (Russakovsky et al., 2015) with increased global coherence (Figure 1). Importantly, we demonstrate quantitatively that our high resolution samples are not just naive resizings of low resolution samples. In particular, downsampling our 128×128 samples to 32×32 leads to a 50% decrease in visual discriminability. We also introduce a new metric for assessing the variability across image samples and employ this metric to demonstrate that our synthesized images exhibit diversity comparable to training data for a large fraction (84.7%) of ImageNet classes.

2 BACKGROUND

A generative adversarial network (GAN) consists of two neural networks trained in opposition to one another. The generator G takes as input a random noise vector z and outputs an image X_fake = G(z). The discriminator D receives as input either a training image or a synthesized image from the generator and outputs a probability distribution P(S | X) = D(X) over possible image sources. The discriminator is trained to maximize the log-likelihood it assigns to the correct source:

L = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)]

The generator is trained to minimize that same quantity.

The basic GAN framework can be augmented using side information. One strategy is to supply both the generator and discriminator with class labels in order to produce class conditional samples (Mirza & Osindero, 2014). Class conditional synthesis can significantly improve the quality of generated samples (van den Oord et al., 2016b). Richer side information such as image captions and bounding box localizations may improve sample quality further (Reed et al., 2016a;b).

Instead of feeding side information to the discriminator, one can task the discriminator with reconstructing side information. This is done by modifying the discriminator to contain an auxiliary decoder network[1] that outputs the class label for the training data (Odena, 2016; Salimans et al., 2016) or a subset of the latent variables from which the samples are generated (Chen et al., 2016). Forcing a model to perform additional tasks is known to improve performance on the original task (e.g. Sutskever et al. (2014); Szegedy et al. (2014); Ramsundar et al. (2016)). In addition, an auxiliary decoder could leverage pre-trained discriminators (e.g. image classifiers) for further improving the synthesized images (Nguyen et al., 2016). Motivated by these considerations, we introduce a model that combines both strategies for leveraging side information.
That is, the model proposed below is class conditional, but with an auxiliary decoder that is tasked with reconstructing class labels.

Figure 2: A comparison of several GAN architectures (the Conditional GAN of Mirza & Osindero (2014), the Semi-Supervised GAN of Odena (2016) and Salimans et al. (2016), and InfoGAN of Chen et al. (2016)) with the proposed AC-GAN architecture.

3 AC-GANS

We propose a variant of the GAN architecture which we call an auxiliary classifier GAN (or AC-GAN; see Figure 2). In the AC-GAN, every generated sample has a corresponding class label, c ~ p_c, in addition to the noise z. G uses both to generate images X_fake = G(c, z). The discriminator gives both a probability distribution over sources and a probability distribution over the class labels, P(S | X), P(C | X) = D(X). The objective function has two parts: the log-likelihood of the correct source, L_S, and the log-likelihood of the correct class, L_C.

L_S = E[log P(S = real | X_real)] + E[log P(S = fake | X_fake)]
L_C = E[log P(C = c | X_real)] + E[log P(C = c | X_fake)]

D is trained to maximize L_S + L_C while G is trained to maximize L_C − L_S. AC-GANs learn a representation for z that is independent of class label (e.g. Kingma et al. (2014)).

Early experiments demonstrated that increasing the number of classes trained on while holding the model fixed decreased the quality of the model outputs (Appendix B). The structure of the AC-GAN model permits separating large datasets into subsets by class and training a generator and discriminator for each subset. We exploit this property in our experiments to train across the entire ImageNet data set.

4 RESULTS

We train several AC-GAN models on the ImageNet data set (Russakovsky et al., 2015). Broadly speaking, the architecture of the generator G is a series of 'deconvolution' layers that transform the noise z and class c into an image (Odena et al., 2016). We train two variants of the model architecture for generating images at 128×128 and 64×64 spatial resolutions. The discriminator D is a deep convolutional neural network with a Leaky ReLU nonlinearity (Maas et al., 2013). See Appendix A for more details. As mentioned earlier, we find that reducing the variability introduced by all 1000 classes of ImageNet significantly improves the quality of training. We train 100 AC-GAN models, each on images from just 10 classes, for 50000 mini-batches of size 100.

Evaluating the quality of image synthesis models is challenging due to the variety of probabilistic criteria (Theis et al., 2015) and the lack of a perceptually meaningful image similarity metric. Nonetheless, in subsequent sections we attempt to measure the quality of the AC-GAN by building several ad-hoc measures for image sample discriminability and diversity. Our hope is that this work might provide quantitative measures that may be used to aid training and subsequent development of image synthesis models.

[1] Alternatively, one can force the discriminator to work with the joint distribution (X, z) and train a separate inference network that computes q(z | X) (Dumoulin et al., 2016; Donahue et al., 2016).
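Spelled out, the two objectives above amount to standard cross-entropy terms. The sketch below is a literal PyTorch transcription; the function and tensor names are ours rather than from the authors' code, and practical implementations typically substitute the non-saturating generator loss and detach fake images for the discriminator step.

import torch
import torch.nn.functional as F

def ac_gan_losses(src_real, src_fake, cls_real, cls_fake, labels):
    """src_*: (N, 1) source logits from D; cls_*: (N, K) class logits from D;
    labels: (N,) LongTensor holding the conditioning classes c."""
    ones = torch.ones_like(src_real)
    zeros = torch.zeros_like(src_fake)
    # L_S: expected log-likelihood of the correct source (negated BCE)
    l_s = -(F.binary_cross_entropy_with_logits(src_real, ones)
            + F.binary_cross_entropy_with_logits(src_fake, zeros))
    # L_C: expected log-likelihood of the correct class on real and fake images
    l_c = -(F.cross_entropy(cls_real, labels)
            + F.cross_entropy(cls_fake, labels))
    d_loss = -(l_s + l_c)   # D maximizes L_S + L_C
    g_loss = -(l_c - l_s)   # G maximizes L_C - L_S
    return d_loss, g_loss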
Figure 3: Generating high resolution images improves discriminability. Top: Training data and synthesized images from the zebra class resized to a lower spatial resolution (16×16, 32×32, 64×64, 128×128, and 256×256) and subsequently artificially resized to the original resolution; the corresponding Inception accuracies were 0%, 0%, 42%, 76%, and 76% for one row and 0%, 7%, 62%, 94%, and 94% for the other. Bottom Left: Summary of accuracies across varying spatial resolutions for training data and image samples from 64×64 and 128×128 models. Error bars measure the standard deviation across 10 subsets of images. Dashed lines highlight the accuracy at the output spatial resolution of the model. The training data (clipped) achieves accuracies of 24%, 54%, 81% and 81% at resolutions of 32, 64, 128, and 256 respectively. Bottom Right: Comparison of accuracy scores at 128×128 and 32×32 spatial resolutions (x and y axis, respectively). Each point represents an ImageNet class. 84.4% of the classes are below the line of equality. The green dot corresponds to the zebra class.

4.1 GENERATING HIGH RESOLUTION IMAGES IMPROVES DISCRIMINABILITY

Building a class-conditional image synthesis model necessitates measuring the extent to which synthesized images appear to belong to the intended class. In particular, we would like to know that a high resolution sample is not just a naive resizing of a low resolution sample. Consider a simple experiment: pretend there exists a model that synthesizes 32×32 images. One can trivially increase the resolution of synthesized images by performing bilinear interpolation. This would yield higher resolution images, but these images would just be blurry versions of the low resolution images that are not discriminable. Hence, the goal of an image synthesis model is not simply to produce high resolution images, but to produce high resolution images that are more discriminable than low resolution images.

To measure discriminability, we feed synthesized images to a pre-trained Inception network (Szegedy et al., 2015) and report the fraction of the samples for which the Inception network assigned the correct label.[2] We calculate this accuracy measure on a series of real and synthesized images which have had their spatial resolution artificially decreased by bilinear interpolation (Figure 3, top panels). Note that as the spatial resolution is decreased, the accuracy decreases, indicating that the resulting images contain less class information (Figure 3, scores below top panels). We summarize this finding across all 1000 ImageNet classes for the ImageNet training data (black), a 128×128 resolution AC-GAN (red) and a 64×64 resolution AC-GAN (blue) in Figure 3 (bottom, left). The black curve (clipped) provides an upper bound on the discriminability of real images.

The goal of this analysis is to show that synthesizing higher resolution images leads to increased discriminability. The 128×128 model achieves an accuracy of 10.1% ± 2.0%, versus 7.0% ± 2.0% with samples resized to 64×64 and 5.0% ± 2.0% with samples resized to 32×32. In other words, downsizing the outputs of the AC-GAN to 32×32 and 64×64 decreases visual discriminability by 50% and 38% respectively. Furthermore, 84.4% of the ImageNet classes have higher accuracy at 128×128 than at 32×32 (Figure 3, bottom left).

[2] One could also use the Inception score (Salimans et al., 2016), but our method has several advantages: accuracy figures are easier to interpret than exponentiated KL-divergences; accuracy may be assessed for individual classes; and accuracy measures whether a class-conditional model generated samples from the intended class. To compute the Inception accuracy, we modified a version of Inception-v3 supplied in https://github.com/openai/improved-gan/.
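The resolution sweep behind Figure 3 reduces to downsampling, resizing back, and classifying. A minimal sketch follows, assuming a generic pretrained classifier that maps NCHW image batches to class logits; this is our reconstruction, not the paper's evaluation code.

import torch
import torch.nn.functional as F

@torch.no_grad()
def inception_accuracy(images, labels, classifier, low_res):
    """Downsample to low_res x low_res, resize back to the original size with
    bilinear interpolation, then report top-1 accuracy of the classifier."""
    h, w = images.shape[-2:]
    small = F.interpolate(images, size=(low_res, low_res),
                          mode='bilinear', align_corners=False)
    back = F.interpolate(small, size=(h, w),
                         mode='bilinear', align_corners=False)
    preds = classifier(back).argmax(dim=1)       # predicted class per image
    return (preds == labels).float().mean().item()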
We performed the same analysis on an AC-GAN trained to 64×64 spatial resolution. This model achieved less discriminability than the 128×128 AC-GAN model. Accuracies from the 64×64 model plateau at a 64×64 spatial resolution, consistent with previous results. Finally, the 64×64 resolution model achieves less discriminability at 64 spatial resolution than the 128×128 model.

4.2 MEASURING THE DIVERSITY OF GENERATED IMAGES

An image synthesis model is not very interesting if it only outputs one image. Indeed, a well-known failure mode of GANs is that the generator will collapse and output a single prototype that maximally fools the discriminator (Goodfellow et al., 2014; Salimans et al., 2016). A class-conditional model of images is not very interesting if it only outputs one image per class. The Inception accuracy cannot measure whether a model has collapsed. A model that simply memorized one example from each ImageNet class would do very well by this metric. Thus, we seek a complementary metric to explicitly evaluate the intra-class diversity of samples generated by the AC-GAN.

Several methods exist for quantitatively evaluating image similarity by attempting to predict human perceptual similarity judgements. The most successful of these is multi-scale structural similarity (MS-SSIM) (Wang et al., 2004b; Ma et al., 2016). MS-SSIM is a multi-scale variant of a well-characterized perceptual similarity metric that attempts to discount aspects of an image that are not important for human perception (Wang et al., 2004a). MS-SSIM values range between 0.0 and 1.0; higher MS-SSIM values correspond to perceptually more similar images. As a proxy for image diversity, we measure the MS-SSIM scores between randomly chosen pairs of images within a given class. Samples from classes that have higher diversity result in lower mean MS-SSIM scores (Figure 4, left columns); samples from classes with lower diversity have higher mean MS-SSIM scores (Figure 4, right columns). Training images from the ImageNet training data contain a variety of mean MS-SSIM scores across the classes, indicating the variability of image diversity in ImageNet classes (Figure 5, left panel, x-axis). Note that the highest mean MS-SSIM score (indicating the least variability) is 0.25 for the training data.

We calculate the mean MS-SSIM score for all 1000 ImageNet classes generated by the AC-GAN model. We track this value during training to identify whether the generator has collapsed (Figure 5, right panel, red curve). We also employ this metric to compare the diversity of the training images to the samples from the GAN model after training has completed. Figure 5 (left) plots the mean MS-SSIM values for image samples and training data broken up by class. The blue line is the line of equality. Out of the 1000 classes, we find that 847 have mean sample MS-SSIM scores below that of the maximum MS-SSIM for the training data. In other words, 84.7% of classes have sample variability that exceeds that of the least variable class from the ImageNet training data.

4.3 GENERATED IMAGES ARE BOTH DIVERSE AND DISCRIMINABLE

We have presented quantitative metrics demonstrating that AC-GAN samples may be diverse and discriminable, but we have yet to examine how these metrics interact. Figure 6 shows the joint distribution of Inception accuracies and MS-SSIM scores across all classes. Inception accuracy and MS-SSIM are anti-correlated (r² = −0.16). In fact, 74% of the classes with low diversity (MS-SSIM ≥ 0.25) contain Inception accuracies ≤ 1%.
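The per-class diversity measurement described above reduces to sampling random within-class pairs and averaging their MS-SSIM. A minimal sketch follows, with the MS-SSIM computation itself delegated to an assumed library callable returning a similarity in [0, 1]; nothing here is from the paper's code.

import random
import numpy as np

def mean_pairwise_ms_ssim(images, ms_ssim, n_pairs=100, seed=0):
    """Mean MS-SSIM over randomly chosen image pairs within one class.
    `ms_ssim` is any two-image similarity callable (e.g. a library
    implementation); `images` must contain at least two images."""
    rng = random.Random(seed)
    pairs = [rng.sample(range(len(images)), 2) for _ in range(n_pairs)]
    return float(np.mean([ms_ssim(images[i], images[j]) for i, j in pairs]))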
These results suggest that GANs that drop modes are most likely to produce low quality images. Conversely, 78% of classes with high diversity (MS-SSIM < 0.25) have Inception accuracies that exceed 1%. In comparison, the Inception-v3 model achieves 78.8% accuracy on average across all 1000 classes (Szegedy et al., 2015). A fraction of the classes of AC-GAN samples reach this level of accuracy. This indicates opportunity for future image synthesis models.

Figure 4: Examples of different MS-SSIM scores for four classes (hot dog, artichoke, promontory, green apple). The top and bottom rows contain AC-GAN samples and training data, respectively; the sample rows score MS-SSIM = 0.11, 0.29, 0.41, and 0.90, while the training-data rows score 0.05, 0.15, 0.08, and 0.04.

Figure 5: (Left) Comparison of the mean MS-SSIM scores between pairs of images within a given class for ImageNet training data and samples from the GAN (the blue line is equality). The horizontal red line marks the maximum MS-SSIM value across all ImageNet classes. Each point is an individual class. The mean standard deviation of scores across the training data and the samples was 0.06 and 0.08 respectively. Scores below the red line (84.7% of classes) arise from classes where GAN training largely succeeded. (Right) Intra-class MS-SSIM for selected ImageNet classes throughout a training run. Classes that successfully train tend to have decreasing mean MS-SSIM scores, to a point.

4.4 COMPARISON TO PREVIOUS RESULTS

Previous quantitative results for image synthesis models trained on ImageNet are reported in terms of log-likelihood (van den Oord et al., 2016a;b). Log-likelihood is a coarse and potentially inaccurate measure of sample quality (Theis et al., 2015). Additionally, log-likelihood is intractable to compute for GANs. Instead we compare with previous state-of-the-art results on CIFAR-10 using a lower spatial resolution (32×32). Following the procedure in Salimans et al. (2016), we compute the Inception score[3] for 50000 samples from an AC-GAN with resolution 32×32, split into 10 groups at random. We also compute the Inception score for 25000 extra samples, split into 5 groups at random. We select the best model based on the first score and report the second score. Performing a grid search across 27 hyperparameter configurations, we are able to achieve a score of 8.25 ± 0.07, compared to the state of the art of 8.09 ± 0.07 (Salimans et al., 2016). Moreover, we accomplish this without employing any of the new techniques introduced in that work (i.e. virtual batch normalization, minibatch discrimination, and label smoothing). This provides additional evidence that AC-GANs are effective even without the benefit of class splitting (Appendix B).

[3] The Inception score is given by exp(E_x[D_KL(p(y|x) || p(y))]), where x is a particular image, p(y|x) is the conditional output distribution over the classes in a pre-trained Inception network (Szegedy et al., 2014) given x, and p(y) is the marginal distribution over the classes.

Figure 6: Inception accuracy vs. MS-SSIM for all 1000 ImageNet classes (r² = −0.16). Samples from AC-GAN models do not achieve variability at the expense of discriminability.

4.5 SEARCHING FOR SIGNATURES OF OVERFITTING

One possibility that must be investigated is that the AC-GAN has overfit on the training data. As a first check that the network does not memorize the training data, we identify the nearest neighbors of image samples in the training data, measured by L1 distance in pixel space (Figure 7).
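The nearest-neighbor check is a one-line search over flattened pixels. A sketch with assumed array shapes (not from the paper's code):

import numpy as np

def l1_nearest_neighbor(sample, train_images):
    """Index of the training image closest to `sample` in pixel-space L1.
    `train_images` has shape (N, ...); `sample` matches a single image."""
    flat = train_images.reshape(len(train_images), -1)
    dists = np.abs(flat - sample.reshape(1, -1)).sum(axis=1)
    return int(dists.argmin())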
The nearest neighbors from the training data do not resemble the corresponding samples. This provides evidence that the AC-GAN is not merely memorizing the training data.

Figure 7: Nearest neighbor analysis. (Left) Samples from a single ImageNet class. (Right) Corresponding nearest neighbor (L1 distance) in the training data for each sample.

A more sophisticated method for understanding the degree of overfitting in a model is to explore that model's latent space by interpolation. In an overfit model one might observe discrete transitions in the interpolated images and regions in latent space that do not correspond to meaningful images (Bengio et al., 2012; Radford et al., 2015; Dinh et al., 2016). Figure 8 (left) highlights interpolations in the latent space between several image samples. Notably, the generator learned that certain combinations of dimensions correspond to semantically meaningful features (e.g. size of the arch, length of a bird's beak) and there are no discrete transitions or 'holes' in the latent space. A second method for exploring the latent space of the AC-GAN is to exploit the structure of the model. The AC-GAN factorizes its representation into class information and a class-independent latent representation z. Sampling the AC-GAN with z fixed but altering the class label corresponds to generating samples with the same 'style' across multiple classes (Kingma et al., 2014). Figure 8 (right) shows samples from 8 bird classes. Elements of the same row have the same z. Although the class changes for each column, elements of the global structure (e.g. position, layout, background) are preserved, indicating that the AC-GAN can represent certain types of 'compositionality'.

Figure 8: (Left) Latent space interpolations for selected ImageNet classes. The left-most and right-most columns show three pairs of image samples, each pair from a distinct class. Intermediate columns highlight linear interpolations in the latent space between these three pairs of images. (Right) Class-independent information contains global structure about the synthesized image. Each column is a distinct bird class while each row corresponds to a fixed latent code z.

5 DISCUSSION

This work introduced the AC-GAN architecture and demonstrated that AC-GANs can generate globally coherent ImageNet samples. We provided a new quantitative metric for image discriminability as a function of spatial resolution. Using this metric we demonstrated that our samples are more discriminable than those from a model that generates lower resolution images and performs a naive resize operation. We also analyzed the diversity of our samples with respect to the training data and provided some evidence that the image samples from the majority of classes are comparable in diversity to ImageNet training data. We hope that these metrics might provide quantitative measures of sample quality for evaluating and improving future image synthesis models.
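As a concrete instance of such a measure, the Inception score used in Section 4.4 (footnote 3) can be computed directly from a classifier's predicted class distributions. A numpy sketch of the formula exp(E_x[D_KL(p(y|x) || p(y))]), with function and argument names of our choosing:

import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, K) class probabilities p(y|x) from a pretrained Inception
    network for N samples. Returns exp of the mean KL to the marginal p(y)."""
    p_y = probs.mean(axis=0, keepdims=True)                      # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))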
Several directions exist for building upon this work. Much work needs to be done to improve the visual discriminability of the 128×128 resolution model. Although some synthesized image classes exhibit high Inception accuracies, the average Inception accuracy of the model (10.1% ± 2.0%) is still far below that of real training data, at 81%. One immediate opportunity for addressing this is to augment the discriminator with a pre-trained model to perform additional supervised tasks (e.g. image segmentation, Ronneberger et al. (2015)). Such techniques might allow for the synthesis of even higher resolution images with global coherence and meaningful visual content.

Improving the robustness and reliability of training a GAN is an ongoing research topic. Only 84.7% of the ImageNet classes avoided mode dropping and exhibited a diversity comparable to real training data. Training stability was vastly aided by dividing up the 1000 ImageNet classes across 100 AC-GAN models. Building a single unified model that could generate diverse samples from all 1000 classes would be an important step forward.

Image synthesis models provide a unique opportunity for performing semi-supervised learning. Namely, these models build a rich prior over natural image statistics that can be leveraged by classifiers to improve predictions on datasets for which few labels exist. The AC-GAN model can perform semi-supervised learning by simply ignoring the component of the loss arising from class labels when a label is unavailable for a given training image. Interestingly, prior work suggests that achieving good sample quality might be independent of success in semi-supervised learning (Salimans et al., 2016).

ACKNOWLEDGMENTS

We thank the developers of TensorFlow (Abadi et al., 2016). We thank Luke Metz and Vincent Dumoulin for extensive and helpful comments on drafts. We also thank Ben Poole, Sam Schoenholz, Barret Zoph, Martín Abadi, Manjunath Kudlur and Jascha Sohl-Dickstein for helpful discussions.
H1qHMAWVe
Review
6: Marginally above acceptance threshold
This paper introduces a class-conditional GAN as a generative model for images. It introduces two main diagnostic tools for training GANs: one to assess whether a model is making full use of its output resolution and another to measure the diversity of generated samples. Experiments are conducted on the CIFAR-10 and ImageNet datasets.

Pros:
+ The paper is clear and well-written.
+ Experiments performed in the relatively under-explored 128 x 128 ImageNet setting.
+ The proposed MS-SSIM diversity metric appears to be a useful tool for detecting convergence issues in class-conditional GAN models.

Cons:
- AC-GAN model itself is of limited novelty relative to other GAN approaches that condition on class.
- Diversity metric is of limited use for training non class-conditional GANs.
- No experimental comparison of AC-GAN to other class-conditional models.

To my knowledge, training GANs on large, diverse images such as 128 x 128 ImageNet images is under-explored ([1] contains just a few samples in this setting). Though the model is not very novel and a comparison to other class-conditional models is lacking, I feel the community will find the diagnostic tools and the thorough exploration of the ImageNet-trained model to be of interest.

* Section 4.2: MS-SSIM is traditionally defined for grayscale images only. How do you extend MS-SSIM to color images in your work? Were they computed channel-wise across R, G, and B?

* Section 4.4: It is difficult to tell whether a single AC-GAN was trained for all of CIFAR-10 or one for each group. If single, why were the samples split into groups for computing Inception Score? And if multiple, the comparison to Salimans et al. is not a direct one. Also, it would be helpful to include the real data Inception score as a point of comparison.

* Appendix D: The caption of Figure 9 states that the same number of training steps was taken for each model. From this it seems possible that the models with more classes simply did not converge yet.

[1] Salimans, Tim, et al. "Improved techniques for training GANs." Advances in Neural Information Processing Systems. 2016.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJXTf9Bxg
ICLR.cc/2017/conference
2017
Conditional Image Synthesis With Auxiliary Classifier GANs
["Augustus Odena", "Christopher Olah", "Jonathon Shlens"]
Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
["Deep learning"]
ABSTRACTSynthesizing high resolution photorealistic images has been a long-standing chal-lenge in machine learning. In this paper we introduce new methods for the im-proved training of generative adversarial networks (GANs) for image synthesis.We construct a variant of GANs employing label conditioning that results in128128 resolution image samples exhibiting global coherence. We expandon previous work for image quality assessment to provide two new analyses forassessing the discriminability and diversity of samples from class-conditional im-age synthesis models. These analyses demonstrate that high resolution samplesprovide class information not present in low resolution samples. Across 1000ImageNet classes, 128128samples are more than twice as discriminable as ar-tificially resized 3232samples. In addition, 84.7% of the classes have samplesexhibiting diversity comparable to real ImageNet data.1 I NTRODUCTIONCharacterizing the structure of natural images has been a rich research endeavor. Natural imagesobey intrinsic invariances and exhibit multi-scale statistical structures that have historically beendifficult to quantify (Simoncelli & Olshausen, 2001). Recent advances in machine learning of-fer an opportunity to substantially improve the quality of image models. Improved image modelsadvance the state-of-the-art in image denoising (Ball ́e et al., 2015), compression (Toderici et al.,2016), in-painting (van den Oord et al., 2016a), and super-resolution (Ledig et al., 2016). Bet-ter models of natural images also improve performance in semi-supervised learning tasks (Kingmaet al., 2014; Springenberg, 2015; Odena, 2016; Salimans et al., 2016) and reinforcement learningproblems (Blundell et al., 2016).One method for understanding natural image statistics is to build a system that synthesizes imagesde novo . There are several promising approaches for building image synthesis models. Variationalautoencoders (V AEs) maximize a variational lower bound on the log-likelihood of the training data(Kingma & Welling, 2013; Rezende et al., 2014). V AEs are straightforward to train but introducepotentially restrictive assumptions about the approximate posterior distribution (but see Rezende &Mohamed (2015); Kingma et al. (2016)). Autoregressive models dispense with latent variables anddirectly model the conditional distribution over pixels (van den Oord et al., 2016a;b). These modelsproduce convincing samples but are costly to sample from and do not provide a latent representation.Invertible density estimators transform latent variables directly using a series of parameterized func-tions constrained to be invertible (Dinh et al., 2016). This technique allows for exact log-likelihoodcomputation and exact inference, but the invertibility constraint is restrictive.Generative adversarial networks (GANs) offer a distinct and promising approach that focuses on agame-theoretic formulation for training an image synthesis model (Goodfellow et al., 2014). Recentwork has shown that GANs can produce convincing image samples on datasets with low variabilityand low resolution (Denton et al., 2015; Radford et al., 2015). 
However, GANs struggle to gen-erate globally coherent, high resolution samples - particularly from datasets with high variability.Moreover, a theoretical understanding of GANs is an on-going research topic (Uehara et al., 2016;Mohamed & Lakshminarayanan, 2016).Work completed as a participant in the 2016-2017 Google Brain Residency program.1arXiv:1610.09585v1 [stat.ML] 30 Oct 2016Under review as a conference paper at ICLR 2017monarch butterfly goldfinch daisy grey whale redshankFigure 1: 128128resolution samples from 5 classes taken from an AC-GAN trained on the ImageNet dataset.Note that the classes shown have been selected to highlight the success of the model and are not representative.Samples from all ImageNet classes are in the Appendix.In this work we demonstrate that that adding more structure to the GAN latent space along witha specialized cost function results in higher quality samples. We exhibit 128128pixel samplesfrom all classes of the ImageNet dataset (Russakovsky et al., 2015) with increased global coherence(Figure 1). Importantly, we demonstrate quantitatively that our high resolution samples are not justnaive resizings of low resolution samples. In particular, downsampling our 128128 samplesto3232leads to a 50% decrease in visual discriminability. We also introduce a new metricfor assessing the variability across image samples and employ this metric to demonstrate that oursynthesized images exhibit diversity comparable to training data for a large fraction (84.7%) ofImageNet classes.2 B ACKGROUNDA generative adversarial network (GAN) consists of two neural networks trained in opposition toone another. The generator Gtakes as input a random noise vector zand outputs an image Xfake=G(z). The discriminator Dreceives as input either a training image or a synthesized image fromthe generator and outputs a probability distribution P(SjX) =D(X)over possible image sources.The discriminator is trained to maximize the log-likelihood it assigns to the correct source:L=E[logP(S=realjXreal)] +E[logP(S=fakejXfake)]The generator is trained to minimize that same quantity.The basic GAN framework can be augmented using side information. One strategy is to supplyboth the generator and discriminator with class labels in order to produce class conditional samples(Mirza & Osindero, 2014). Class conditional synthesis can significantly improve the quality ofgenerated samples (van den Oord et al., 2016b). Richer side information such as image captions andbounding box localizations may improve sample quality further (Reed et al., 2016a;b).Instead of feeding side information to the discriminator, one can task the discriminator with re-constructing side information. This is done by modifying the discriminator to contain an auxiliarydecoder network1that outputs the class label for the training data (Odena, 2016; Salimans et al.,2016) or a subset of the latent variables from which the samples are generated (Chen et al., 2016).Forcing a model to perform additional tasks is known to improve performance on the original task(e.g. Sutskever et al. (2014); Szegedy et al. (2014); Ramsundar et al. (2016)). In addition, an auxil-iary decoder could leverage pre-trained discriminators (e.g. image classifiers) for further improvingthe synthesized images (Nguyen et al., 2016). Motivated by these considerations, we introduce amodel that combines both strategies for leveraging side information. 
That is, the model proposedbelow is class conditional, but with an auxiliary decoder that is tasked with reconstructing classlabels.2Under review as a conference paper at ICLR 2017(noise) (latent)(data)InfoGAN(Chen, et al., 2016) . . .(noise) (class)(data)AC-GAN(Present Work) (noise) (class)(data)Conditional GAN (Mirza & Osindero, 2014) (noise). . .(class)(data)Semi-Supervised GAN (Odena, 2016; Salimans, et al., 2016) Figure 2: A comparison of several GAN architectures with the proposed AC-GAN architecture.3 AC-GAN SWe propose a variant of the GAN architecture which we call an auxiliary classifier GAN (or AC-GAN - see Figure 2). In the AC-GAN, every generated sample has a corresponding class label, cpcin addition to the noise z.Guses both to generate images Xfake =G(c;z). The discriminatorgives both a probability distribution over sources and a probability distribution over the class labels,P(SjX); P(CjX) =D(X). The objective function has two parts: the log-likelihood of thecorrect source, LS, and the log-likelihood of the correct class, LC.LS=E[logP(S=realjXreal)] +E[logP(S=fakejXfake)]LC=E[logP(C=cjXreal)] +E[logP(C=cjXfake)]Dis trained to maximize LS+LCwhileGis trained to maximize LCLS. AC-GANs learn arepresentation for zthat is independent of class label (e.g. Kingma et al. (2014)).Early experiments demonstrated that increasing the number of classes trained on while holding themodel fixed decreased the quality of the model outputs (Appendix B). The structure of the AC-GAN model permits separating large datasets into subsets by class and training a generator anddiscriminator for each subset. We exploit this property in our experiments to train across the entireImageNet data set.4 R ESULTSWe train several AC-GAN models on the ImageNet data set (Russakovsky et al., 2015). Broadlyspeaking, the architecture of the generator Gis a series of ‘deconvolution’ layers that transform thenoisezand classcinto an image (Odena et al., 2016). We train two variants of the model architecturefor generating images at 128128and6464spatial resolutions. The discriminator Dis a deepconvolutional neural network with a Leaky ReLU nonlinearity (Maas et al., 2013). See Appendix Afor more details. As mentioned earlier, we find that reducing the variability introduced by all 1000classes of ImageNet significantly improves the quality of training. We train 100 AC-GAN models –each on images from just 10 classes – for 50000 mini-batches of size 100.Evaluating the quality of image synthesis models is challenging due to the variety of probabilis-tic criteria (Theis et al., 2015) and the lack of a perceptually meaningful image similarity metric.Nonetheless, in subsequent sections we attempt to measure the quality of the AC-GAN by buildingseveral ad-hoc measures for image sample discriminability and diversity. Our hope is that this workmight provide quantitative measures that may be used to aid training and subsequent developmentof image synthesis models.1Alternatively, one can force the discriminator to work with the joint distribution (X; z)and train a separateinference network that computes q(zjX)(Dumoulin et al., 2016; Donahue et al., 2016).3Under review as a conference paper at ICLR 201716 x 16 32 x 32 64 x 64 128 x 128 256 x 256RealFake0% 0% 42% 76% 76%0% 7% 62% 94% 94%Figure 3: Generating high resolution images improves discriminability. 
Top: Training data and synthesized im-ages from the zebra class resized to a lower spatial resolution (indicated above) and subsequently artificiallyresized to the original resolution. Inception accuracy is shown below the corresponding images. Bottom Left:Summary of accuracies across varying spatial resolutions for training data and image samples from 6464and128128models. Error bar measures standard deviation across 10 subsets of images. Dashed lines highlightthe accuracy at the output spatial resolution of the model. The training data (clipped) achieves accuracies of24%, 54%, 81% and 81% at resolutions of 32, 64, 128, and 256 respectively. Bottom Right: Comparison ofaccuracy scores at 128128and3232spatial resolutions ( xandyaxis, respectively). Each point representsan ImageNet class. 84.4% of the classes are below the line of equality. The green dot corresponds to the zebraclass.4.1 G ENERATING HIGHRESOLUTION IMAGES IMPROVES DISCRIMINABILITYBuilding a class-conditional image synthesis model necessitates measuring the extent to which syn-thesized images appear to belong to the intended class. In particular, we would like to know thata high resolution sample is not just a naive resizing of a low resolution sample. Consider a simpleexperiment: pretend there exists a model that synthesizes 3232images. One can trivially increasethe resolution of synthesized images by performing bilinear interpolation. This would yield higherresolution images, but these images would just be blurry versions of the low resolution images thatare not discriminable. Hence, the goal of an image synthesis model is not simply to produce highresolution images, but to produce high resolution images that are more discriminable than low reso-lution images.To measure discriminability, we feed synthesized images to a pre-trained Inception network(Szegedy et al., 2015) and report the fraction of the samples for which the Inception network as-signed the correct label2. We calculate this accuracy measure on a series of real and synthesized im-ages which have had their spatial resolution artificially decreased by bilinear interpolation (Figure 3,2One could also use the Inception score (Salimans et al., 2016), but our method has several advan-tages: accuracy figures are easier to interpret than exponentiated KL-divergences; accuracy may be as-sessed for individual classes; accuracy measures whether a class-conditional model generated samples from4Under review as a conference paper at ICLR 2017top panels). Note that as the spatial resolution is decreased, the accuracy decreases - indicating thatresulting images contain less class information (Figure 3, scores below top panels). We summarizedthis finding across all 1000 ImageNet classes for the ImageNet training data (black), a 128128resolution AC-GAN (red) and a 6464resolution AC-GAN (blue) in Figure 3 (bottom, left). Theblack curve (clipped) provides an upper-bound on the discriminability of real images.The goal of this analysis is to show that synthesizing higher resolution images leads to increaseddiscriminability. The 128128model achieves an accuracy of 10.1% 2.0% versus 7.0%2.0%with samples resized to 6464and 5.0%2.0% with samples resized to 3232. In other words,downsizing the outputs of the AC-GAN to 3232and6464decreases visual discriminabilityby 50% and 38% respectively. Furthermore, 84.4% of the ImageNet classes have higher accuracy at128128than at 3232(Figure 3, bottom left).We performed the same analysis on an AC-GAN trained to 6464spatial resolution. 
This modelachieved less discriminability than a 128128AC-GAN model. Accuracies from the 6464modelplateau at a 6464spatial resolution consistent with previous results. Finally, the 6464resolutionmodel achieves less discriminability at 64 spatial resolution than the 128128model.4.2 M EASURING THE DIVERSITY OF GENERATED IMAGESAn image synthesis model is not very interesting if it only outputs one image. Indeed, a well-knownfailure mode of GANs is that the generator will collapse and output a single prototype that maximallyfools the discriminator (Goodfellow et al., 2014; Salimans et al., 2016). A class-conditional modelof images is not very interesting if it only outputs one image per class. The Inception accuracy cannot measure whether a model has collapsed. A model that simply memorized one example fromeach ImageNet class would do very well by this metric. Thus, we seek a complementary metric toexplicitly evaluate the intra-class diversity of samples generated by the AC-GAN.Several methods exist for quantitatively evaluating image similarity by attempting to predict humanperceptual similarity judgements. The most successful of these is multi-scale structural similarity(MS-SSIM) (Wang et al., 2004b; Ma et al., 2016). MS-SSIM is a multi-scale variant of a well-characterized perceptual similarity metric that attempts to discount aspects of an image that are notimportant for human perception (Wang et al., 2004a). MS-SSIM values range between 0.0 and 1.0;higher MS-SSIM values correspond to perceptually more similar images. As a proxy for imagediversity, we measure the MS-SSIM scores between randomly chosen pairs of images within agiven class. Samples from classes that have higher diversity result in lower mean MS-SSIM scores(Figure 4, left columns); samples from classes with lower diversity have higher mean MS-SSIMscores (Figure 4, right columns). Training images from the ImageNet training data contain a varietyof mean MS-SSIM scores across the classes indicating the variability of image diversity in ImageNetclasses (Figure 5, left panel, x-axis). Note that the highest mean MS-SSIM score (indicating the leastvariability) is 0.25 for the training data.We calculate the mean MS-SSIM score for all 1000 ImageNet classes generated by the AC-GANmodel. We track this value during training to identify whether the generator has collapsed (Figure 5,right panel, red curve). We also employ this metric to compare the diversity of the training imagesto the samples from the GAN model after training has completed. Figure 5 (left) plots the meanMS-SSIM values for image samples and training data broken up by class. The blue line is the lineof equality. Out of the 1000 classes, we find that 847 have mean sample MS-SSIM scores belowthat of the maximum MS-SSIM for the training data. In other words, 84.7% of classes have samplevariability that exceeds that of the least variable class from the ImageNet training data.4.3 G ENERATED IMAGES ARE BOTH DIVERSE AND DISCRIMINABLEWe have presented quantitative metrics demonstrating that AC-GAN samples may be diverse anddiscriminable but we have yet to examine how these metrics interact. Figure 6 shows the jointdistribution of Inception accuracies and MS-SSIM scores across all classes. Inception accuracyand MS-SSIM are anti-correlated ( r2=0:16). In fact, 74% of the classes with low diversity (MS-SSIM0:25) contain Inception accuracies 1%. These results suggest that GANs that drop modesthe intended class. 
To compute the Inception accuracy, we modified a version of Inception-v3 supplied inhttps://github.com/openai/improved-gan/ .5Under review as a conference paper at ICLR 2017hot dog artichoke promontory green appleMS-SSIM = 0. 11 MS-SSIM = 0.29 MS-SSIM = 0.41 MS-SSIM = 0.90MS-SSIM = 0.05 MS-SSIM = 0.15 MS-SSIM = 0.08 MS-SSIM = 0.04real synthesizedFigure 4: Examples of different MS-SSIM scores. The top and bottom rows contain AC-GAN samples andtraining data, respectively.Figure 5: (Left) Comparison of the mean MS-SSIM scores between pairs of images within a given class forImageNet training data and samples from the GAN (blue line is equality). The horizontal red line marks themaximum MS-SSIM value across all ImageNet classes. Each point is an individual class. The mean standarddeviation of scores across the training data and the samples was 0.06 and 0.08 respectively. Scores belowthe red line (84.7% of classes) arise from classes where GAN training largely succeeded. (Right) Intra-classMS-SSIM for selected ImageNet classes throughout a training run. Classes that successfully train tend to havedecreasing mean MS-SSIM scores, to a point.are most likely to produce low quality images. Conversely, 78% of classes with high diversity (MS-SSIM<0:25) have Inception accuracies that exceed 1%. In comparison, the Inception-v3 modelachieves 78.8% accuracy on average across all 1000 classes (Szegedy et al., 2015). A fraction of theclasses AC-GAN samples reach this level of accuracy. This indicates opportunity for future imagesynthesis models.4.4 C OMPARISON TO PREVIOUS RESULTSPrevious quantitative results for image synthesis models trained on ImageNet are reported in termsof log-likelihood (van den Oord et al., 2016a;b). Log-likelihood is a coarse and potentially inaccu-rate measure of sample quality (Theis et al., 2015). Addditionally, log-likelihood is intractable tocompute for GANs. Instead we compare with previous state-of-the-art results on CIFAR-10 using alower spatial resolution ( 3232). Following the procedure in Salimans et al. (2016), we compute6Under review as a conference paper at ICLR 2017Figure 6: Inception accuracy vs MS-SSIM for all 1000 ImageNet classes ( r2=0:16). Samples from AC-GAN models do not achieve variability at the expense of discriminability.the Inception score3for 50000 samples from an AC-GAN with resolution ( 3232), split into 10groups at random. We also compute the Inception score for 25000 extra samples, split into 5 groupsat random. We select the best model based on the first score and report the second score. Performinga grid search across 27 hyperparameter configurations, we are able to achieve a score of 8.25 0.07compared to state of the art 8.09 0.07 (Salimans et al., 2016). Moreover, we accomplish this with-out employing any of the new techniques introduced in that work (i.e. virtual batch normalization,minibatch discrimination, and label smoothing). This provides additional evidence that AC-GANsare effective even without the benefit of class splitting (Appendix B).4.5 S EARCHING FOR SIGNATURES OF OVERFITTINGOne possibility that must be investigated is that the AC-GAN has overfit on the training data. As afirst check that the network does not memorize the training data, we identify the nearest neighborsof image samples in the training data measured by L1 distance in pixel space (Figure 7). The nearestneighbors from the training data do not resemble the corresponding samples. 
4.5 SEARCHING FOR SIGNATURES OF OVERFITTING

One possibility that must be investigated is that the AC-GAN has overfit on the training data. As a first check that the network does not memorize the training data, we identify the nearest neighbors of image samples in the training data, measured by L1 distance in pixel space (Figure 7). The nearest neighbors from the training data do not resemble the corresponding samples. This provides evidence that the AC-GAN is not merely memorizing the training data.

Figure 7: Nearest neighbor analysis. (Left) Samples from a single ImageNet class. (Right) Corresponding nearest neighbor (L1 distance) in training data for each sample.

A more sophisticated method for understanding the degree of overfitting in a model is to explore that model's latent space by interpolation. In an overfit model one might observe discrete transitions in the interpolated images and regions in latent space that do not correspond to meaningful images (Bengio et al., 2012; Radford et al., 2015; Dinh et al., 2016). Figure 8 (left) highlights interpolations in the latent space between several image samples. Notably, the generator learned that certain combinations of dimensions correspond to semantically meaningful features (e.g. size of the arch, length of a bird's beak) and there are no discrete transitions or 'holes' in the latent space. A second method for exploring the latent space of the AC-GAN is to exploit the structure of the model. The AC-GAN factorizes its representation into class information and a class-independent latent representation z. Sampling the AC-GAN with z fixed but altering the class label corresponds to generating samples with the same 'style' across multiple classes (Kingma et al., 2014). Figure 8 (right) shows samples from 8 bird classes. Elements of the same row have the same z. Although the class changes for each column, elements of the global structure (e.g. position, layout, background) are preserved, indicating that AC-GAN can represent certain types of 'compositionality'.

Figure 8: (Left) Latent space interpolations for selected ImageNet classes. Left-most and right-most columns show three pairs of image samples, each pair from a distinct class. Intermediate columns highlight linear interpolations in the latent space between these three pairs of images. (Right) Class-independent information contains global structure about the synthesized image. Each column is a distinct bird class while each row corresponds to a fixed latent code z.
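A minimal sketch of these two latent-space probes, assuming a trained `generator(z, c)` that maps latent vectors and class labels to images (an assumed interface; all names are illustrative):

```python
# Sketch: probing the latent space of a trained conditional generator.
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linear interpolation between two latent vectors (Figure 8, left)."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_a[None, :] + alphas * z_b[None, :]

def style_sweep(generator, z, class_ids):
    """Fix z and vary the class label (one row of Figure 8, right)."""
    return [generator(z[None, :], np.array([c])) for c in class_ids]
```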
5 DISCUSSION

This work introduced the AC-GAN architecture and demonstrated that AC-GANs can generate globally coherent ImageNet samples. We provided a new quantitative metric for image discriminability as a function of spatial resolution. Using this metric we demonstrated that our samples are more discriminable than those from a model that generates lower resolution images and performs a naive resize operation. We also analyzed the diversity of our samples with respect to the training data and provided some evidence that the image samples from the majority of classes are comparable in diversity to ImageNet training data. We hope that these metrics might provide quantitative measures of sample quality for evaluating and improving future image synthesis models.

Several directions exist for building upon this work. Much work needs to be done to improve the visual discriminability of the 128×128 resolution model. Although some synthesized image classes exhibit high Inception accuracies, the average Inception accuracy of the model (10.1% ± 2.0%) is still far below that of real training data at 81%. One immediate opportunity for addressing this is to augment the discriminator with a pre-trained model to perform additional supervised tasks (e.g. image segmentation, Ronneberger et al. (2015)). Such techniques might allow for the synthesis of even higher resolution images with global coherence and meaningful visual content.

Improving the robustness and reliability of training a GAN is an ongoing research topic. Only 84.7% of the ImageNet classes avoided mode dropping and exhibited a diversity comparable to real training data. Training stability was vastly aided by dividing up the 1000 ImageNet classes across 100 AC-GAN models. Building a single unified model that could generate diverse samples from all 1000 classes would be an important step forward.

Image synthesis models provide a unique opportunity for performing semi-supervised learning. Namely, these models build a rich prior over natural image statistics that can be leveraged by classifiers to improve predictions on datasets for which few labels exist. The AC-GAN model can perform semi-supervised learning by simply ignoring the component of the loss arising from class labels when a label is unavailable for a given training image. Interestingly, prior work suggests that achieving good sample quality might be independent of success in semi-supervised learning (Salimans et al., 2016).

ACKNOWLEDGMENTS

We thank the developers of TensorFlow (Abadi et al., 2016). We thank Luke Metz and Vincent Dumoulin for extensive and helpful comments on drafts. We also thank Ben Poole, Sam Schoenholz, Barret Zoph, Martín Abadi, Manjunath Kudlur and Jascha Sohl-Dickstein for helpful discussions.
S1pqbrY4x
Review
3: Clear rejection
Apologies for the late review. This submission proposes a method for class-conditional generative image modeling using auxiliary classifiers. Compared to normal GANs, the generator also receives a randomly sampled class label c from the class distribution. The discriminator has two outputs and two corresponding objectives: determine whether a sample is real or generated, and independently predict the (real or sampled) class label corresponding to the sample. Figure 2 nicely illustrates related methods - this particular method bears similarities to InfoGANs and semi-supervised GANs. Compared to InfoGANs, this method also encourages correspondence between the latent c and the real class labels for the real examples (whereas InfoGANs are presented as fully unsupervised). The authors attempt to evaluate the method quantitatively by looking at the discriminability and diversity of samples. It is found - not surprisingly - that higher resolution improves discriminability (because more information is present).

Discriminability: Figure 3 doesn't have legends, so it is a bit hard to understand what is going on. Furthermore, my understanding is that when evaluating discriminability the authors downsample and then bicubically upsample the image, which is much more like a blurring, very different from retraining all the models to work on low resolution in the first place.

Diversity: The authors try to quantitatively evaluate the diversity of samples by measuring the average MS-SSIM between randomly selected pairs of points within each class. I think this method is significantly flawed and limited, for reasons mentioned in (Theis et al., 2015, A note on the evaluation…). In its behaviour, MS-SSIM is not that dissimilar from Euclidean distance - although it is nonlinear and is bounded between -1 and 1. Evaluating the diversity/entropy of samples in high dimensions is very hard, especially if the distributions involved are non-trivial, for example concentrated around manifolds. Consider for example a generative model which randomly samples just two images. Assuming that the MS-SSIM between these two images is -1, this generative model can easily achieve an average MS-SSIM score of 0, implying the conclusion that this model has more diversity than the training data itself. Conversely, SSIM is designed not to be sensitive to contrast and average pixel intensity, so if a model is diverse in this sense, that will be ignored by this measure.

Overall, the paper proposes a new way to incorporate class labels into training GAN-type models. As far as I know the particular algorithm is novel, but I consider it incremental compared to what has been done before. I think the proposed evaluation metrics are flawed, especially when evaluating the diversity of the samples, for the aforementioned reasons.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJXTf9Bxg
ICLR.cc/2017/conference
2017
Conditional Image Synthesis With Auxiliary Classifier GANs
["Augustus Odena", "Christopher Olah", "Jonathon Shlens"]
Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
["Deep learning"]
rJPBH1MEg
review
6: Marginally above acceptance threshold
This is a clear, easy to read, highly relevant paper that improves GAN training for images and explores evaluation criteria on GANs. The main contributions are as follows:
- Adding an auxiliary classifier head to a GAN discriminator and training a classification objective in addition to the real/fake objective improves performance. The generator is conditioned on a 1-hot encoding of the class and is trained to generate the specified class.
- Training different models on different subsets of ImageNet classes improves performance.
- They motivate evaluating GAN images by using a perceptual similarity metric (MS-SSIM) on pairs of samples to quantify diversity in the samples (and detect mode collapse).
- They show this metric correlates with a discriminability metric (classification accuracy of a pre-trained ImageNet model on generated samples).

The overall novelty of this approach is somewhat lacking, in that previous methods have proposed training a classifier head on the discriminator, and the discriminability metric proposed is simply the Inception score of [1] except with class information. However, I think there is still a contribution to be made by putting these tricks together and successfully demonstrating image synthesis gains.

Questions for the authors:
(1) Why do you think splitting the ImageNet training into 100 different models improves performance? Is the issue with the representation of the class? In other words, if an encoding more meaningful than a 1-hot vector was used, do you still think 100 models would be needed? Ideally we should hope that a generative model can leverage information from different classes to help with the generation of a particular class; also, text-image synthesis models [2] have been quite successful when trained on diverse datasets (and these are conditioned on a semantically meaningful text encoding), which suggests to me that the issue is with the representation.
(2) In Section 3 the AC-GAN classification objective (omitting expectations for brevity) is given as L_C = log P(C=c|X_real) + log P(C=c|X_fake), and you say that both the discriminator and generator are trained to maximize this quantity. Obviously the generator would want to maximize log P(C=c|X_fake) for its given conditioning class c. But can you explain why you would want the discriminator to also maximize the classification accuracy of generated samples? Why not do something similar to the CatGAN paper [3] and train the discriminator to be as uncertain as possible about the generated examples? It seems counterintuitive to me to have both the generator and discriminator trying to optimize the same classification objective, rather than being adversarial with respect to this loss as well as the real/fake loss.

Overall, this paper makes a clear contribution to GAN research both in terms of image quality and evaluation metrics, and I would recommend it for acceptance.

[1] Salimans et al. Improved Techniques for Training GANs (https://arxiv.org/abs/1606.03498)
[2] Reed et al. Generative Adversarial Text to Image Synthesis (https://arxiv.org/abs/1605.05396)
[3] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks (https://arxiv.org/abs/1511.06390)
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
ByG8A7cee
ICLR.cc/2017/conference
2017
Reference-Aware Language Models
["Zichao Yang", "Phil Blunsom", "Chris Dyer", "Wang Ling"]
We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.
["Natural language processing", "Deep learning"]
ABSTRACT

We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.

1 INTRODUCTION

Referring expressions (REs) in natural language are noun phrases (proper nouns, common nouns, and pronouns) that identify objects, entities, and events in an environment. REs occur frequently and they play a key role in communicating information efficiently. While REs are common, previous works neglect to model REs explicitly, either treating REs as ordinary words in the model or replacing them with special tokens. Here we propose a language modeling framework that explicitly incorporates reference decisions.

In Figure 1 we list examples of REs in the context of the three tasks that we consider in this work. Firstly, reference to a database is crucial in many applications. One example is in task oriented dialogue where access to a database is necessary to answer a user's query (Young et al., 2013; Li et al., 2016; Vinyals & Le, 2015; Wen et al., 2015; Sordoni et al., 2015; Serban et al., 2016; Bordes & Weston, 2016; Williams & Zweig, 2016; Shang et al., 2015; Wen et al., 2016). Here we consider the domain of restaurant recommendation, where a system refers to restaurants (name) and their attributes (address, phone number etc.) in its responses. When the system says "the nirala is a nice restaurant", it refers to the restaurant name the nirala from the database. Secondly, many models need to refer to a list of items (Kiddon et al., 2016; Wen et al., 2015). In the task of recipe generation from a list of ingredients (Kiddon et al., 2016), the generation of the recipe will frequently reference these items. As shown in Figure 1, in the recipe "Blend soy milk and ...", soy milk refers to the ingredient summaries. Finally, we address references within a document (Mikolov et al., 2010; Ji et al., 2015; Wang & Cho, 2015), as the generation of words will often refer to previously generated words. For instance, the same entity will often be referred to throughout a document. In Figure 1, the entity you refers to I in a previous utterance.

In this work we develop a language model that has a specific module for generating REs. A series of latent decisions (should I generate a RE? If yes, which entity in the context should I refer to? How should the RE be rendered?) augment a traditional recurrent neural network language model, and the two components are combined as a mixture model. Selecting an entity in context is similar to familiar models of attention (Bahdanau et al., 2014), but rather than being a deterministic function that reweights representations of elements in the context, it is treated as a distribution over contextual elements which are stochastically selected and then copied or, if the task warrants it, transformed (e.g., a pronoun rather than a proper name is produced as output). Two variants are possible for updating the RNN state: one that only looks at the generated output form, and a second that looks at the values of the latent variables. The former admits trivial unsupervised learning, as latent decisions are conditionally independent of each other given observed context, whereas the latter enables more expressive models that can extract information from the entity that is being referred to.
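A minimal sketch of the contrast drawn above between deterministic attention and stochastic selection, assuming PyTorch; names are illustrative.

```python
# Sketch: deterministic attention vs. stochastic reference selection.
import torch

def select_referent(context, query):
    """context: (K, d) representations of contextual elements; query: (d,)."""
    probs = torch.softmax(context @ query, dim=0)   # distribution over elements
    soft = probs @ context                          # deterministic: reweighted average
    k = torch.multinomial(probs, 1).item()          # stochastic: sample one element,
    hard = context[k]                               # which is then copied/transformed
    return soft, hard, probs
```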
(Work completed at DeepMind.)

Figure 1: Reference-aware language models. (The figure pairs each reference type with an example: a dialogue referring to a table entry ("M: the nirala is a nice restaurant"), a recipe referring to its ingredient list ("1 cup plain soy milk ... Blend soy milk and ..."), and coreference within a document ("um and [I]1 think ... [you]1 ...").)

In each of the three tasks, we demonstrate our reference aware model's efficacy in evaluations against models that do not explicitly include a reference operation.

Our contributions are as follows:
- We propose a general framework to model reference in language and instantiate it in the context of dialogue modeling, recipe generation and coreference based language models.
- We build three data sets to test our models. No existing data sets satisfy our needs, so we build these data sets ourselves. These data sets are either built on top of an existing data set (we constructed the table for the DSTC2 data set for dialogue evaluation), crawled from websites (we crawled all recipes in www.allrecipes.com) or annotated with NLP tools (we annotate the coreference with the Gigaword corpus for our evaluation).
- We perform a comprehensive evaluation of our models on the three data sets and verify that our models perform better than strong baselines.

2 REFERENCE-AWARE LANGUAGE MODELS

Here we propose a general framework for reference-aware language models.

We denote each document as a series of tokens x_1, ..., x_L, where L is the number of tokens in the document. Our goal is to maximize the probabilities p(x_i | c_i) for each word in the document based on its previous context c_i = x_1, ..., x_{i-1}. In contrast to traditional neural language models, we introduce a variable z_i at each position, which controls the decision on which source x_i is generated from. The token conditional probability is then obtained by:

p(x_i | c_i) = p(x_i | z_i, c_i) p(z_i | c_i).   (1)

In dialogue modeling and recipe generation, z_i will simply take on values in {0, 1}, where z_i = 1 denotes that x_i is generated as a reference, either to a database entry or an item in a list. However, z_i can also be defined as a distribution over previous entities, allowing the model to predict x_i conditioned on a previous mention word. This will be the focus of the coreference language model. When z_i is not observed (which it generally will not be), we will train our model to maximize the marginal probability in Eq. 1 directly.

2.1 DIALOGUE MODEL WITH DATABASE SUPPORT

We first apply our model to task-oriented dialogue systems in the domain of restaurant recommendations, and work on the data set from the second Dialogue State Tracking Challenge (DSTC2) (Henderson et al., 2014). Table 1 is one example dialogue from this dataset.

We can observe from this example that users get recommendations of restaurants based on queries that specify the area, price and food type of the restaurant. We can support the system's decisions by incorporating a mechanism that allows the model to query the database, allowing the model to find restaurants that satisfy the users' queries.
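A minimal sketch of the Eq. (1) marginal for the binary-switch case, assuming PyTorch and that the two source distributions over the token space are already computed; names are illustrative.

```python
# Sketch: marginal token probability for the binary switch z_i of Eq. (1).
import torch

def token_nll(p_vocab, p_copy, p_z1, target):
    """p_vocab, p_copy: distributions given z=0 and z=1; p_z1 = p(z_i=1 | c_i)."""
    # p(x_i | c_i) = p(x_i | z=0, c_i)(1 - p_z1) + p(x_i | z=1, c_i) p_z1
    marginal = p_vocab * (1.0 - p_z1) + p_copy * p_z1
    return -torch.log(marginal[target] + 1e-12)  # train by minimizing the NLL
```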
Thus, we crawled TripAdvisor for restaurants in the Cambridge area, where the dialog dataset was collected. Then, we remove restaurants that do not appear in the data set and create a database with 109 entries of restaurants and their attributes (e.g. food type). A sample of our database is shown in Table 2. We can observe that each restaurant contains 6 attributes that are generally referred to in the dialogue dataset. As such, if the user requests a restaurant that serves "indian" food, we wish to train a model that can search for entries whose "food" column contains "indian". Now, we describe how we deploy a model that fulfills these requirements.

Table 1: Example dialogue; M stands for Machine and U stands for User.
M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
U: i want a moderately priced restaurant in the north town
M: the nirala is a nice restaurant in the north of town in the moderate price range
U: what is the address
M: Sure, the nirala is on 7 Milton Road Chesterton

Table 2: Fragment of database for dialogue system.
NAME | PRICE RANGE | FOOD | AREA | ADDRESS | POST CODE | PHONE
ali baba | moderate | lebanese | centre | 59 Hills Road City Centre | CB 2, 1 NT | 01462 432565
the nirala | moderate | indian | north | 7 Milton Road Chesterton | CB 4, 1 UY | 01223 360966

2.1.1 DIALOGUE MODEL

Figure 2: Hierarchical RNN Seq2Seq model (a sentence encoder and a turn encoder over machine (M) and user (U) utterances, feeding an attention-based decoder).

We build a model based on the hierarchical RNN model described in (Serban et al., 2016), as in dialogues the generation of the response is not only dependent on the previous sentence, but on all sentences leading to the response. We assume that a dialogue alternates between a machine and a user. An illustration of the model is shown in Figure 2.

Consider a dialogue with T turns. The utterances from the user are denoted as X = {x_i}_{i=1}^T, where i indexes the i-th utterance, and the utterances from the machine are denoted as Y = {y_i}_{i=1}^T. We define x_i = {x_{ij}}_{j=1}^{|x_i|} and y_i = {y_{iv}}_{v=1}^{|y_i|}, where x_{ij} denotes the j-th token in the i-th utterance from the user, and y_{iv} denotes the v-th token in the i-th utterance from the machine. Finally, |x_i| and |y_i| denote the number of tokens in the user and machine utterances, respectively. The dialogue sequence starts with a machine utterance: {y_1, x_1, y_2, x_2, ..., y_T, x_T}. We would like to model the utterances from the machine:

p(y_1, y_2, ..., y_T | x_1, x_2, ..., x_T) = Π_i p(y_i | y_{<i}, x_{<i}) = Π_{i,v} p(y_{i,v} | y_{i,<v}, y_{<i}, x_{<i}),

where y_{<i} denotes all the utterances before i and y_{i,<v} denotes the first v−1 tokens in the i-th utterance of the machine. A neural model is employed to predict p(y_{i,v} | y_{i,<v}, y_{<i}, x_{<i}), which operates as follows:

Sentence Encoder: We first encode previous utterances y_{<i} and x_{<i} into continuous space employing an LSTM encoder. Thus, for a given utterance x_i, we start with the initial LSTM state h^x_{i,0} and apply the recursion h^x_{i,j} = LSTM_E(W_E x_{i,j}, h^x_{i,j−1}), where W_E x_{i,j} denotes a word embedding lookup for the token x_{i,j}, and LSTM_E denotes the LSTM transition function described in Hochreiter & Schmidhuber (1997). The representation of the user utterance is given by the final LSTM state h^x_i = h^x_{i,|x_i|}. The same process is applied to obtain the machine utterance representation h^y_i = h^y_{i,|y_i|}.
Turn Encoder: We then combine the representations of all the utterances with a second LSTM, which encodes the sequence {h^y_1, h^x_1, ..., h^y_i, h^x_i} into a continuous vector. Once again, we start with an initial state u_0 and feed each utterance representation in turn to obtain the following LSTM state, until the final state is obtained. For simplicity, we refer to this final state as u_i, which can be seen as the hierarchical encoding of the previous i utterances.

Seq2Seq Decoder: As for decoding, in order to generate each utterance y_i, we can feed u_{i−1} into the decoder LSTM as the initial state s_{i,0} = u_{i−1} and decode each token in y_i. Thus, we can express the decoder as:

s^y_{i,v} = LSTM_D(W_E y_{i,v−1}, s_{i,v−1}),
p^y_{i,v} = softmax(W s^y_{i,v}),

where the desired probability p(y_{i,v} | y_{i,<v}, y_{<i}, x_{<i}) is expressed by p^y_{i,v}.

Attention based decoder: We can also incorporate the attention mechanism in our hierarchical model. An attention model builds a representation d by averaging over a set of vectors p. We define the attention function as a = ATTN(p, q), where a is a probability distribution over the set of vectors p, conditioned on any input representation q. A full description of this operation is given in (Bahdanau et al., 2014). Thus, for each generated token y_{i,v}, we compute the attention a_{i,v}, conditioned on the current decoder state s^y_{i,v}, over the input tokens from the previous turn (i−1). We denote the vector of all tokens in the previous turn as h^{x,y}_{i−1} = [{h^x_{i−1,j}}_{j=1}^{|x_{i−1}|}, {h^y_{i−1,v}}_{v=1}^{|y_{i−1}|}]. Let K = |h^{x,y}_{i−1}| be the number of tokens in the previous turn. We obtain the attention probabilities over all previous tokens as a_{i,v} = ATTN(s^y_{i,v}, h^{x,y}_{i−1}). Then the weighted sum is computed over these probabilities, d_{i,v} = Σ_{k∈K} a_{i,v,k} h^{x,y}_{i−1,k}, where a_{i,v,k} is the probability of aligning to the k-th token from the previous turn. The resulting vector d_{i,v} is used to obtain the probability of the following word p^y_{i,v}. Thus, we express the decoder as:

s^y_{i,v} = LSTM_D([W_E y_{i,v−1}, d_{i,v−1}], s_{i,v−1}),
a_{i,v} = ATTN(h^{x,y}_{i−1}, s^y_{i,v}),
d_{i,v} = Σ_{k∈K} a_{i,v,k} h^{x,y}_{i−1,k},
p^y_{i,v} = softmax(W [s^y_{i,v}, d_{i,v}]).
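A minimal sketch of the hierarchical encoder and the ATTN operation, assuming PyTorch; dimensions, module names and the batching convention are illustrative.

```python
# Sketch: sentence encoder + turn encoder, and one dot-product attention step.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.sent_lstm = nn.LSTM(dim, dim, batch_first=True)  # sentence encoder
        self.turn_lstm = nn.LSTM(dim, dim, batch_first=True)  # turn encoder

    def forward(self, utterances):
        """utterances: list of (1, T_k) token tensors, in dialogue order."""
        reps = []
        for utt in utterances:
            _, (h, _) = self.sent_lstm(self.emb(utt))  # final state = utterance rep
            reps.append(h[-1])
        _, (u, _) = self.turn_lstm(torch.stack(reps, dim=1))
        return u[-1]                                   # hierarchical context u_i

def attend(decoder_state, prev_turn_states):
    """a = ATTN(p, q): dot-product alignment, then weighted sum d_{i,v}."""
    scores = prev_turn_states @ decoder_state          # (K,)
    a = torch.softmax(scores, dim=0)
    d = a @ prev_turn_states                           # (dim,)
    return a, d
```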
2.1.2 INCORPORATING TABLE ATTENTION

Figure 3: Table based decoder. (a) Decoder with table attention (step 1: attribute attention; step 2: weighted column; step 3: row attention). (b) Decoder with table pointer (steps 1-3 as in (a), then step 4: weighted row; step 5: column attention, yielding p_vocab and p_copy).

We now extend the attention model in order to allow the attention to be computed over a table, allowing the model to condition the generation on a database.

We denote a table with R rows and C columns as {f_{r,c}}, r ∈ [1, R], c ∈ [1, C], where f_{r,c} is the cell in row r and column c. The attribute of each column is denoted as s_c, where c indexes the c-th attribute. f_{r,c} and s_c are one-hot vectors.

Table Encoding: To encode the table, we build an attribute vector g_c for each column. For each cell f_{r,c} of the table, we concatenate it with the corresponding attribute g_c and then feed it through a one-layer MLP as follows: g_c = W_E s_c and e_{r,c} = tanh(W [W_E f_{r,c}, g_c]).

Table Attention: The diagram for table attention is shown in Figure 3a. The attention over cells in the table is conditioned on a given vector q, similarly to the attention model for sequences ATTN(p, q). However, rather than a sequence p, we now operate over a table f. Our attention model computes an attribute attention followed by a row attention over the table. We first use the attention mechanism on the attributes to find out which attribute the user asks about. Suppose a user says cheap; then we should focus on the price attribute. After we get the attention probability p^a = ATTN({g_c}, q) over the attributes, we calculate the weighted representation for each row, e_r = Σ_c p^a_c e_{r,c}, conditioned on p^a. Then e_r carries the price information of each row. We further use the attention mechanism on e_r and get the probability p^r = ATTN({e_r}, q) over the rows; restaurants with a cheap price will then be picked. Finally, using the probabilities p^r, we compute the weighted average over all rows, e^c = Σ_r p^r_r e_{r,c}, which is used in the decoder. The detailed process is:

p^a = ATTN({g_c}, q),   (2)
e_r = Σ_c p^a_c e_{r,c}  ∀r,   (3)
p^r = ATTN({e_r}, q),   (4)
e^c = Σ_r p^r_r e_{r,c}  ∀c.   (5)

This is embedded in the decoder by replacing the conditioning state q with the current decoder state s^y_{i,0} and then, at each step, conditioning the prediction of y_{i,v} on {e^c} using the attention mechanism. The detailed diagram of table attention is shown in Figure 3a.

2.1.3 INCORPORATING TABLE POINTER NETWORKS

We now describe the mechanism used to refer to specific database entries during decoding. At each timestep, the model needs to decide whether to generate the next token from an entry of the database or from the word softmax. This is performed as follows.

Pointer Switch: We use z_{i,v} ∈ [0, 1] to denote the decision of whether to copy one cell from the table. We compute this probability as follows:

p(z_{i,v} | s_{i,v}) = sigmoid(W [s_{i,v}, d_{i,v}]).

Thus, if z_{i,v} = 1, the next token y_{i,v} will be generated from the database, whereas if z_{i,v} = 0, the following token is generated from a softmax. We now describe how we generate tokens from the database.

Table Pointer: If z_{i,v} = 1, the token is generated from the table. The detailed process of calculating the probability distribution over the table is shown in Figure 3b. This is similar to the attention mechanism, except that we perform a column attention to compute the probabilities of copying from each column after Equation 5. More formally:

p^c = ATTN({e^c}, q),   (6)
p^copy = p^r ⊗ p^c,   (7)

where p^c is a probability distribution over columns, whereas p^r is a probability distribution over rows. In order to compute a matrix with the probability of copying each cell, we simply compute the outer product p^copy = p^r ⊗ p^c.

Objective: As we treat z_i as a latent variable, we wish to maximize the marginal probability of the sequence y_i over all possible values of z_i. Thus, our objective function is defined as:

p(y_{i,v} | s_{i,v}) = p_vocab p(0 | s_{i,v}) + p_copy p(1 | s_{i,v}) = p_vocab (1 − p(1 | s_{i,v})) + p_copy p(1 | s_{i,v}).   (8)

The model can also be trained in a fully supervised fashion, if z_{i,v} is observed. In such cases, we simply maximize the likelihood of p(z_{i,v} | s_{i,v}), based on the observations, rather than using the marginal probability over z_{i,v}.
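A minimal sketch of the attribute/row/column attention cascade of Eqs. (2)-(7), assuming PyTorch; the resulting copy distribution is then combined with the word softmax via Eq. (8), in the same form as the Eq. (1) sketch earlier. Names are illustrative.

```python
# Sketch: table pointer. g: attribute embeddings (C, d); e: cell encodings
# (R, C, d); q: conditioning vector (d,).
import torch

def table_pointer(g, e, q):
    p_a = torch.softmax(g @ q, dim=0)           # (2) attribute attention, (C,)
    e_r = torch.einsum('c,rcd->rd', p_a, e)     # (3) weighted row representations
    p_r = torch.softmax(e_r @ q, dim=0)         # (4) row attention, (R,)
    e_c = torch.einsum('r,rcd->cd', p_r, e)     # (5) weighted column representations
    p_c = torch.softmax(e_c @ q, dim=0)         # (6) column attention, (C,)
    return torch.outer(p_r, p_c)                # (7) p_copy over all cells, (R, C)
```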
2.2 RECIPE GENERATION

Table 3: Ingredients and recipe for Spinach and Banana Power Smoothie.
Ingredients: 1 cup plain soy milk; 3/4 cup packed fresh spinach leaves; 1 large banana, sliced.
Recipe: Blend soy milk and spinach leaves together in a blender until smooth. Add banana and pulse until thoroughly blended.

Next, we consider the task of recipe generation conditioned on an ingredient list. In this task, we must generate the recipe from a list of ingredients. Table 3 illustrates the ingredient list and recipe for Spinach and Banana Power Smoothie. We can see that the ingredients soy milk, spinach leaves, and banana occur in the recipe.

Figure 4: Recipe pointer (an encoder over the ingredients and a decoder with a copy/generate switch z, yielding p_vocab and p_copy).

Let the ingredients of a recipe be X = {x_i}_{i=1}^T, where each ingredient contains L tokens, x_i = {x_{ij}}_{j=1}^L. The corresponding recipe is y = {y_v}_{v=1}^K. We first use an LSTM to encode each ingredient:

h_{i,j} = LSTM_E(W_E x_{ij}, h_{i,j−1})  ∀i.

Then, we sum the resulting final state of each ingredient to obtain the starting LSTM state of the decoder. Once again we use an attention based decoder:

s_v = LSTM_D(s_{v−1}, d_{v−1}, W_E y_{v−1}),
p^copy_v = ATTN({{h_{i,j}}_{i=1}^T}_{j=1}^L, s_v),
d_v = Σ_{ij} p_{v,i,j} h_{i,j},
p(z_v | s_v) = sigmoid(W [s_v, d_v]),
p^vocab_v = softmax(W [s_v, d_v]).

Similar to the previous task, the decision to copy from the ingredient list or generate a new word from the softmax is performed using a switch, denoted as p(z_v | s_v). We can obtain a probability distribution for copying each of the words in the ingredients by computing p^copy_v = ATTN({{h_{i,j}}_{i=1}^T}_{j=1}^L, s_v) in the attention mechanism. For training, we optimize the marginal likelihood function employed in the previous task.

2.3 COREFERENCE BASED LANGUAGE MODEL

Finally, we build a language model that uses coreference links to point to previous words. Before generating a word, we first make the decision on whether it is an entity mention. If so, we decide which entity this mention belongs to, then we generate the word based on that entity. Denote the document as X = {x_i}_{i=1}^L, and the entities as E = {e_i}_{i=1}^N; each entity has M_i mentions, e_i = {m_{ij}}_{j=1}^{M_i}, such that {x_{m_{ij}}}_{j=1}^{M_i} refer to the same entity. We use an LSTM to model the document; the hidden state of each token is h_i = LSTM(W_E x_i, h_{i−1}). We use a set h^e = {h^e_0, h^e_1, ..., h^e_M} to keep track of the entity states, where h^e_j is the state of entity j.

Figure 5: Coreference based language model; example taken from Wiseman et al. (2016): "um and [I]1 think that is whats - Go ahead [Linda]2. Well and thanks goes to [you]1 and to [the media]3 to help [us]4 ... So [our]4 hat is off to all of [you]5 ..." (The diagram shows the entity state being pushed for a new entity and updated on each mention.)

Word generation: At each time step, before generating the next word, we predict whether the word is an entity mention:

p_coref(v_i | h_{i−1}, h^e) = ATTN(h^e, h_{i−1}),
d_i = Σ_{v_i} p(v_i) h^e_{v_i},
p(z_i | h_{i−1}) = sigmoid(W [d_i, h_{i−1}]),

where z_i denotes whether the next word is an entity and, if yes, v_i denotes which entity the next word corefers to. If the next word is an entity mention, then p(x_i | v_i, h_{i−1}, h^e) = softmax(W_1 tanh(W_2 [h^e_{v_i}, h_{i−1}])); otherwise p(x_i | h_{i−1}) = softmax(W_1 h_{i−1}). Thus:

p(x_i | x_{<i}) = p(x_i | h_{i−1}) p(z_i = 0 | h_{i−1}, h^e)   if z_i = 0,
p(x_i | x_{<i}) = p(x_i | v_i, h_{i−1}, h^e) p_coref(v_i | h_{i−1}, h^e) p(z_i = 1 | h_{i−1}, h^e)   if z_i = 1.   (9)

Entity state update: We update the entity state h^e at each time step. In the beginning, h^e = {h^e_0}, where h^e_0 denotes the state of a virtual empty entity and is a learnable variable. If z_i = 1 and v_i = 0, this indicates that the next word is a new entity mention; then in the next step we append h_i to h^e, i.e., h^e = {h^e, h_i}. If v_i > 0, we update the corresponding entity state with the new hidden state, h^e[v_i] = h_i. Another way to update the entity state is to use one LSTM to encode the mention states and get the new entity state. Here we use the latest entity mention state as the new entity state for simplicity. The detailed update process is shown in Figure 5.
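A minimal sketch of one coreference decision step and the entity-state update, assuming PyTorch; the entity set h^e is stored as a tensor whose row 0 is the learnable empty entity. Names are illustrative.

```python
# Sketch: coreference decision and entity-state update (Section 2.3).
import torch

def coref_step(entity_states, h_prev, W):
    """entity_states: (M+1, d) holding h^e; h_prev: (d,); W: (2d,) switch weights."""
    p_v = torch.softmax(entity_states @ h_prev, dim=0)   # p_coref(v_i | h_{i-1}, h^e)
    d = p_v @ entity_states                              # expected entity state d_i
    p_z1 = torch.sigmoid(W @ torch.cat([d, h_prev]))     # p(z_i = 1 | d_i, h_{i-1})
    return p_v, p_z1

def update_entities(entity_states, v, h_i):
    """v == 0 means a new entity: push h_i; otherwise overwrite slot v."""
    if v == 0:
        return torch.cat([entity_states, h_i[None, :]], dim=0)
    entity_states = entity_states.clone()
    entity_states[v] = h_i
    return entity_states
```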
3 EXPERIMENTS

4 DATA SETS AND PREPROCESSING

Dialogue: We use the DSTC2 data set, from which we extract only the dialogue transcripts. There are about 3,200 dialogues in total; since this is a small data set, we use 5-fold cross validation and report the average result over the 5 partitions. A table cell may contain multiple tokens; for example, in Table 2 the name, address, post code and phone number all consist of multiple tokens, and we replace each such cell with a single special token. For the name, address, post code and phone number of the j-th row, we replace the tokens in each cell with NAME_j, ADDR_j, POSTCODE_j and PHONE_j, respectively. If a table cell is empty, we replace it with a special token EMPTY. We string-match in the transcripts and replace the tokens that come from the table with the corresponding special tokens. Each dialogue has on average 8 turns (16 sentences). We use a vocabulary of size 900, including about 400 table tokens and 500 words.

Recipes: We crawl all recipes from www.allrecipes.com. There are about 31,000 recipes in total, and every recipe has an ingredient list and a corresponding recipe text. We exclude recipes with fewer than 10 or more than 500 tokens; these account for about 0.1% of the data set. On average, each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Coref LM: We use the Xinhua News portion of Gigaword Fifth Edition and sample 100,000 documents whose lengths range from 100 to 500 tokens. Each document has on average 234 tokens, so there are 23 million tokens in total. We use a tool to annotate all entity mentions and use the annotations during training. We take 80% for training and 10% each for validation and test. We ignore entities that have only one mention, and for mentions consisting of multiple tokens, we keep the token that is most frequent across all mentions of the entity. After preprocessing, entity-mention tokens make up about 10% of all tokens. We use a vocabulary size of 50,000 in the model.
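A minimal sketch of the table delexicalization described above for the dialogue data. The paper only says that cell values are string-matched in the transcripts; the longest-match-first replacement order and the lowercase field names used here are assumptions for illustration.

def delexicalize(transcript, table):
    # Replace table cell values appearing in a transcript with special
    # tokens NAME_j, ADDR_j, POSTCODE_j, PHONE_j for the j-th row.
    # (Empty table cells are handled separately with an EMPTY token.)
    # NOTE: field names and matching order are illustrative assumptions.
    replacements = []
    for j, row in enumerate(table):
        for field in ("name", "addr", "postcode", "phone"):
            value = row.get(field)
            if value:
                replacements.append((value, f"{field.upper()}_{j}"))
    # Replace longer strings first so multi-token cells match as a whole.
    for value, token in sorted(replacements, key=lambda p: -len(p[0])):
        transcript = transcript.replace(value, token)
    return transcript

# Example (with the Table 2 fragment as row 1):
# delexicalize("the nirala is on 7 Milton Road Chesterton", table)
# -> "NAME_1 is on ADDR_1"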
4.1 MODEL TRAINING AND EVALUATION

We train all models with simple stochastic gradient descent with gradient clipping, and use a one-layer LSTM for all RNN components. Hyper-parameters are selected with grid search on the validation set. We apply dropout after the input embedding and the LSTM output. The learning rate is selected from [0.1, 0.2, 0.5, 1], the maximum gradient norm from [1, 2, 5, 10], and the dropout ratio from [0.2, 0.3, 0.5]. The batch size and LSTM dimension differ slightly across tasks so that each model fits into memory. The number of training epochs also differs per task, and we decay the learning rate after a given number of epochs. We report per-word perplexity for all tasks; specifically, we report the perplexity of all words, of words that can be generated from a reference, and of non-reference words. For recipe generation, we also generate the recipe with a beam size of 10 and evaluate the generated recipes with BLEU.

model         | all       | table     | table oov      | word
seq2seq       | 1.35±0.01 | 4.98±0.38 | 1.99E7±7.75E6  | 1.23±0.01
table attn    | 1.37±0.01 | 5.09±0.64 | 7.91E7±1.39E8  | 1.24±0.01
table pointer | 1.33±0.01 | 3.99±0.36 | 1360±2600      | 1.23±0.01
table latent  | 1.36±0.01 | 4.99±0.20 | 3.78E7±6.08E7  | 1.24±0.01
+ sentence attn
seq2seq       | 1.28±0.01 | 3.31±0.21 | 2.83E9±4.69E9  | 1.19±0.01
table attn    | 1.28±0.01 | 3.17±0.21 | 1.67E7±9.5E6   | 1.20±0.01
table pointer | 1.27±0.01 | 2.99±0.19 | 82.86±110      | 1.20±0.01
table latent  | 1.28±0.01 | 3.26±0.25 | 1.27E7±1.41E7  | 1.20±0.01

Table 4: Dialogue perplexity results ("all" means all tokens, "table" means tokens from the table, "table oov" denotes table tokens that do not appear in the training set, "word" means non-table tokens). "sentence attn" denotes that we use an attention mechanism over tokens from the previous turn. Table pointer and table latent differ in that for table pointer we provide a supervised signal on when to generate a table token, while for table latent this is a latent decision.

model   | val ppl (all/ing/word) | val BLEU | test ppl (all/ing/word) | test BLEU
seq2seq | 5.60 / 11.26 / 5.00    | 14.07    | 5.52 / 11.26 / 4.91     | 14.39
attn    | 5.25 /  6.86 / 5.03    | 14.84    | 5.19 /  6.92 / 4.95     | 15.15
pointer | 5.15 /  5.86 / 5.04    | 15.06    | 5.11 /  6.04 / 4.98     | 15.29
latent  | 5.02 /  5.10 / 5.01    | 14.87    | 4.97 /  5.19 / 4.94     | 15.41

Table 5: Recipe results, evaluated in perplexity and BLEU score. "ing" denotes recipe tokens that appear in the ingredients.

model          | val ppl (all/entity/word) | test ppl (all/entity/word)
lm             | 33.08 / 44.52 / 32.04     | 33.08 / 43.86 / 32.10
pointer        | 32.57 / 32.07 / 32.62     | 32.62 / 32.07 / 32.69
pointer + init | 30.43 / 28.56 / 30.63     | 30.42 / 28.56 / 30.66

Table 6: Coreference based LM. "pointer + init" means we initialize the model with the LM weights.

4.2 RESULTS AND ANALYSIS

The results for dialogue, recipe generation and the coreference language model are shown in Tables 4, 5 and 6, respectively. We can see from Table 4 that models conditioned on the table perform better at predicting table tokens in general, and table pointer achieves the lowest perplexity on tokens in the table. Since table tokens appear rarely in the dialogues, the overall perplexities do not differ much, and the perplexities on non-table tokens are similar. With an attention mechanism over the table, the perplexity of table tokens improves over the basic seq2seq model, but not as much as with direct pointing to cells in the table. As expected, using sentence attention improves significantly over models without sentence attention. Surprisingly, table latent performs much worse than table pointer. We also measure the perplexity of table tokens that appear only in the test set: for models other than table pointer this perplexity is very high, because these tokens never appear in the training set, while table pointer predicts them much more accurately. The recipe results in Table 5 generally follow the findings from the dialogue task, except that the latent model performs better than the pointer model. Tokens in the recipe that match the ingredients do not necessarily come from the ingredients, so imposing a supervised signal gives the model wrong information and makes the results worse; with a latent decision, the model instead learns when to copy and when to generate from the vocabulary. The coreference LM results are shown in Table 6. We find that the coreference-based LM performs much better on entity perplexity, but is slightly worse on non-entity words. We suspect this is an optimization problem and that the model may be stuck in a local optimum.
So we initialize the pointer model with the weights learned by the plain LM; the resulting pointer model performs better than the LM in both entity and non-entity word perplexity.

5 RELATED WORK

Recently, there has been great progress in neural-network-based language modeling (Mikolov et al., 2010; Jozefowicz et al., 2016), machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), question answering (Hermann et al., 2015), etc. Building on the success of seq2seq models, neural networks have been applied to modeling chit-chat dialogue (Li et al., 2016; Vinyals & Le, 2015; Sordoni et al., 2015; Serban et al., 2016; Shang et al., 2015) and task-oriented dialogue (Wen et al., 2015; Bordes & Weston, 2016; Williams & Zweig, 2016; Wen et al., 2016). Most chit-chat neural dialogue models simply apply seq2seq models. For task-oriented dialogues, most approaches embed the seq2seq model in a traditional dialogue system in which the table-query component is not differentiable, while our model queries the database directly. Recipe generation was proposed in Kiddon et al. (2016); their model extends previous work on attention models (Allamanis et al., 2016) to checklists, whereas our work models explicit references to those checklists. Context-dependent language models (Mikolov et al., 2010; Ji et al., 2015; Wang & Cho, 2015) have been proposed to capture long-term dependencies in text. There is also a large body of work on coreference resolution (Haghighi & Klein, 2010; Wiseman et al., 2016); to the best of our knowledge, we are the first to combine coreference with language modeling. Much effort has been invested in embedding copying mechanisms in neural models (Gülçehre et al., 2016; Gu et al., 2016; Ling et al., 2016). In general, a gating mechanism is employed to combine the softmax over observed words with a pointer network (Vinyals et al., 2015); these gates can be trained either by marginalizing over both outcomes or by using heuristics (e.g., copying low-frequency words). Our models are similar to those proposed in Ahn et al. (2016) and Merity et al. (2016), where the generation of each word can be conditioned on a particular entry in a knowledge list and on previous words. Our work describes a model with broader applications, allowing us to condition on databases, lists and dynamic lists.

6 CONCLUSION

We introduce reference-aware language models which explicitly model, at each step, the decision of where the token is generated from. Our model can also learn this decision by treating it as a latent variable. We demonstrate on three tasks, table-based dialogue modeling, recipe generation and coreference-based language modeling, that our models perform better than attention-based models, which do not incorporate this decision explicitly. There are several directions to explore further within this framework. The current evaluation is based on perplexity and BLEU; for task-oriented dialogues, human evaluation could also assess whether the model answers users' queries accurately. It would also be interesting to use reinforcement learning to learn the decisions at each step.
H1KxcSgVg
Review
5: Marginally below acceptance threshold
This paper introduces pointer-network-style neural networks, which are applied to referring expressions in three small-scale language modeling tasks: dialogue modeling, recipe modeling and news article modeling. When conditioned on the co-reference chain, the proposed models outperform standard sequence-to-sequence models with attention.

The proposed models are essentially variants of pointer networks with copy mechanisms (Gulcehre et al., 2016; Gu et al., 2016; Ling et al., 2016), which have been modified to take into account reference chains. As such, the main architectural novelty lies in 1) restricting the pointer mechanism to focus on co-referenced entities, 2) applying the pointer mechanism to 2D arrays (tables), and 3) training with supervised alignments. Although useful in practice, these are minor contributions from an architectural perspective.

The empirical contributions are centred around measuring perplexity on the three language modeling tasks. Measuring perplexity is typical for standard language modeling tasks, but is really an unreliable proxy for dialogue modeling and recipe generation performance. In addition to this, both the dialogue and recipe tasks are tiny compared to standard language modeling tasks. This makes it difficult to evaluate the impact of the dialogue and recipe modeling results. For example, if one was to bootstrap from a larger corpus, it seems likely that a standard sequence-to-sequence model with attention would yield performance comparable to the proposed models (with enough data, the attention mechanism could learn to align referring entities by itself). The language modeling task on news articles (Gigaword) seems to yield the most conclusive results. However, the dataset for this task is non-standard and results are provided for only a single baseline. Overall, this limits the conclusions we can draw from the empirical experiments.

Finally, the paper itself contains many errors, including mathematical errors, grammatical errors and typos:
- Eq. (1) is missing a sum over $z_i$.
- "into the a decoder LSTM" -> "into the decoder LSTM"
- "denoted as his" -> "denoted as"
- "Surprising," -> "Surprisingly,"
- "torkens" -> "tokens"
- "if follows that the next token" -> "the next token"
- In the "COREFERENCE BASED LANGUAGE MODEL" sub-section, what does $M$ denote?
- In the sentence: "The attribute of each column is denoted as $s_c$, where $c$ is the c-th attribute": for these definitions to make sense, $s_c$ has to be a one-hot vector. If yes, please clarify this in the text.
- "the weighted sum is performed" -> "the weighted sum is computed"
- "a attribute" -> "an attribute"
- In the paragraph on Pointer Switch, change $p(z_{i,v} |s_{i,v}) = 1$ -> $p(z_{i,v} |s_{i,v}) = 0$.
- In the "Table Pointer" paragraph, I assume you mean outer product instead of cross product? Otherwise, I don't see how the equations add up.

Other comments:
- For the "Attention based decoder", is the attention computed using the word embeddings themselves or the hidden states of the sentence encoder? Also, is it applied only to the previous turn of the dialogue or to the entire dialogue history? Please clarify this.
- What's the advantage of using an "Entity state update" rule, compared to a pointer network or copy network, which you used in the dialogue and recipe tasks? Please elaborate on this.
- In the Related Work section, the following sentence is not quite accurate: "For the task oriented dialogues, most of them embed the seq2seq model in traditional dialogue systems while our model queries the database directly.". There are task-oriented dialogue models which do query databases during natural language generation. See, for example, "A Network-based End-to-End Trainable Task-oriented Dialogue System" by Wen et al.
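For reference, a minimal LaTeX rendering of the correction the reviewer asks for in the first bullet, marginalizing Eq. (1) over the latent $z_i$ (using the paper's own symbols):

% Eq. (1) with the sum over the latent variable z_i made explicit:
p(x_i \mid c_i) = \sum_{z_i} p(x_i \mid z_i, c_i)\, p(z_i \mid c_i)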
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ByG8A7cee
ICLR.cc/2017/conference
2017
Reference-Aware Language Models
["Zichao Yang", "Phil Blunsom", "Chris Dyer", "Wang Ling"]
We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.
["Natural language processing", "Deep learning"]
BJ_SQ64Vg
6: Marginally above acceptance threshold
This paper presents a new type of language model that treats entity references as latent variables. The paper is structured as three specialized models for three applications: dialog generation with references to database entries, recipe generation with references to ingredients, and text generation with coreference mentions. Despite some opaqueness in details that I will discuss later, the paper does a great job making the main idea come through, which I think is quite interesting and definitely worth pursuing further. But it seems the paper was rushed to meet the deadline, as there are a few major weaknesses.

The first major weakness is that the claimed latent variables are hardly latent in the actual empirical evaluation. As clarified by the authors via pre-review QAs, all mentions were assumed to be given to all model variants, and so it would seem like an over-claim to call these variables latent when they are in fact treated as observed variables. Is it because the models with latent variables were too difficult to train right?

A related problem is the use of perplexity as an evaluation measure when comparing reference-aware language models to vanilla language models. Essentially the authors are comparing two language models defined over different event spaces, which is not a fair comparison. Because mentions were assumed to be given for the reference-aware language models, and because the mention generators are designed similarly to a pointer network, the probability scores over mentions will naturally be higher, compared to the regular language model that needs to consider a much bigger vocabulary set. The effect is analogous to comparing a language model with aggressive UNK (and a small vocabulary set) to a language model with no UNK (and a much larger vocabulary set). To mitigate this problem, the authors need to perform one of the following additional evaluations: either assume no mention boundaries and marginalize over all possibilities (treating the latent variables as truly latent), or show other types of evaluation beyond perplexity, for example BLEU, METEOR, human evaluation etc. on the corresponding generation task.

The other major weakness is the writing, in terms of technical accuracy and completeness. I found many details opaque and confusing even after QAs. I wonder if the main challenge that hinders the quality of the writing has something to do with having three very specialized models in one paper, each with a lot of details to be worked out, which may not have been extremely important for the main story of the paper but are nonetheless not negligible for understanding what is going on. Perhaps the authors can restructure the paper so that the most important details are clearly worked out in the main body, especially in terms of latent-variable handling: how to make mention detection and coreference resolution truly latent, and if and when the entity update helps, which in the current version is not elaborated at all, as it is mentioned only very briefly for the third application (coreference resolution) without any empirical comparisons to motivate the update operation.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ByG8A7cee
ICLR.cc/2017/conference
2017
Reference-Aware Language Models
["Zichao Yang", "Phil Blunsom", "Chris Dyer", "Wang Ling"]
We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show our model variants outperform models based on deterministic attention.
["Natural language processing", "Deep learning"]
ABSTRACTWe propose a general class of language models that treat reference as an explicitstochastic latent variable. This architecture allows models to create mentions ofentities and their attributes by accessing external databases (required by, e.g., di-alogue generation and recipe generation) and internal state (required by, e.g. lan-guage models which are aware of coreference). This facilitates the incorporationof information that can be accessed in predictable locations in databases or dis-course context, even when the targets of the reference may be rare words. Ex-periments on three tasks show our model variants outperform models based ondeterministic attention.1 I NTRODUCTIONReferring expressions (REs) in natural language are noun phrases (proper nouns, common nouns,and pronouns) that identify objects, entities, and events in an environment. REs occur frequentlyand they play a key role in communicating information efficiently. While REs are common, previ-ous works neglect to model REs explicitly, either treating REs as ordinary words in the model orreplacing them with special tokens. Here we propose a language modeling framework that explicitlyincorporates reference decisions.In Figure 1we list examples of REs in the context of the three tasks that we consider in this work.Firstly, reference to a database is crucial in many applications. One example is in task orienteddialogue where access to a database is necessary to answer a user’s query ( Young et al. ,2013 ;Liet al. ,2016 ;Vinyals & Le ,2015 ;Wen et al. ,2015 ;Sordoni et al. ,2015 ;Serban et al. ,2016 ;Bordes& Weston ,2016 ;Williams & Zweig ,2016 ;Shang et al. ,2015 ;Wen et al. ,2016 ). Here we considerthe domain of restaurant recommendation where a system refers to restaurants (name) and theirattributes (address, phone number etc) in its responses. When the system says “ the nirala is anice restaurant”, it refers to the restaurant name the nirala from the database. Secondly, manymodels need to refer to a list of items ( Kiddon et al. ,2016 ;Wen et al. ,2015 ). In the task of recipegeneration from a list of ingredients ( Kiddon et al. ,2016 ), the generation of the recipe will frequentlyreference these items. As shown in Figure 1, in the recipe “Blend soy milk and . . . ”, soy milkrefers to the ingredient summaries. Finally, we address references within a document ( Mikolov et al. ,2010 ;Ji et al. ,2015 ;Wang & Cho ,2015 ), as the generation of words will ofter refer to previouslygenerated words. For instance the same entity will often be referred to throughout a document. InFigure 1, the entity you refers to Iin a previous utterance.In this work we develop a language model that has a specific module for generating REs. A series oflatent decisions (should I generate a RE? If yes, which entity in the context should I refer to? Howshould the RE be rendered?) augment a traditional recurrent neural network language model andthe two components are combined as a mixture model. Selecting an entity in context is similar tofamiliar models of attention ( Bahdanau et al. ,2014 ), but rather than being a deterministic functionthat reweights representations of elements in the context, it is treated as a distribution over contextualelements which are stochastically selected and then copied or, if the task warrants it, transformed(e.g., a pronoun rather than a proper name is produced as output). 
Two variants are possible forupdating the RNN state: one that only looks at the generated output form; and a second that looksat values of the latent variables. The former admits trivial unsupervised learning, latent decisionsare conditionally independent of each other given observed context, whereas the latter enables moreWork completed at DeepMind.1Under review as a conference paper at ICLR 2017referenceexampledialoguerecipecoreferenceM: the nirala is a nice restuarantthe niralamoderate...1 cpu plain soy milk...tableingredientsBlend soy milk and ...[I]1 [Linda]2 [you]1...um and [I]1 think ... [you]1 ...corefFigure 1: Reference-aware language models.expressive models that can extract information from the entity that is being referred to. In each ofthe three tasks, we demonstrate our reference aware model’s efficacy in evaluations against modelsthat do not explicitly include a reference operation.Our contributions are as follows:We propose a general framework to model reference in language and instantiate it in thecontext of dialogue modeling, recipe generation and coreference based language models.We build three data sets to test our models. There lack existing data sets that satisfy ourneed, so we build these data sets ourselves. These data sets are either built on top existingdata set (we constructed the table for DSTC2 data set for dialogue evaluation), crawledfrom websites (we crawled all recipes in www.allrecipes.com ) or annotated withNLP tools (we annotate the coreference with Gigaword corpus for our evaluation).We perform comprehensive evaluation of our models on the three data sets and verify ourmodels perform better than strong baselines.2 R EFERENCE -AWARE LANGUAGE MODELSHere we propose a general framework for reference-aware language models.We denote each document as a series of tokens x1; : : : ; x L, where Lis the number of tokens in thedocument. Our goal is to maximize the probabilities p(xijci), for each word in the document basedon its previous context ci=x1; : : : ; x i1. In contrast to traditional neural language models, weintroduce a variable at each position zi, which controls the decision on which source xiis generatedfrom. The token conditional probably is then obtained by:p(xijci) =p(xijzi; ci)p(zijci): (1)In dialogue modeling and recipe generation, ziwill simply taken on values in f0;1g. Where zi= 1denotes that xiis generated as a reference, either to a database entry or an item in a list. However,zican also be defined as a distribution over previous entities, allowing the model to predict xiconditioned on its a previous mention word. This will be the focus of the coreference languagemodel. When ziis not observed (which it generally will not be), we will train our model to maximizethe marginal probability in Eq. 1directly.2.1 D IALOGUE MODEL WITH DATABASE SUPPORTWe first apply our model on task-oriented dialogue systems in the domain of restaurant recommenda-tions, and work on the data set from the second Dialogue State Tracking Challenge (DSTC2) ( Hen-derson et al. ,2014 ). Table. 1is one example dialogue from this dataset.We can observe from this example, users get recommendations of restaurants based on queriesthat specify the area, price and food type of the restaurant. We can support the system’s decisionsby incorporating a mechanism that allows the model to query the database allowing the model tofind restaurants that satisfy the users queries. 
Thus, we crawled TripAdvisor for restaurants in the2Under review as a conference paper at ICLR 2017M: Hello , welcome to the Cambridge restaurant system? You can ask for restaurants byarea ,price range orfood type . How may I help you?U: i want a moderately priced restaurant in the north townM: the nirala is a nice restaurant in the north of town in the moderate price rangeU: what is the addressM: Sure , the nirala is on7 Milton Road ChestertonTable 1: Example dialogue, M stands for Machine and U stands for UserNAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONEali baba moderate lebanese centre 59 Hills Road City Cen-treCB 2, 1 NT 01462 432565theniralamoderate indian north 7 Milton Road Chester-tonCB 4, 1 UY 01223 360966Table 2: Fragment of database for dialogue system.Cambridge area, where the dialog dataset was collected. Then, we remove restaurants that do notappear in the data set and create a database with 109 entries with restaurants and their attributes (e.g.food type). A sample of our database is shown in Table. 2. We can observe that each restaurantcontains 6 attributes that are generally referred in the dialogue dataset. As such, if the user requestsa restaurant that serves “indian” food, we wish to train a model that can search for entries whose“food” column contains “indian”. Now, we describe how we deploy a model that fulfills theserequirements.2.1.1 D IALOGUE MODELMUMUsentence encoderturn encoderdecoderattnFigure 2: Hierarchical RNN Seq2Seq modelWe build a model based on the hierarchical RNN model described in ( Serban et al. ,2016 ), as indialogues, the generation of the response is not only dependent on the previous sentence, but on allsentences leading to the response. We assume that a dialogue is alternated between a machine and auser. An illustration of the model is shown in Figure 2.Consider a dialogue with Tturns, and the utterance from a user is denoted as X=fxigTi=1, whereiis the i-th utterance, whereas the utterance from a machine is denoted as Y=fyigTi=1, where iis the i-th utterance. We define xi=fxijgjxijj=1,yi=fyivgjyijv=1, where xijdenotes the j-th tokenin the i-th utterance from the user, whereas yivdenotes the v-th token in the i-th utterance fromthe machine. Finally, jxijandjyijdenote the number of tokens in the user and machine utterances,respectively. The dialogue sequence starts with machine utterance fy1; x1; y2; x2; : : : ; y T; xTg. Wewould like to model the utterances from the machinep(y1; y2; : : : ; y Tjx1; x2; : : : ; x T) =∏ip(yijy<i; x<i) =∏i;vp(yi;vjyi;<v; y<i; x<i);where y<idenotes all the utterances before iandyi;<v denotes the first v1tokens in the i-thutterance of the machine. A neural model is employed to predict p(yi;vjyi;<v; y<i; x<i), whichoperates as follows:Sentence Encoder : We first encode previous utterances y<iandx<iinto continuous space by gen-erating employing a LSTM encoder. Thus, for a given utterance xi, and start with the initial LSTMstatehxi;0and apply the recursion hxi;j=LSTM E(WExi;j; hxi;j1), where WExi;jdenotes a word3Under review as a conference paper at ICLR 2017embedding lookup for the token xi;j, and LSTM Edenotes the LSTM transition function describedinHochreiter & Schmidhuber (1997 ). The representation of the user utterance is represented bythe final LSTM state hxi=hxi;jxij. 
Turn Encoder: We then combine the representations of all utterances with a second LSTM, which encodes the sequence $\{h^y_1, h^x_1, \ldots, h^y_i, h^x_i\}$ into a continuous vector. Once again, we start with an initial state $u_0$ and feed each utterance representation in turn until the final state is reached. For simplicity, we refer to this final state as $u_i$, which can be seen as a hierarchical encoding of the previous $i$ utterances.

Seq2Seq Decoder: For decoding, in order to generate each utterance $y_i$, we feed $u_{i-1}$ into the decoder LSTM as the initial state $s_{i,0} = u_{i-1}$ and decode each token of $y_i$. The decoder is:

$$s^y_{i,v} = \mathrm{LSTM}_D(W_E y_{i,v-1}, s_{i,v-1}), \qquad p^y_{i,v} = \mathrm{softmax}(W s^y_{i,v}),$$

where the desired probability $p(y_{i,v} \mid y_{i,<v}, y_{<i}, x_{<i})$ is expressed by $p^y_{i,v}$.

Attention-based decoder: We can also incorporate the attention mechanism into our hierarchical model. An attention model builds a representation $d$ by averaging over a set of vectors $p$. We define the attention function as $a = \mathrm{ATTN}(p, q)$, where $a$ is a probability distribution over the set of vectors $p$, conditioned on an input representation $q$; a full description of this operation is given in (Bahdanau et al., 2014). For each generated token $y_{i,v}$, we compute attentions $a_{i,v}$ over the input tokens from the previous turn ($i-1$), conditioned on the current decoder state $s^y_{i,v}$. We denote the vectors of all tokens in the previous turn as $h^{x,y}_{i-1} = [\{h^x_{i-1,j}\}_{j=1}^{|x_{i-1}|}, \{h^y_{i-1,v}\}_{v=1}^{|y_{i-1}|}]$, and let $K = |h^{x,y}_{i-1}|$ be the number of tokens in the previous turn. We obtain the attention probabilities over all previous tokens as $a_{i,v} = \mathrm{ATTN}(h^{x,y}_{i-1}, s^y_{i,v})$, and then take the weighted sum $d_{i,v} = \sum_{k=1}^{K} a_{i,v,k}\, h^{x,y}_{i-1,k}$, where $a_{i,v,k}$ is the probability of aligning to the $k$-th token of the previous turn. The resulting vector $d_{i,v}$ is used to obtain the probability of the following word, $p^y_{i,v}$. The full decoder is:

$$s^y_{i,v} = \mathrm{LSTM}_D([W_E y_{i,v-1}, d_{i,v-1}], s_{i,v-1}),$$
$$a_{i,v} = \mathrm{ATTN}(h^{x,y}_{i-1}, s^y_{i,v}),$$
$$d_{i,v} = \sum_{k=1}^{K} a_{i,v,k}\, h^{x,y}_{i-1,k},$$
$$p^y_{i,v} = \mathrm{softmax}(W [s^y_{i,v}, d_{i,v}]).$$
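The ATTN(p, q) primitive reused throughout the paper can be sketched as additive (Bahdanau-style) attention. The module below is an illustrative stand-in with assumed dimensions, not the exact parametrization used by the authors.

```python
# Sketch of additive attention: a = ATTN(p, q) returns a distribution over the
# candidate vectors p, conditioned on the query q.
import torch
import torch.nn as nn

class Attn(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_p = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1)

    def forward(self, p, q):
        """p: (K, dim) candidates, q: (dim,) query -> (K,) probabilities."""
        scores = self.v(torch.tanh(self.w_p(p) + self.w_q(q))).squeeze(-1)
        return torch.softmax(scores, dim=-1)

attn = Attn(256)
p = torch.randn(12, 256)     # encodings of the previous turn's tokens
q = torch.randn(256)         # current decoder state s^y_{i,v}
a = attn(p, q)               # attention probabilities a_{i,v}
d = a @ p                    # context vector d_{i,v} = sum_k a_k p_k
```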
2.1.2 INCORPORATING TABLE ATTENTION

(Figure 3: Table-based decoder. (a) Decoder with table attention: attribute attention, weighted columns, then row attention. (b) Decoder with table pointer: attribute, row and column attentions produce $p^{copy}$, combined with $p^{vocab}$ through a yes/no switch $z$.)

We now extend the attention model so that attention can be computed over a table, allowing generation to be conditioned on a database. We denote a table with $R$ rows and $C$ columns as $\{f_{r,c}\}$, $r \in [1, R]$, $c \in [1, C]$, where $f_{r,c}$ is the cell in row $r$ and column $c$. The attribute of the $c$-th column is denoted $s_c$. Both $f_{r,c}$ and $s_c$ are one-hot vectors.

Table Encoding: To encode the table, we build an attribute vector $g_c$ for each column. For each cell $f_{r,c}$ of the table, we concatenate it with the corresponding attribute $g_c$ and feed it through a one-layer MLP: $g_c = W_E s_c$ and $e_{r,c} = \tanh(W [W_E f_{r,c}, g_c])$.

Table Attention: The diagram for table attention is shown in Figure 3a. The attention over cells in the table is conditioned on a given vector $q$, similarly to the attention model for sequences $\mathrm{ATTN}(p, q)$; however, rather than a sequence $p$, we now operate over a table $f$. Our attention model computes an attribute attention followed by a row attention over the table. We first use the attention mechanism over the attributes to find out which attribute the user asks about: if a user says "cheap", the model should focus on the price attribute. After obtaining the attention probability $p^a = \mathrm{ATTN}(\{g_c\}, q)$ over the attributes, we calculate a weighted representation for each row, $e_r = \sum_c p^a_c e_{r,c}$, so that $e_r$ carries the price information of each row. We then apply attention over the $e_r$ to get a probability $p^r = \mathrm{ATTN}(\{e_r\}, q)$ over the rows, so that rows with cheap restaurants receive high probability. Finally, using the probabilities $p^r$, we compute a weighted average over all rows, $e_c = \sum_r p^r_r e_{r,c}$, which is used in the decoder. The full process is:

$$p^a = \mathrm{ATTN}(\{g_c\}, q), \qquad (2)$$
$$e_r = \sum_c p^a_c e_{r,c} \quad \forall r, \qquad (3)$$
$$p^r = \mathrm{ATTN}(\{e_r\}, q), \qquad (4)$$
$$e_c = \sum_r p^r_r e_{r,c} \quad \forall c. \qquad (5)$$

This is embedded in the decoder by setting the conditioning state $q$ to the current decoder state $s^y_{i,v}$ and, at each step, conditioning the prediction of $y_{i,v}$ on $\{e_c\}$ through the attention mechanism.

2.1.3 INCORPORATING TABLE POINTER NETWORKS

We now describe the mechanism used to refer to specific database entries during decoding. At each time step, the model needs to decide whether to generate the next token from an entry of the database or from the word softmax. This is performed as follows.

Pointer Switch: We use $z_{i,v} \in \{0, 1\}$ to denote the decision of whether to copy a cell from the table, with probability

$$p(z_{i,v} \mid s_{i,v}) = \mathrm{sigmoid}(W [s_{i,v}, d_{i,v}]).$$

Thus, if $z_{i,v} = 1$, the next token $y_{i,v}$ is generated from the database, whereas if $z_{i,v} = 0$, it is generated from the softmax. We now describe how tokens are generated from the database.

Table Pointer: If $z_{i,v} = 1$, the token is generated from the table. The detailed process of calculating the probability distribution over the table is shown in Figure 3b. It is similar to the attention mechanism, except that after Equation 5 we additionally perform a column attention to compute the probability of copying from each column. More formally:

$$p^c = \mathrm{ATTN}(\{e_c\}, q), \qquad (6)$$
$$p^{copy} = p^r \otimes p^c, \qquad (7)$$

where $p^c$ is a probability distribution over columns and $p^r$ is a probability distribution over rows. To obtain the probability of copying each cell, we simply take the outer product, $p^{copy}_{r,c} = p^r_r\, p^c_c$.

Objective: As we treat $z_i$ as a latent variable, we maximize the marginal probability of the sequence $y_i$ over all possible values of $z_i$. Our objective function is therefore:

$$p(y_{i,v} \mid s_{i,v}) = p^{vocab}\, p(z_{i,v}{=}0 \mid s_{i,v}) + p^{copy}\, p(z_{i,v}{=}1 \mid s_{i,v}) = p^{vocab} \big(1 - p(z_{i,v}{=}1 \mid s_{i,v})\big) + p^{copy}\, p(z_{i,v}{=}1 \mid s_{i,v}). \qquad (8)$$

The model can also be trained in a fully supervised fashion if $z_{i,v}$ is observed: in that case, we simply maximize the likelihood of the observed $z_{i,v}$ under $p(z_{i,v} \mid s_{i,v})$, rather than using the marginal probability over $z_{i,v}$.
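Equations 2-7 compose three applications of ATTN with two weighted sums and an outer product. The numpy sketch below uses a dot-product stand-in for the learned ATTN function and random cell encodings, purely to show the shapes and the flow; it is not the trained model.

```python
# Sketch of table attention (Eqs. 2-5) and the table pointer (Eqs. 6-7).
import numpy as np

rng = np.random.default_rng(0)
R_, C_, D = 109, 6, 64                   # rows, columns, encoding size
e = rng.normal(size=(R_, C_, D))         # cell encodings e_{r,c}
g = rng.normal(size=(C_, D))             # attribute encodings g_c
q = rng.normal(size=D)                   # conditioning state (decoder state)

def attend(vecs, query):                 # stand-in for the learned ATTN(p, q)
    s = vecs @ query
    s = np.exp(s - s.max())
    return s / s.sum()

p_a = attend(g, q)                       # Eq. 2: attribute attention
e_r = np.einsum("c,rcd->rd", p_a, e)     # Eq. 3: weighted row vectors
p_r = attend(e_r, q)                     # Eq. 4: row attention
e_c = np.einsum("r,rcd->cd", p_r, e)     # Eq. 5: weighted column vectors
p_c = attend(e_c, q)                     # Eq. 6: column attention
p_copy = np.outer(p_r, p_c)              # Eq. 7: prob. of copying cell (r, c)
assert np.isclose(p_copy.sum(), 1.0)     # a valid distribution over cells
```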
We can see that the ingredients soy milk, spinachleaves, and banana occur in the recipe.soydecoderingredientszYesNoencoderBlendsoypvocabpvocabpcopypcopyFigure 4: Recipe pointerLet the ingredients of a recipe be X=fxigTi=1and each ingredient contains Ltokens xi=fxijgLj=1. The corresponding recipe is y=fyvgKv=1. We first use a LSTM to encode each in-gredient:hi;j=LSTM E(WExij; hi;j1)8i:Then, we sum the resulting state of each ingredient to obtain the starting LSTM state of the decoder.Once again we use an attention based decoder:sv=LSTM D(sv1; dv1; W Eyv1);pcopyv=ATTN (ffhi;jgTi=1gLj=1; sv);dv=∑ijpv;i;jhi;j;p(zvjsv) =sigmoid (W[sv; dv]);pvocabv =softmax (W[sv; dv]):Similar to the previous task, the decision to copy from the ingredient list or generate a newword from the softmax is performed using a switch, denoted as p(zvjsv). We can obtain aprobability distribution of copying each of the words in the ingredients by computing pcopyv =ATTN (ffhi;jgTi=1gLj=1; sv)in the attention mechanism. For training, we optimize the marginallikelihood function employed in the previous task.2.3 C OREFERENCE BASED LANGUAGE MODELFinally, we build a language model that uses coreference links to point to previous words. Beforegenerating a word, we first make the decision on whether it is an entity mention. If so, we decide6Under review as a conference paper at ICLR 2017which entity this mention belongs to, then we generate the word based on that entity. Denote thedocument as X=fxigLi=1, and the entities are E=feigNi=1, each entity has Mimentions, ei=fmijgMij=1, such that fxmijgMij=1refer to the same entity. We use a LSTM to model the document,the hidden state of each token is hi=LSTM (WExi; hi1). We use a set he=fhe0; he1; :::; heMgtokeep track of the entity states, where hejis the state of entity j.um and [I] 1think that is whats - Go ahead [Linda] 2. Well and thanks goes to [you] 1and to[the media] 3to help [us] 4...So [our] 4hat is off to all of [you] 5...[I]1umentity stateupdate processI[Linda]2ILinda[You]1YouLindaupdate statepush stateemptystate001012012push stateattn......attnand[I]1of[You]1newentityentity1Figure 5: Coreference based language model, example taken from Wiseman et al. (2016 ).Word generation : At each time step before generating the next word, we predict whether the wordis an entity mention:pcoref(vijhi1; he) =ATTN (he; hi1);di=∑vip(vi)hevip(zijhi1) =sigmoid (W[di; hi1]);where zidenotes whether the next word is an entity and if yes videnotes which entity thenext word corefers to. If the next word is an entity mention, then p(xijvi; hi1; he) =softmax (W1tanh( W2[hevi; hi1]))elsep(xijhi1) =softmax (W1hi1);p(xijx<i) ={p(xijhi1)p(zijhi1; he) ifzi= 0:p(xijvi; hi1; he)pcoref(vijhi1; he)p(zijhi1; he) ifzi= 1:(9)Entity state update : We update the entity state heat each time step. In the beginning, he=fhe0g,he0denotes the state of an virtual empty entity and is a learnable variable. If zi= 1andvi= 0, thenit indicates the next word is a new entity mention, then in the next step, we append hitohe, i.e.,he=fhe; hig, ifei>0, then we update the corresponding entity state with the new hidden state,he[vi] =hi. Another way to update the entity state is to use one LSTM to encode the mention statesand get the new entity state. Here we use the latest entity mention state as the new entity state forsimplicity. The detailed update process is shown in Figure 5.3 E XPERIMENTS4 D ATA SETS AND PREPROCESSINGDialogue : We use the DSTC2 data set. 
3 EXPERIMENTS

3.1 DATA SETS AND PREPROCESSING

Dialogue: We use the DSTC2 data set, from which we extract only the dialogue transcripts. There are about 3,200 dialogues in total. Since this is a small data set, we use 5-fold cross-validation and report the average result over the 5 partitions. A table cell may contain multiple tokens: in Table 2, for example, the name, address, post code and phone number all have multiple tokens, and we replace each with a single special token. For the name, address, post code and phone number of the $j$-th row, we replace the tokens in each cell with NAME_j, ADDR_j, POSTCODE_j and PHONE_j, respectively. If a table cell is empty, we replace it with an empty token EMPTY. We do a string match in the transcripts and replace the matched tokens with the corresponding special tokens from the table. Each dialogue has on average 8 turns (16 sentences). We use a vocabulary size of 900, including about 400 table tokens and 500 words.

Recipes: We crawl all recipes from www.allrecipes.com. There are about 31,000 recipes in total, and every recipe has an ingredient list and corresponding instructions. We exclude recipes that have fewer than 10 or more than 500 tokens; these account for about 0.1% of the data set. On average, each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Coref LM: We use the Xinhua News data set from Gigaword Fifth Edition and sample 100,000 documents whose lengths range from 100 to 500 tokens. Each document has on average 234 tokens, so there are 23 million tokens in total. We use an automatic tool to annotate all entity mentions and use these annotations during training. We take 80% of the documents for training and 10% each for validation and test. We ignore entities that have only one mention and, for mentions that span multiple tokens, we keep the token that is most frequent across all mentions of that entity. After this preprocessing, entity-mention tokens make up about 10% of all tokens. We use a vocabulary size of 50,000 in the model.

3.2 MODEL TRAINING AND EVALUATION

We train all models with simple stochastic gradient descent with gradient clipping, and use a one-layer LSTM for all RNN components. Hyper-parameters are selected by grid search on the validation set. We use dropout after the input embedding and the LSTM output. The learning rate is selected from [0.1, 0.2, 0.5, 1], the maximum gradient norm from [1, 2, 5, 10], and the dropout ratio from [0.2, 0.3, 0.5]. The batch size and LSTM dimension differ slightly across tasks so that the models fit into memory. The number of training epochs also differs per task, and we drop the learning rate after a given number of epochs. We report per-word perplexity for all tasks; specifically, we report the perplexity of all words, of the words that can be generated by reference, and of the non-reference words. For recipe generation, we also generate recipes with a beam size of 10 and evaluate them with BLEU.
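The reported metric can be sketched as follows, given per-token log-probabilities and a mask marking reference tokens; both arrays are placeholders for illustration.

```python
# Sketch of per-word perplexity split by token type.
import numpy as np

log_p = np.array([-1.2, -0.3, -4.0, -0.9, -2.5])     # log p(x_i | c_i)
is_ref = np.array([False, False, True, False, True])  # reference-token mask

def ppl(lp):
    """Perplexity = exp of the mean negative log-likelihood."""
    return float(np.exp(-lp.mean()))

print("all words: ", ppl(log_p))
print("reference: ", ppl(log_p[is_ref]))
print("non-ref:   ", ppl(log_p[~is_ref]))
```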
model | all | table | table oov | word
seq2seq | 1.35 ± 0.01 | 4.98 ± 0.38 | 1.99E7 ± 7.75E6 | 1.23 ± 0.01
table attn | 1.37 ± 0.01 | 5.09 ± 0.64 | 7.91E7 ± 1.39E8 | 1.24 ± 0.01
table pointer | 1.33 ± 0.01 | 3.99 ± 0.36 | 1360 ± 2600 | 1.23 ± 0.01
table latent | 1.36 ± 0.01 | 4.99 ± 0.20 | 3.78E7 ± 6.08E7 | 1.24 ± 0.01
+ sentence attn: | | | |
seq2seq | 1.28 ± 0.01 | 3.31 ± 0.21 | 2.83E9 ± 4.69E9 | 1.19 ± 0.01
table attn | 1.28 ± 0.01 | 3.17 ± 0.21 | 1.67E7 ± 9.5E6 | 1.20 ± 0.01
table pointer | 1.27 ± 0.01 | 2.99 ± 0.19 | 82.86 ± 110 | 1.20 ± 0.01
table latent | 1.28 ± 0.01 | 3.26 ± 0.25 | 1.27E7 ± 1.41E7 | 1.20 ± 0.01
Table 4: Dialogue perplexity results ("all": all tokens; "table": tokens from the table; "table oov": table tokens that do not appear in the training set; "word": non-table tokens). "sentence attn" denotes use of the attention mechanism over tokens from the past turn. Table pointer and table latent differ in that for table pointer we provide a supervised signal on when to generate a table token, while for table latent this is a latent decision.

model | val ppl all | val ppl ing | val ppl word | val BLEU | test ppl all | test ppl ing | test ppl word | test BLEU
seq2seq | 5.60 | 11.26 | 5.00 | 14.07 | 5.52 | 11.26 | 4.91 | 14.39
attn | 5.25 | 6.86 | 5.03 | 14.84 | 5.19 | 6.92 | 4.95 | 15.15
pointer | 5.15 | 5.86 | 5.04 | 15.06 | 5.11 | 6.04 | 4.98 | 15.29
latent | 5.02 | 5.10 | 5.01 | 14.87 | 4.97 | 5.19 | 4.94 | 15.41
Table 5: Recipe results, evaluated by perplexity and BLEU score ("ing": tokens from the recipe that appear in the ingredients).
model | val all | val entity | val word | test all | test entity | test word
lm | 33.08 | 44.52 | 32.04 | 33.08 | 43.86 | 32.10
pointer | 32.57 | 32.07 | 32.62 | 32.62 | 32.07 | 32.69
pointer + init | 30.43 | 28.56 | 30.63 | 30.42 | 28.56 | 30.66
Table 6: Coreference-based LM perplexity. "pointer + init" means we initialize the model with the LM weights.

3.3 RESULTS AND ANALYSIS

The results for dialogue, recipe generation and the coreference language model are shown in Tables 4, 5 and 6, respectively. We can see from Table 4 that models conditioned on the table generally predict table tokens better, and table pointer has the lowest perplexity on table tokens. Since table tokens appear rarely in the dialogues, the overall perplexities do not differ much, and the perplexities on non-table tokens are similar. With the attention mechanism over the table, the perplexity of table tokens improves over the basic seq2seq model, but not as much as directly pointing to cells in the table. As expected, using sentence attention improves results significantly over models without it. Surprisingly, table latent performs much worse than table pointer. We also measure the perplexity of table tokens that appear only in the test set: for models other than table pointer, these tokens never appear in training, so their perplexity is extremely high, whereas table pointer can predict them much more accurately.

The recipe results in Table 5 generally follow the findings from the dialogue task. Here, however, the latent model performs better than the pointer model: tokens in the recipe that match the ingredients do not necessarily come from the ingredients, so imposing a supervised copy signal gives the model wrong information and hurts the results. With a latent decision, the model learns when to copy and when to generate from the vocabulary.

The coref LM results are shown in Table 6. The coreference-based LM performs much better on entity perplexity, but is slightly worse on non-entity words. We found this to be an optimization problem; perhaps the model gets stuck in a local optimum. We therefore initialize the pointer model with the weights learned by the LM, after which the pointer model outperforms the LM on both entity and non-entity word perplexity.

4 RELATED WORK

Recently, there has been great progress in modeling language with neural networks, including language modeling (Mikolov et al., 2010; Jozefowicz et al., 2016), machine translation (Sutskever et al., 2014; Bahdanau et al., 2014) and question answering (Hermann et al., 2015). Building on the success of seq2seq models, neural networks have been applied to modeling chit-chat dialogue (Li et al., 2016; Vinyals & Le, 2015; Sordoni et al., 2015; Serban et al., 2016; Shang et al., 2015) and task-oriented dialogue (Wen et al., 2015; Bordes & Weston, 2016; Williams & Zweig, 2016; Wen et al., 2016). Most chit-chat neural dialogue models simply apply seq2seq models. For task-oriented dialogue, most approaches embed the seq2seq model in a traditional dialogue system in which the table query component is not differentiable, while our model queries the database directly. Recipe generation was proposed in (Kiddon et al., 2016); their model extends previous work on attention models (Allamanis et al., 2016) to checklists, whereas our work models explicit references to those checklists. Context-dependent language models (Mikolov et al., 2010; Ji et al., 2015; Wang & Cho, 2015) have been proposed to capture long-term dependencies in text, and there is also a large body of work on coreference resolution (Haghighi & Klein, 2010; Wiseman et al., 2016); to the best of our knowledge, we are the first to combine coreference with language modeling. Much effort has been invested in embedding copying mechanisms in neural models (Gülçehre et al., 2016; Gu et al., 2016; Ling et al., 2016). In general, a gating mechanism is employed to combine the softmax over observed words with a pointer network (Vinyals et al., 2015); these gates can be trained either by marginalizing over both outcomes or by using heuristics (e.g. copying low-frequency words). Our models are similar to those proposed in (Ahn et al., 2016; Merity et al., 2016), where the generation of each word can be conditioned on a particular entry in a knowledge list and on previous words. In our work, we describe a model with broader applications, allowing us to condition on databases, lists and dynamic lists.

5 CONCLUSION

We introduce reference-aware language models, which explicitly model the decision of where each token is generated from; the model can also learn this decision by treating it as a latent variable. We demonstrate on three tasks (table-based dialogue modeling, recipe generation and coreference-based language modeling) that our models perform better than attention-based models that do not incorporate this decision explicitly. Several directions remain to explore within our framework. The current evaluation is based on perplexity and BLEU; in task-oriented dialogues, we could also use human evaluation to see whether the model answers users' queries accurately. It would also be interesting to use reinforcement learning to learn the decisions at each step.
BJBPqNGNg
5: Marginally below acceptance threshold
This paper explores 3 language modeling applications with explicit modeling of reference expressions: dialogue, recipe generation and coreference. While these are important tasks for NLP and the authors have done a number of experiments, the paper is limited for a few reasons: 1. The paper is not clearly written and some details are pretty hard to follow. In particular, there are several obvious math errors, such as the missing marginalization sum in Eq (1), and P(z_{i,v}...) = 1 (should be 0 here) on page 5, in the pointer switch section. 2. The major novelty seems to be the 2-dimensional attention over the table and the pointer into the 2-D table. These are more of a customization of existing work to a particular task, with 2-D tables as part of the input to a seq2seq model with both attention and pointer networks. 3. The empirical results are not very conclusive yet, limited by either the relatively small data size or the lack of well-established baselines for some of the new applications (e.g., the recipe generation task). Overall, this paper, as it stands, is more suitable for a workshop than for the main conference.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B1ewdt9xe
ICLR.cc/2017/conference
2017
Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning
["William Lotter", "Gabriel Kreiman", "David Cox"]
While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.
["networks", "video prediction", "unsupervised learning", "structure", "prediction", "future frames", "video sequence", "movement", "objects", "useful"]
ABSTRACT

While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning — leveraging unlabeled examples to learn about the structure of a domain — remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.

1 INTRODUCTION

Many of the most successful current deep learning architectures for vision rely on supervised learning from large sets of labeled training images. While the performance of these networks is undoubtedly impressive, reliance on such large numbers of training examples limits the utility of deep learning in many domains where such datasets are not available. Furthermore, the need for large numbers of labeled examples stands at odds with human visual learning, where one or a few views of an object is often all that is needed to enable robust recognition of that object across a wide range of different views, lightings and contexts. The development of a representation that facilitates such abilities, especially in an unsupervised way, is a largely unsolved problem.

In addition, while computer vision models are typically trained using static images, in the real world, visual objects are rarely experienced as disjoint snapshots. Instead, the visual world is alive with movement, driven both by self-motion of the viewer and the movement of objects within the scene. Many have suggested that temporal experience with objects as they move and undergo transformations can serve as an important signal for learning about the structure of objects (Földiák, 1991; Softky, 1996; Wiskott & Sejnowski, 2002; George & Hawkins, 2005; Palm, 2012; O'Reilly et al., 2014; Agrawal et al., 2015; Goroshin et al., 2015a; Lotter et al., 2015; Mathieu et al., 2016; Srivastava et al., 2015; Wang & Gupta, 2015; Whitney et al., 2016). For instance, Wiskott and Sejnowski proposed "slow feature analysis" as a framework for exploiting temporal structure in video streams (Wiskott & Sejnowski, 2002).
Their approach attempts to build feature representations that extract slowly-varying parameters, such as object identity, from parameters that produce fast changes in the image, such as movement of the object. While approaches that rely on temporal coherence have arguably not yet yielded representations as powerful as those learned by supervised methods, they nonetheless point to the potential of learning useful representations from video (Mobahi et al., 2009; Sun et al., 2014; Goroshin et al., 2015a; Maltoni & Lomonaco, 2015; Wang & Gupta, 2015).

Here, we explore another potential principle for exploiting video for unsupervised learning: prediction of future image frames (Softky, 1996; Palm, 2012; O'Reilly et al., 2014; Goroshin et al., 2015b; Srivastava et al., 2015; Mathieu et al., 2016; Patraucean et al., 2015; Finn et al., 2016; Vondrick et al., 2016). A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed a neural network architecture, which we informally call a "PredNet," that attempts to continually predict the appearance of future video frames, using a deep, recurrent convolutional network with both bottom-up and top-down connections. (Code and video examples can be found at: https://coxlab.github.io/prednet/) Our work here builds on previous work in next-frame video prediction (Ranzato et al., 2014; Michalski et al., 2014; Srivastava et al., 2015; Mathieu et al., 2016; Lotter et al., 2015; Patraucean et al., 2015; Oh et al., 2015; Finn et al., 2016; Xue et al., 2016; Vondrick et al., 2016; Brabandere et al., 2016), but we take particular inspiration from the concept of "predictive coding" from the neuroscience literature (Rao & Ballard, 1999; Rao & Sejnowski, 2000; Lee & Mumford, 2003; Friston, 2005; Summerfield et al., 2006; Egner et al., 2010; Bastos et al., 2012; Spratling, 2012; Chalasani & Principe, 2013; Clark, 2013; O'Reilly et al., 2014; Kanai et al., 2015). Predictive coding posits that the brain is continually making predictions of incoming sensory stimuli (Rao & Ballard, 1999; Friston, 2005). Top-down (and perhaps lateral) connections convey these predictions, which are compared against actual observations to generate an error signal. The error signal is then propagated back up the hierarchy, eventually leading to an update of the predictions.

We demonstrate the effectiveness of our model both for synthetic sequences, where we have access to the underlying generative model and can investigate what the model learns, and for natural videos. Consistent with the idea that prediction requires knowledge of object structure, we find that these networks successfully learn internal representations that are well-suited to subsequent recognition and decoding of latent object parameters (e.g. identity, view, rotation speed, etc.). We also find that our architecture can scale effectively to natural image sequences, by training using car-mounted camera videos. The network is able to successfully learn to predict both the movement of the camera and the movement of objects in the camera's view. Again supporting the notion of prediction as an unsupervised learning rule, the model's learned representation in this setting supports decoding of the current steering angle.

(Figure 1: Predictive Coding Network (PredNet). Left: Illustration of information flow within two layers. Each layer consists of representation neurons ($R_l$), which output a layer-specific prediction at each time step ($\hat{A}_l$), which is compared against a target ($A_l$) (Bengio, 2014) to produce an error term ($E_l$), which is then propagated laterally and vertically in the network. Right: Module operations for the case of video sequences.)
2 THE PREDNET MODEL

The PredNet architecture is diagrammed in Figure 1. The network consists of a series of repeating stacked modules that attempt to make local predictions of the input to the module, which is then subtracted from the actual input and passed along to the next layer. Briefly, each module of the network consists of four basic parts: an input convolutional layer ($A_l$), a recurrent representation layer ($R_l$), a prediction layer ($\hat{A}_l$), and an error representation ($E_l$). The representation layer, $R_l$, is a recurrent convolutional network that generates a prediction, $\hat{A}_l$, of what the layer input, $A_l$, will be on the next frame. The network takes the difference between $A_l$ and $\hat{A}_l$ and outputs an error representation, $E_l$, which is split into separate rectified positive and negative error populations. The error, $E_l$, is then passed forward through a convolutional layer to become the input to the next layer ($A_{l+1}$). The recurrent prediction layer $R_l$ receives a copy of the error signal $E_l$, along with top-down input from the representation layer of the next level of the network ($R_{l+1}$). The organization of the network is such that on the first time step of operation, the "right" side of the network (the $A_l$'s and $E_l$'s) is equivalent to a standard deep convolutional network, while the "left" side (the $R_l$'s) is equivalent to a generative deconvolutional network with local recurrence at each stage.

The architecture described here is inspired by that originally proposed by (Rao & Ballard, 1999), but is formulated in a modern deep learning framework and trained end-to-end using gradient descent, with a loss function implicitly embedded in the network as the firing rates of the error neurons. Our work also shares motivation with the Deep Predictive Coding Networks of Chalasani & Principe (2013); however, their framework is based upon sparse coding and a linear dynamical system with greedy layer-wise training, whereas ours is rooted in convolutional and recurrent neural networks trained with backprop.

While the architecture is general with respect to the kinds of data it models, here we focus on image sequence (video) data. Consider a sequence of images, $x^t$. The target for the lowest layer is set to the actual sequence itself, i.e. $A^t_0 = x^t\ \forall t$. The targets for higher layers, $A^t_l$ for $l > 0$, are computed by a convolution over the error units from the layer below, $E^t_{l-1}$, followed by rectified linear unit (ReLU) activation and max-pooling. For the representation neurons, we specifically use convolutional LSTM units (Hochreiter & Schmidhuber, 1997; Shi et al., 2015). In our setting, the $R^t_l$ hidden state is updated according to $R^{t-1}_l$ and $E^{t-1}_l$, as well as $R^t_{l+1}$, which is first spatially upsampled (nearest-neighbor), due to the pooling present in the feedforward path. The predictions $\hat{A}^t_l$ are made through a convolution of the $R^t_l$ stack followed by a ReLU non-linearity. For the lowest layer, $\hat{A}^t_l$ is also passed through a saturating non-linearity set at the maximum pixel value: $\mathrm{SatLU}(x; p_{max}) := \min(p_{max}, x)$.
Finally, the error response, $E^t_l$, is calculated from the difference between $\hat{A}^t_l$ and $A^t_l$ and is split into ReLU-activated positive and negative prediction errors, which are concatenated along the feature dimension. As discussed in (Rao & Ballard, 1999), although not explicit in their model, the separate error populations are analogous to the existence of on-center, off-surround and off-center, on-surround neurons early in the visual system.

The full set of update rules is listed in Equations (1) to (4). The model is trained to minimize the weighted sum of the activity of the error units. Explicitly, the training loss is formalized in Equation (5), with weighting factors by time, $\lambda_t$, and layer, $\lambda_l$, and where $n_l$ is the number of units in the $l$-th layer. With error units consisting of subtraction followed by ReLU activation, the loss at each layer is equivalent to an L1 error. Although not explored here, other error unit implementations, potentially even probabilistic or adversarial (Goodfellow et al., 2014), could also be used.

$$A^t_l = \begin{cases} x^t & \text{if } l = 0 \\ \mathrm{MAXPOOL}(\mathrm{RELU}(\mathrm{CONV}(E^t_{l-1}))) & \text{if } l > 0 \end{cases} \qquad (1)$$
$$\hat{A}^t_l = \mathrm{RELU}(\mathrm{CONV}(R^t_l)) \qquad (2)$$
$$E^t_l = [\mathrm{RELU}(A^t_l - \hat{A}^t_l);\ \mathrm{RELU}(\hat{A}^t_l - A^t_l)] \qquad (3)$$
$$R^t_l = \mathrm{CONVLSTM}(E^{t-1}_l, R^{t-1}_l, \mathrm{UPSAMPLE}(R^t_{l+1})) \qquad (4)$$
$$L_{train} = \sum_t \lambda_t \sum_l \frac{\lambda_l}{n_l} \sum_{n_l} E^t_l \qquad (5)$$

Algorithm 1: Calculation of PredNet states
Require: $x^t$
1: $A^t_0 \leftarrow x^t$
2: $E^0_l, R^0_l \leftarrow 0$
3: for $t = 1$ to $T$ do
4:   for $l = L$ to $0$ do  // update $R^t_l$ states
5:     if $l = L$ then
6:       $R^t_L = \mathrm{CONVLSTM}(E^{t-1}_L, R^{t-1}_L)$
7:     else
8:       $R^t_l = \mathrm{CONVLSTM}(E^{t-1}_l, R^{t-1}_l, \mathrm{UPSAMPLE}(R^t_{l+1}))$
9:   for $l = 0$ to $L$ do  // update $\hat{A}^t_l$, $A^t_l$, $E^t_l$ states
10:     if $l = 0$ then
11:       $\hat{A}^t_0 = \mathrm{SATLU}(\mathrm{RELU}(\mathrm{CONV}(R^t_0)))$
12:     else
13:       $\hat{A}^t_l = \mathrm{RELU}(\mathrm{CONV}(R^t_l))$
14:     $E^t_l = [\mathrm{RELU}(A^t_l - \hat{A}^t_l);\ \mathrm{RELU}(\hat{A}^t_l - A^t_l)]$
15:     if $l < L$ then
16:       $A^t_{l+1} = \mathrm{MAXPOOL}(\mathrm{CONV}(E^t_l))$

The order in which each unit in the model is updated must also be specified, and our implementation is described in Algorithm 1. Updating of states occurs through two passes: a top-down pass where the $R^t_l$ states are computed, and then a forward pass to calculate the predictions, errors, and higher-level targets. A last detail of note is that $R_l$ and $E_l$ are initialized to zero, which, due to the convolutional nature of the network, means that the initial prediction is spatially uniform.
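The two passes of Algorithm 1 translate fairly directly into code. The PyTorch sketch below is an illustrative reimplementation under assumed channel sizes, with the ConvLSTM reduced to a single gate-producing convolution; it is not the authors' released implementation.

```python
# Minimal sketch of the PredNet update (Eqs. 1-4 / Algorithm 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        # One convolution produces the four LSTM gates from [input, hidden].
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, 1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        return torch.sigmoid(o) * torch.tanh(c), c

class PredNet(nn.Module):
    def __init__(self, ch=(1, 32, 64)):        # A_l/R_l channel sizes (assumed)
        super().__init__()
        self.ch, self.L = ch, len(ch) - 1
        self.cells = nn.ModuleList()            # R_l units (Eq. 4)
        self.pred = nn.ModuleList()             # R_l -> Ahat_l (Eq. 2)
        self.to_a = nn.ModuleList()             # E_l -> A_{l+1} (Eq. 1)
        for l, c in enumerate(ch):
            top = ch[l + 1] if l < self.L else 0
            self.cells.append(ConvLSTMCell(2 * c + top, c))
            self.pred.append(nn.Conv2d(c, c, 3, padding=1))
            if l < self.L:
                self.to_a.append(nn.Conv2d(2 * c, ch[l + 1], 3, padding=1))

    def step(self, x, R, C, E):
        for l in range(self.L, -1, -1):         # top-down pass: update R_l
            inp = E[l] if l == self.L else torch.cat(
                [E[l], F.interpolate(R[l + 1], scale_factor=2)], 1)
            R[l], C[l] = self.cells[l](inp, R[l], C[l])
        A = x
        for l in range(self.L + 1):             # forward pass: Ahat_l, E_l, A_{l+1}
            A_hat = F.relu(self.pred[l](R[l]))
            if l == 0:
                A_hat = torch.clamp(A_hat, max=1.0)   # SatLU at the pixel layer
            E[l] = torch.cat([F.relu(A - A_hat), F.relu(A_hat - A)], 1)
            if l < self.L:
                A = F.max_pool2d(F.relu(self.to_a[l](E[l])), 2)
        return R, C, E

# States are initialized to zero, so the first prediction is spatially uniform:
net, (B, H, W) = PredNet(), (1, 64, 64)
R = [torch.zeros(B, c, H >> l, W >> l) for l, c in enumerate(net.ch)]
C = [r.clone() for r in R]
E = [torch.zeros(B, 2 * c, H >> l, W >> l) for l, c in enumerate(net.ch)]
for t in range(10):
    R, C, E = net.step(torch.rand(B, 1, H, W), R, C, E)
loss = sum(e.mean() for e in E)                 # error-unit activity (cf. Eq. 5)
```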
3 EXPERIMENTS

3.1 RENDERED IMAGE SEQUENCES

To gain an understanding of the representations learned in the proposed framework, we first trained PredNet models using synthetic images, for which we have access to the underlying generative stimulus model and all latent parameters. We created sequences of rendered faces rotating with two degrees of freedom, along the "pan" (out-of-plane) and "roll" (in-plane) axes. The faces start at a random orientation and rotate at a random constant velocity for a total of 10 frames. A different face was sampled for each sequence. The images were processed to be grayscale, with values normalized between 0 and 1, and 64x64 pixels in size. We used 16K sequences for training and 800 for both validation and testing.

Predictions generated by a PredNet model are shown in Figure 2. The model is able to accumulate information over time to make accurate predictions of future frames. Since the representation neurons are initialized to zero, the prediction at the first time step is uniform. On the second time step, with no motion information yet, the prediction is a blurry reconstruction of the first time step. After further iterations, the model adapts to the underlying dynamics to generate predictions that closely match the incoming frame.

For choosing the hyperparameters of the model, we performed a random search and chose the model that had the lowest L1 error in frame prediction, averaged over time steps 2-10, on a validation set. Given this selection criterion, the best-performing models tended to have a loss solely concentrated at the lowest layer (i.e. $\lambda_0 = 1$, $\lambda_{l>0} = 0$), which is the case for the model shown. Using an equal loss at each layer considerably degraded predictions, but enforcing a moderate loss on upper layers, one order of magnitude smaller than on the lowest layer (i.e. $\lambda_0 = 1$, $\lambda_{l>0} = 0.1$), led to only slightly worse predictions, as illustrated in Figure 9 in the Appendix. In all cases, the time loss weight, $\lambda_t$, was set to zero for the first time step and to one for all time steps after. As for the remaining hyperparameters, the model shown has 5 layers with 3x3 filter sizes for all convolutions, max-pooling of stride 2, and numbers of channels per layer, for both $A_l$ and $R_l$ units, of (1, 32, 64, 128, 256). Model weights were optimized using the Adam algorithm (Kingma & Ba, 2014).

(Figure 2: PredNet next-frame predictions for sequences of rendered faces rotating with two degrees of freedom; actual and predicted rows over time. Faces shown were not seen during training.)

model | MSE | SSIM
PredNet L0 | 0.0152 | 0.937
PredNet Lall | 0.0157 | 0.921
CNN-LSTM Enc.-Dec. | 0.0180 | 0.907
Copy Last Frame | 0.125 | 0.631
Table 1: Evaluation of next-frame predictions on the Rotating Faces Dataset (test set).

Quantitative evaluation of generative models is a difficult, unsolved problem (Theis et al., 2016), but here we report prediction error in terms of mean-squared error (MSE) and the Structural Similarity Index Measure (SSIM) (Wang et al., 2004). SSIM is designed to be more correlated with perceptual judgments, and ranges from -1 to 1, with a larger score indicating greater similarity. We compare the PredNet to the trivial solution of copying the last frame, as well as to a control model that shares the overall architecture and training scheme of the PredNet but sends forward the layer-wise activations ($A_l$) rather than the errors ($E_l$). This model thus takes the form of a more traditional encoder-decoder pair, with a CNN encoder that has lateral skip connections to a convolutional LSTM decoder. The performance of all models on the rotating faces dataset is summarized in Table 1, where the scores were calculated as an average over all predictions after the first frame. We report results for the PredNet model trained with loss only on the lowest layer, denoted PredNet L0, as well as the model trained with a 0.1 weight on upper layers, denoted PredNet Lall. Both PredNet models outperformed the baselines on both measures, with the L0 model slightly outperforming Lall, as expected when evaluating pixel-level predictions.
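The evaluation protocol (MSE and SSIM averaged over all predictions after the first frame, plus the copy-last-frame baseline) can be sketched with scikit-image; the arrays here are placeholders.

```python
# Sketch of the next-frame evaluation: per-frame MSE and SSIM.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(actual, predicted):
    """actual, predicted: (T, H, W) grayscale sequences with values in [0, 1]."""
    mse = float(np.mean((actual[1:] - predicted[1:]) ** 2))
    ssim = float(np.mean([structural_similarity(a, p, data_range=1.0)
                          for a, p in zip(actual[1:], predicted[1:])]))
    return mse, ssim

actual = np.random.rand(10, 64, 64)
copy_last = np.concatenate([actual[:1], actual[:-1]])  # trivial baseline
print(evaluate(actual, copy_last))
```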
Synthetic sequences were chosen as the initial training set in order to better understand what is learned in different layers of the model, specifically with respect to the underlying generative model (Kulkarni et al., 2015). The rotating faces were generated using the FaceGen software package (Singular Inversions, Inc.), which internally generates 3D face meshes by a principal component analysis in "face space", derived from a corpus of 3D face scans. Thus, the latent parameters of the image sequences used here consist of the initial pan and roll angles, the pan and roll velocities, and the principal component (PC) values, which control the "identity" of the face. To understand the information contained in the trained models, we decoded the latent parameters from the representation neurons ($R_l$) in different layers, using a ridge regression. The $R_l$ states were taken at the earliest possible informative time steps, which, in our notation, are the second and third steps, respectively, for the static and dynamic parameters. The regression was trained using 4K sequences, with 500 for validation and 1K for testing. For a baseline comparison of the information implicitly embedded in the network architecture, we compare to the decoding accuracies of an untrained network with random initial weights. Note that in this randomly initialized case, we still expect above-chance decoding performance, given past theoretical and empirical work with random networks (Pinto et al., 2009; Jarrett et al., 2009; Saxe et al., 2010).
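The decoding analysis reduces to fitting a ridge regression from $R_l$ activations to each latent parameter. The scikit-learn sketch below uses random stand-ins for the features and the latent variable, with split sizes from the text.

```python
# Sketch of latent-variable decoding with a ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
feats = rng.normal(size=(5500, 2048))    # stand-in for flattened R_l states
latent = rng.normal(size=5500)           # stand-in for e.g. pan velocity
X_tr, y_tr = feats[:4000], latent[:4000]   # 4K train (500 validation follow)
X_te, y_te = feats[4500:], latent[4500:]   # 1K test
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("decoding R^2:", r2_score(y_te, model.predict(X_te)))
```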
Latent variable decoding accuracies for the pan and roll velocities, the initial pan angle, and the first PC are shown in the left panel of Figure 3. There are several interesting patterns. First, the trained models learn a representation that generally permits a better linear decoding of the underlying latent factors than the randomly initialized model, with the most striking difference for the pan rotation speed ($\omega_{pan}$). Second, the most notable difference between the Lall and L0 versions occurs with the first principal component, where the model trained with loss on all layers has a higher decoding accuracy than the model trained with loss only on the lowest layer.

(Figure 3: Information contained in the PredNet representation for rotating faces sequences. Left: Decoding of latent variables using a ridge regression ($\omega_{pan}$: pan (out-of-frame) angular velocity; $\theta_{pan}$: pan angle; PC-1: first principal component of the face; $\omega_{roll}$: roll (in-frame) angular velocity). Right: Orientation-invariant classification of static faces.)

The latent variable decoding analysis suggests that the model learns a representation that may generalize well to other tasks for which it was not explicitly trained. To investigate this further, we assessed the models on a classification task from single, static images. We created a dataset of 25 previously unseen FaceGen faces at 7 pan angles, equally spaced in $[-\pi/2, \pi/2]$, and 8 roll angles, equally spaced in $[0, 2\pi)$. There were therefore 7 × 8 = 56 orientations per identity, which were tested in a cross-validated fashion. A linear SVM to decode face identity was fit on a model's representation of a random subset of orientations and then tested on the remaining angles. For each size of the SVM training set, ranging from 1-40 orientations per face, 50 different random splits were generated, with results averaged over the splits.

For the static face classification task, we compare the PredNets to a standard autoencoder and a variant of the Ladder Network (Valpola, 2015; Rasmus et al., 2015). Both models were constructed to have the same number of layers and channel sizes as the PredNets, as well as a similar alternating convolution/max-pooling, then upsampling/convolution scheme. As both networks are autoencoders, they were trained with a reconstruction loss, on a dataset consisting of all of the individual frames from the sequences used to train the PredNets. For the Ladder Network, which is a denoising autoencoder with lateral skip connections, one must also choose a noise parameter, as well as the relative weights of each layer in the total cost. We tested noise levels ranging from 0 to 0.5 in increments of 0.1, with loss weights either evenly distributed across layers, solely concentrated at the pixel layer, or 1 at the bottom layer and 0.1 at upper layers (analogous to the PredNet Lall model). Shown is the model that performed best for classification, which used 0.4 noise and only pixel weighting. Lastly, as in our architecture, the Ladder Network has lateral and top-down streams that are combined by a combinator function. Inspired by (Pezeshki et al., 2015), where a learnable MLP improved results, and to be consistent in comparing to the PredNet, we used a purely convolutional combinator. Given the distributed representation in both networks, we decoded from a concatenation of the feature representations at all layers, except the pixel layer. For the PredNets, the representation units were used and features were extracted after processing one input frame.
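The cross-validated identity classification can be sketched as follows; the feature vectors are random stand-ins, while the 25 faces, 56 orientations and 50 splits follow the text.

```python
# Sketch of the orientation-invariant face classification protocol.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_faces, n_orient, dim = 25, 56, 512
feats = rng.normal(size=(n_faces, n_orient, dim))  # model representations

def accuracy(n_train, n_splits=50):
    accs = []
    for _ in range(n_splits):                      # random orientation splits
        order = rng.permutation(n_orient)
        tr, te = order[:n_train], order[n_train:]
        clf = LinearSVC().fit(feats[:, tr].reshape(-1, dim),
                              np.repeat(np.arange(n_faces), len(tr)))
        accs.append(clf.score(feats[:, te].reshape(-1, dim),
                              np.repeat(np.arange(n_faces), len(te))))
    return float(np.mean(accs))

for n_train in (1, 5, 20, 40):                     # orientations per face
    print(n_train, accuracy(n_train))
```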
Face classification accuracies using the representations learned by the L0 and Lall PredNets, a standard autoencoder, and a Ladder Network variant are shown in the right panel of Figure 3. Both PredNets compare favorably to the other models at all sizes of the training set, suggesting that they learn a representation that is relatively tolerant to object transformations. Similar to the decoding accuracy of the first principal component, the PredNet Lall model actually outperformed the L0 variant. Altogether, these results suggest that predictive training with the PredNet can be a viable alternative to other models trained with a more traditional reconstructive or denoising loss, and that the relative layer loss weightings (the $\lambda_l$'s) may be important for the particular task at hand.

3.2 NATURAL IMAGE SEQUENCES

We next sought to test the PredNet architecture on complex, real-world sequences. As a testbed, we chose car-mounted camera videos, since these videos span a wide range of settings and are characterized by rich temporal dynamics, including both self-motion of the vehicle and the motion of other objects in the scene (Agrawal et al., 2015). Models were trained using the raw videos from the KITTI dataset (Geiger et al., 2013), which were captured by a roof-mounted camera on a car driving around an urban environment in Germany. Sequences of 10 frames were sampled from the "City", "Residential", and "Road" categories, with 57 recording sessions used for training and 4 used for validation. Frames were center-cropped and downsampled to 128x160 pixels. In total, the training set consisted of roughly 41K frames.

A random hyperparameter search, with model selection based on the validation set, resulted in a 4-layer model with 3x3 convolutions and layer channel sizes of (3, 48, 96, 192). Models were again trained with Adam (Kingma & Ba, 2014), using a loss either solely computed on the lowest layer (L0) or with a weight of 1 on the lowest layer and 0.1 on the upper layers (Lall). Adam parameters were initially set to their default values ($\alpha = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$), with the learning rate, $\alpha$, decreasing by a factor of 10 halfway through training. To assess that the network had indeed learned a robust representation, we tested on the CalTech Pedestrian dataset (Dollár et al., 2009), which consists of videos from a dashboard-mounted camera on a vehicle driving around Los Angeles. Testing sequences were made to match the frame rate of the KITTI dataset and again cropped to 128x160 pixels. Quantitative evaluation was performed on the entire CalTech test partition, split into sequences of 10 frames.

Sample PredNet predictions (for the L0 model) on the CalTech Pedestrian dataset are shown in Figure 4, and example videos can be found at https://coxlab.github.io/prednet/. The model is able to make fairly accurate predictions in a wide range of scenarios. In the top sequence of Fig. 4, a car is passing in the opposite direction, and the model, while not perfect, is able to predict its trajectory, as well as fill in the ground it leaves behind. Similarly, in Sequence 3, the model is able to predict the motion of a vehicle completing a left turn. Sequences 2 and 5 illustrate that the PredNet can judge its own movement, as it predicts the appearance of shadows and a stationary vehicle as they approach. The model makes reasonable predictions even in difficult scenarios, such as when the camera-mounted vehicle is turning. In Sequence 4, the model predicts the position of a tree as the vehicle turns onto a road. The turning sequences also further illustrate the model's ability to "fill in", as it is able to extrapolate sky and tree textures as unseen regions come into view. As an additional control, we show a sequence at the bottom of Fig. 4 where the input has been temporally scrambled; in this case, the model generates blurry frames, which mostly just resemble the previous frame. Finally, although the PredNet shown here was trained to predict one frame ahead, it is also possible to predict multiple frames into the future by feeding back predictions as the inputs and recursively iterating. We explore this in Appendix 5.3; a small sketch of this recursion is given below, after the quantitative comparisons.

(Figure 4: PredNet predictions for car-cam videos: eight sequences, each with a ground-truth row above a prediction row; the last sequence was temporally scrambled. The model was trained on the KITTI dataset, and the sequences shown are from the CalTech Pedestrian dataset.)

model | MSE | SSIM
PredNet L0 | 3.13e-3 | 0.884
PredNet Lall | 3.33e-3 | 0.875
CNN-LSTM Enc.-Dec. | 3.67e-3 | 0.865
Copy Last Frame | 7.95e-3 | 0.762
Table 2: Evaluation of next-frame predictions on the CalTech Pedestrian Dataset.

Quantitatively, the PredNet models again outperformed the CNN-LSTM Encoder-Decoder. To ensure that the difference in performance was not simply due to the choice of hyperparameters, we trained models with four other sets of hyperparameters, sampled from the initial random search over the number of layers, filter sizes, and number of filters per layer. For each of the four additional sets, the PredNet L0 had the best performance, with an average error reduction of 14.7% and 14.9% for MSE and SSIM, respectively, compared to the CNN-LSTM Encoder-Decoder. More details, as well as a thorough investigation of systematically simplified models on the continuum between the PredNet and the CNN-LSTM Encoder-Decoder, can be found in Appendix 5.1. Briefly, the elementwise subtraction operation in the PredNet seems to be beneficial, and the nonlinearity of positive/negative splitting also adds modest improvements.
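The multi-frame extrapolation mentioned above can be sketched as a simple recursion. Here `net.step_with_prediction` and `init_states` are assumed helpers: a variant of the step function from the earlier PredNet sketch that also returns the pixel-layer prediction $\hat{A}_0$, and a zero-state initializer.

```python
# Sketch of recursive multi-frame prediction: ground-truth frames are fed
# while available, after which the model's own predictions are fed back.
def extrapolate(net, frames, n_future):
    """frames: list of context frames; returns len(frames)+n_future predictions."""
    R, C, E = init_states(net, frames[0].shape)   # zero states, as in Alg. 1
    preds, x = [], frames[0]
    for t in range(len(frames) + n_future):
        x_hat, (R, C, E) = net.step_with_prediction(x, R, C, E)
        preds.append(x_hat)
        # Use the ground truth while it exists, then recurse on predictions.
        x = frames[t + 1] if t + 1 < len(frames) else x_hat
    return preds
```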
Finally, while these experiments measure the benefits of each component of our model, we also directly compare against recent work in a similar car-cam setting, by reporting results on a 64x64 pixel, grayscale car-cam dataset released by Brabandere et al. (2016). Our PredNet model outperforms the model by Brabandere et al. (2016) by 29%; details can be found in Appendix 5.2. Also in Appendix 5.2, we present results for the Human3.6M (Ionescu et al., 2014) dataset, as reported by Finn et al. (2016). Without re-optimizing hyperparameters, our model underperforms the concurrently developed DNA model by Finn et al. (2016), but outperforms the model by Mathieu et al. (2016).

To test the implicit encoding of latent parameters in the car-cam setting, we used the internal representation in the PredNet to estimate the car's steering angle (Bojarski et al., 2016; Biasini et al., 2016). We used a dataset released by Comma.ai (Biasini et al., 2016) consisting of 11 videos totaling about 7 hours of mostly highway driving. We first trained networks for next-frame prediction and then fit a linear fully-connected layer on the learned representation to estimate the steering angle, using an MSE loss. We again concatenate the $R_l$ representations at all layers, but first spatially average-pool the lower layers to match the spatial size of the upper layer, in order to reduce dimensionality. Steering angle estimation results, using the representation at the 10th time step, are shown in Figure 5. Given just 1K labeled training examples, a simple linear readout on the PredNet L0 representation explains 74% of the variance in the steering angle and outperforms the CNN-LSTM Enc.-Dec. by 35%. With 25K labeled training examples, the PredNet L0 has an MSE (in degrees²) of 2.14. As a point of reference, a CNN model designed to predict the steering angle (Biasini et al., 2016), albeit from a single frame instead of multiple frames, achieves an MSE of ~4 when trained end-to-end using 396K labeled training examples. Details of this analysis can be found in Appendix 8. Interestingly, in this task, the PredNet Lall model actually underperformed the L0 model and slightly underperformed the CNN-LSTM Enc.-Dec., again suggesting that the $\lambda_l$ parameters can affect the representation learned, and that different values may be preferable for different end tasks. Nonetheless, the readout from the Lall model still explained a substantial proportion of the steering angle variance and strongly outperformed the random initial weights. Overall, this analysis again demonstrates that a representation learned through prediction, particularly with the PredNet model with appropriate hyperparameters, can contain useful information about underlying latent parameters.

(Figure 5: Steering angle estimation accuracy on the Comma.ai dataset (Biasini et al., 2016). Left: Example steering angle curve with model estimations for a segment of the test set; decoding was performed using a fully-connected readout on the PredNet representation, trained with 25K labeled examples. The PredNet representation was trained for next-frame prediction on the Comma.ai training set. Right: Mean-squared error of steering angle estimation.)
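The steering-angle readout described above (spatial average pooling of lower layers to the top layer's size, concatenation, and a linear layer trained with MSE) can be sketched in PyTorch; all tensors are placeholders, with the layer sizes taken from the text.

```python
# Sketch of the linear steering-angle readout on pooled R_l features.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pooled_features(R):
    """R: list of R_l tensors (B, C_l, H_l, W_l); pool all to the top size."""
    h, w = R[-1].shape[-2:]
    return torch.cat([F.adaptive_avg_pool2d(r, (h, w)).flatten(1) for r in R], 1)

# Placeholder R_l states for the 4-layer car-cam model (3, 48, 96, 192):
R = [torch.randn(4, c, 128 >> l, 160 >> l) for l, c in enumerate((3, 48, 96, 192))]
feats = pooled_features(R)
readout = nn.Linear(feats.shape[1], 1)     # linear fully-connected readout
angles = torch.randn(4)                    # placeholder steering targets
loss = F.mse_loss(readout(feats).squeeze(1), angles)
```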
4 DISCUSSION

Above, we have demonstrated a predictive-coding-inspired architecture that is able to predict future frames in both synthetic and natural image sequences. Importantly, we have shown that learning to predict how an object or scene will move in a future frame confers advantages in decoding latent parameters (such as viewing angle) that give rise to an object's appearance, and can improve recognition performance. More generally, we argue that prediction can serve as a powerful unsupervised learning signal, since accurately predicting future frames requires at least an implicit model of the objects that make up the scene and of how they are allowed to move. Developing a deeper understanding of the nature of the representations learned by the networks, and extending the architecture, for instance by allowing sampling, are important future directions.

ACKNOWLEDGMENTS

We would like to thank Rasmus Berg Palm for fruitful discussions and early brainstorming. We would also like to thank the developers of Keras (Chollet, 2016). This work was supported by IARPA (contract D16PC00002), the National Science Foundation (NSF IIS 1409097), and the Center for Brains, Minds and Machines (CBMM, NSF STC award CCF-1231216).
r1RNR8-Vg
Review
8: Top 50% of accepted papers, clear accept
Learning about the physical structure and semantics of the world from video (without supervision) is a very hot area in computer vision and machine learning. In this paper, the authors investigate how the prediction of future image frames (inherently unsupervised) can help to deduce an object's structure and its properties (in this case single-object pose, category, and steering angle, after a supervised linear readout step). I enjoyed reading this paper: it is clear, interesting, and proposes an original network architecture (PredNet) for video frame prediction that has produced promising results on both synthetic and natural images. Moreover, the extensive experimental evaluation and analysis the authors provide puts it on solid ground to which others can compare. The weaknesses: - the link to predictive coding should be better explained in the paper if it is to be used as a motivation for the PredNet model. - any idea that the proposed method is learning an implicit `model' of the `objects' that make up the `scene' is vague and far-fetched, but it sounds great. Minor comment: Next to the number of labeled training examples (Fig. 5), it would be interesting to see how much unsupervised training data was used to train your representations.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B1ewdt9xe
ICLR.cc/2017/conference
2017
Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning
["William Lotter", "Gabriel Kreiman", "David Cox"]
While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.
["networks", "video prediction", "unsupervised learning", "structure", "prediction", "future frames", "video sequence", "movement", "objects", "useful"]
ABSTRACT
While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning — leveraging unlabeled examples to learn about the structure of a domain — remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.

1 INTRODUCTION
Many of the most successful current deep learning architectures for vision rely on supervised learning from large sets of labeled training images. While the performance of these networks is undoubtedly impressive, reliance on such large numbers of training examples limits the utility of deep learning in many domains where such datasets are not available. Furthermore, the need for large numbers of labeled examples stands at odds with human visual learning, where one or a few views of an object is often all that is needed to enable robust recognition of that object across a wide range of different views, lightings and contexts. The development of a representation that facilitates such abilities, especially in an unsupervised way, is a largely unsolved problem.
In addition, while computer vision models are typically trained using static images, in the real world, visual objects are rarely experienced as disjoint snapshots. Instead, the visual world is alive with movement, driven both by self-motion of the viewer and the movement of objects within the scene. Many have suggested that temporal experience with objects as they move and undergo transformations can serve as an important signal for learning about the structure of objects (Földiák, 1991; Softky, 1996; Wiskott & Sejnowski, 2002; George & Hawkins, 2005; Palm, 2012; O'Reilly et al., 2014; Agrawal et al., 2015; Goroshin et al., 2015a; Lotter et al., 2015; Mathieu et al., 2016; Srivastava et al., 2015; Wang & Gupta, 2015; Whitney et al., 2016). For instance, Wiskott and Sejnowski proposed "slow feature analysis" as a framework for exploiting temporal structure in video streams (Wiskott & Sejnowski, 2002).
Their approach attempts to build feature representations that extract slowly-varying parameters, such as object identity, from parameters that produce fast changes in the image, such as movement of the object. While approaches that rely on temporal coherence have arguably not yet yielded representations as powerful as those learned by supervised methods, they nonetheless point to the potential of learning useful representations from video (Mobahi et al., 2009; Sun et al., 2014; Goroshin et al., 2015a; Maltoni & Lomonaco, 2015; Wang & Gupta, 2015).
Here, we explore another potential principle for exploiting video for unsupervised learning: prediction of future image frames (Softky, 1996; Palm, 2012; O'Reilly et al., 2014; Goroshin et al., 2015b; Srivastava et al., 2015; Mathieu et al., 2016; Patraucean et al., 2015; Finn et al., 2016; Vondrick et al., 2016). A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed a neural network architecture, which we informally call a "PredNet," that attempts to continually predict the appearance of future video frames, using a deep, recurrent convolutional network with both bottom-up and top-down connections. Our work here builds on previous work in next-frame video prediction (Ranzato et al., 2014; Michalski et al., 2014; Srivastava et al., 2015; Mathieu et al., 2016; Lotter et al., 2015; Patraucean et al., 2015; Oh et al., 2015; Finn et al., 2016; Xue et al., 2016; Vondrick et al., 2016; Brabandere et al., 2016), but we take particular inspiration from the concept of "predictive coding" from the neuroscience literature (Rao & Ballard, 1999; Rao & Sejnowski, 2000; Lee & Mumford, 2003; Friston, 2005; Summerfield et al., 2006; Egner et al., 2010; Bastos et al., 2012; Spratling, 2012; Chalasani & Principe, 2013; Clark, 2013; O'Reilly et al., 2014; Kanai et al., 2015). Predictive coding posits that the brain is continually making predictions of incoming sensory stimuli (Rao & Ballard, 1999; Friston, 2005). Top-down (and perhaps lateral) connections convey these predictions, which are compared against actual observations to generate an error signal. The error signal is then propagated back up the hierarchy, eventually leading to an update of the predictions.
We demonstrate the effectiveness of our model for both synthetic sequences, where we have access to the underlying generative model and can investigate what the model learns, as well as natural videos. Consistent with the idea that prediction requires knowledge of object structure, we find that these networks successfully learn internal representations that are well-suited to subsequent recognition and decoding of latent object parameters (e.g. identity, view, rotation speed, etc.). We also find that our architecture can scale effectively to natural image sequences, by training using car-mounted camera videos. The network is able to successfully learn to predict both the movement of the camera and the movement of objects in the camera's view. Again supporting the notion of prediction as an unsupervised learning rule, the model's learned representation in this setting supports decoding of the current steering angle.
Code and video examples can be found at: https://coxlab.github.io/prednet/

Figure 1: Predictive Coding Network (PredNet).
Left: Illustration of information flow within two layers. Each layer consists of representation neurons ($R_l$), which output a layer-specific prediction at each time step ($\hat{A}_l$), which is compared against a target ($A_l$) (Bengio, 2014) to produce an error term ($E_l$), which is then propagated laterally and vertically in the network. Right: Module operations for the case of video sequences.

2 THE PREDNET MODEL
The PredNet architecture is diagrammed in Figure 1. The network consists of a series of repeating stacked modules that attempt to make local predictions of the input to the module, which is then subtracted from the actual input and passed along to the next layer. Briefly, each module of the network consists of four basic parts: an input convolutional layer ($A_l$), a recurrent representation layer ($R_l$), a prediction layer ($\hat{A}_l$), and an error representation ($E_l$). The representation layer, $R_l$, is a recurrent convolutional network that generates a prediction, $\hat{A}_l$, of what the layer input, $A_l$, will be on the next frame. The network takes the difference between $A_l$ and $\hat{A}_l$ and outputs an error representation, $E_l$, which is split into separate rectified positive and negative error populations. The error, $E_l$, is then passed forward through a convolutional layer to become the input to the next layer ($A_{l+1}$). The recurrent prediction layer $R_l$ receives a copy of the error signal $E_l$, along with top-down input from the representation layer of the next level of the network ($R_{l+1}$). The organization of the network is such that on the first time step of operation, the "right" side of the network ($A_l$'s and $E_l$'s) is equivalent to a standard deep convolutional network. Meanwhile, the "left" side of the network (the $R_l$'s) is equivalent to a generative deconvolutional network with local recurrence at each stage. The architecture described here is inspired by that originally proposed by (Rao & Ballard, 1999), but is formulated in a modern deep learning framework and trained end-to-end using gradient descent, with a loss function implicitly embedded in the network as the firing rates of the error neurons. Our work also shares motivation with the Deep Predictive Coding Networks of Chalasani & Principe (2013); however, their framework is based upon sparse coding and a linear dynamical system with greedy layer-wise training, whereas ours is rooted in convolutional and recurrent neural networks trained with backprop.
While the architecture is general with respect to the kinds of data it models, here we focus on image sequence (video) data. Consider a sequence of images, $x_t$. The target for the lowest layer is set to the actual sequence itself, i.e. $A^t_0 = x_t \;\forall t$. The targets for higher layers, $A^t_l$ for $l > 0$, are computed by a convolution over the error units from the layer below, $E^t_{l-1}$, followed by rectified linear unit (ReLU) activation and max-pooling. For the representation neurons, we specifically use convolutional LSTM units (Hochreiter & Schmidhuber, 1997; Shi et al., 2015). In our setting, the $R^t_l$ hidden state is updated according to $R^{t-1}_l$, $E^{t-1}_l$, as well as $R^t_{l+1}$, which is first spatially upsampled (nearest-neighbor), due to the pooling present in the feedforward path. The predictions, $\hat{A}^t_l$, are made through a convolution of the $R^t_l$ stack followed by a ReLU non-linearity. For the lowest layer, $\hat{A}^t_l$ is also passed through a saturating non-linearity set at the maximum pixel value: $\mathrm{SatLU}(x; p_{max}) := \min(p_{max}, x)$.
Finally, the error response, $E^t_l$, is calculated from the difference between $\hat{A}^t_l$ and $A^t_l$ and is split into ReLU-activated positive and negative prediction errors, which are concatenated along the feature dimension. As discussed in (Rao & Ballard, 1999), although not explicit in their model, the separate error populations are analogous to the existence of on-center, off-surround and off-center, on-surround neurons early in the visual system.
The full set of update rules are listed in Equations (1) to (4). The model is trained to minimize the weighted sum of the activity of the error units. Explicitly, the training loss is formalized in Equation (5) with weighting factors by time, $\lambda_t$, and layer, $\lambda_l$, and where $n_l$ is the number of units in the $l$th layer. With error units consisting of subtraction followed by ReLU activation, the loss at each layer is equivalent to an L1 error. Although not explored here, other error unit implementations, potentially even probabilistic or adversarial (Goodfellow et al., 2014), could also be used.

$A^t_l = \begin{cases} x_t & \text{if } l = 0 \\ \mathrm{MaxPool}(\mathrm{ReLU}(\mathrm{Conv}(E^t_{l-1}))) & l > 0 \end{cases}$  (1)
$\hat{A}^t_l = \mathrm{ReLU}(\mathrm{Conv}(R^t_l))$  (2)
$E^t_l = [\mathrm{ReLU}(A^t_l - \hat{A}^t_l); \mathrm{ReLU}(\hat{A}^t_l - A^t_l)]$  (3)
$R^t_l = \mathrm{ConvLSTM}(E^{t-1}_l, R^{t-1}_l, \mathrm{Upsample}(R^t_{l+1}))$  (4)
$L_{train} = \sum_t \lambda_t \sum_l \frac{\lambda_l}{n_l} \sum_{n_l} E^t_l$  (5)

Algorithm 1 Calculation of PredNet states
Require: $x_t$
1: $A^t_0 \leftarrow x_t$
2: $E^0_l, R^0_l \leftarrow 0$
3: for $t = 1$ to $T$ do
4:   for $l = L$ to $0$ do  // Update $R^t_l$ states
5:     if $l = L$ then
6:       $R^t_L = \mathrm{ConvLSTM}(E^{t-1}_L, R^{t-1}_L)$
7:     else
8:       $R^t_l = \mathrm{ConvLSTM}(E^{t-1}_l, R^{t-1}_l, \mathrm{Upsample}(R^t_{l+1}))$
9:   for $l = 0$ to $L$ do  // Update $\hat{A}^t_l$, $A^t_l$, $E^t_l$ states
10:    if $l = 0$ then
11:      $\hat{A}^t_0 = \mathrm{SatLU}(\mathrm{ReLU}(\mathrm{Conv}(R^t_0)))$
12:    else
13:      $\hat{A}^t_l = \mathrm{ReLU}(\mathrm{Conv}(R^t_l))$
14:    $E^t_l = [\mathrm{ReLU}(A^t_l - \hat{A}^t_l); \mathrm{ReLU}(\hat{A}^t_l - A^t_l)]$
15:    if $l < L$ then
16:      $A^t_{l+1} = \mathrm{MaxPool}(\mathrm{Conv}(E^t_l))$

The order in which each unit in the model is updated must also be specified, and our implementation is described in Algorithm 1. Updating of states occurs through two passes: a top-down pass where the $R^t_l$ states are computed, and then a forward pass to calculate the predictions, errors, and higher-level targets. A last detail of note is that $R_l$ and $E_l$ are initialized to zero, which, due to the convolutional nature of the network, means that the initial prediction is spatially uniform.

3 EXPERIMENTS
3.1 RENDERED IMAGE SEQUENCES
To gain an understanding of the representations learned in the proposed framework, we first trained PredNet models using synthetic images, for which we have access to the underlying generative stimulus model and all latent parameters. We created sequences of rendered faces rotating with two degrees of freedom, along the "pan" (out-of-plane) and "roll" (in-plane) axes. The faces start at a random orientation and rotate at a random constant velocity for a total of 10 frames. A different face was sampled for each sequence. The images were processed to be grayscale, with values normalized between 0 and 1, and 64x64 pixels in size. We used 16K sequences for training and 800 for both validation and testing.
Predictions generated by a PredNet model are shown in Figure 2. The model is able to accumulate information over time to make accurate predictions of future frames. Since the representation neurons are initialized to zero, the prediction at the first time step is uniform. On the second time step, with no motion information yet, the prediction is a blurry reconstruction of the first time step. After further iterations, the model adapts to the underlying dynamics to generate predictions that closely match the incoming frame.
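As a concrete illustration of the double-pass update schedule in Algorithm 1, the following is a minimal NumPy sketch of a single PredNet time step. It is an assumption-laden simplification: conv, conv_lstm, max_pool, and upsample are hypothetical stand-ins for the real convolutional operators, and states are handled as plain arrays.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sat_lu(x, p_max=1.0):
    # SatLU(x; p_max) := min(p_max, x), used only at the pixel layer
    return np.minimum(x, p_max)

def prednet_step(x_t, R, E, conv, conv_lstm, max_pool, upsample, lam):
    """One PredNet time step (Algorithm 1). R and E are per-layer lists of
    state arrays; conv, conv_lstm, max_pool, upsample are hypothetical
    stand-ins for the learned (convolutional) operators."""
    L = len(R) - 1
    # Top-down pass: update representation units from layer L down to 0.
    for l in range(L, -1, -1):
        top_down = upsample(R[l + 1]) if l < L else None
        R[l] = conv_lstm(l, E[l], R[l], top_down)
    # Bottom-up pass: predictions, split errors, and higher-level targets.
    A, loss = x_t, 0.0
    for l in range(L + 1):
        A_hat = relu(conv(l, R[l]))
        if l == 0:
            A_hat = sat_lu(A_hat)  # clip pixel predictions at p_max
        # Rectified positive and negative error populations, Eq. (3).
        E[l] = np.concatenate([relu(A - A_hat), relu(A_hat - A)], axis=-1)
        loss += lam[l] * E[l].mean()  # lambda_l / n_l weighting, Eq. (5)
        if l < L:
            A = max_pool(relu(conv(l, E[l])))  # target for layer l+1, Eq. (1)
    return R, E, loss

Iterating this step over a sequence and summing the per-step losses with the $\lambda_t$ weights recovers the training objective of Equation (5).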
For choosing the hyperparameters of the model, we performed a random search and chose the model that had the lowest L1 error in frame prediction averaged over time steps 2-10 on a validation set. Given this selection criterion, the best performing models tended to have a loss solely concentrated at the lowest layer (i.e. $\lambda_0 = 1$, $\lambda_{l>0} = 0$), which is the case for the model shown. Using an equal loss at each layer considerably degraded predictions, but enforcing a moderate loss on upper layers that was one order of magnitude smaller than the lowest layer (i.e. $\lambda_0 = 1$, $\lambda_{l>0} = 0.1$) led to only slightly worse predictions, as illustrated in Figure 9 in the Appendix. In all cases, the time loss weight, $\lambda_t$, was set to zero for the first time step and then one for all time steps after. As for the remaining hyperparameters, the model shown has 5 layers with 3x3 filter sizes for all convolutions, max-pooling of stride 2, and number of channels per layer, for both $A_l$ and $R_l$ units, of (1, 32, 64, 128, 256). Model weights were optimized using the Adam algorithm (Kingma & Ba, 2014).

Figure 2: PredNet next-frame predictions for sequences of rendered faces rotating with two degrees of freedom. Faces shown were not seen during training.

Table 1: Evaluation of next-frame predictions on Rotating Faces Dataset (test set).

                      MSE      SSIM
PredNet L_0           0.0152   0.937
PredNet L_all         0.0157   0.921
CNN-LSTM Enc.-Dec.    0.0180   0.907
Copy Last Frame       0.125    0.631

Quantitative evaluation of generative models is a difficult, unsolved problem (Theis et al., 2016), but here we report prediction error in terms of mean-squared error (MSE) and the Structural Similarity Index Measure (SSIM) (Wang et al., 2004). SSIM is designed to be more correlated with perceptual judgments, and ranges from -1 to 1, with a larger score indicating greater similarity. We compare the PredNet to the trivial solution of copying the last frame, as well as a control model that shares the overall architecture and training scheme of the PredNet, but that sends forward the layer-wise activations ($A_l$) rather than the errors ($E_l$). This model thus takes the form of a more traditional encoder-decoder pair, with a CNN encoder that has lateral skip connections to a convolutional LSTM decoder. The performance of all models on the rotating faces dataset is summarized in Table 1, where the scores were calculated as an average over all predictions after the first frame. We report results for the PredNet model trained with loss only on the lowest layer, denoted as PredNet L_0, as well as the model trained with a 0.1 weight on upper layers, denoted as PredNet L_all. Both PredNet models outperformed the baselines on both measures, with the L_0 model slightly outperforming L_all, as expected for evaluating the pixel-level predictions.
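To make the evaluation protocol concrete, here is a small sketch of how per-sequence MSE and SSIM scores of the kind reported in Table 1 could be computed, averaged over all predictions after the first frame. It assumes scikit-image is available (structural_similarity lives under skimage.metrics in recent versions) and that the frames are grayscale arrays in [0, 1]; the copy-last-frame baseline is included for comparison.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def eval_predictions(actual, predicted):
    """actual, predicted: (T, H, W) grayscale frames in [0, 1].
    Scores are averaged over all predictions after the first frame."""
    mse = np.mean((actual[1:] - predicted[1:]) ** 2)
    ssim_score = np.mean([ssim(a, p, data_range=1.0)
                          for a, p in zip(actual[1:], predicted[1:])])
    return mse, ssim_score

def copy_last_frame_baseline(actual):
    # Trivial baseline: "predict" each frame by copying the previous one.
    predicted = np.concatenate([actual[:1], actual[:-1]])
    return eval_predictions(actual, predicted)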
Synthetic sequences were chosen as the initial training set in order to better understand what is learned in different layers of the model, specifically with respect to the underlying generative model (Kulkarni et al., 2015). The rotating faces were generated using the FaceGen software package (Singular Inversions, Inc.), which internally generates 3D face meshes by a principal component analysis in "face space", derived from a corpus of 3D face scans. Thus, the latent parameters of the image sequences used here consist of the initial pan and roll angles, the pan and roll velocities, and the principal component (PC) values, which control the "identity" of the face. To understand the information contained in the trained models, we decoded the latent parameters from the representation neurons ($R_l$) in different layers, using a ridge regression. The $R_l$ states were taken at the earliest possible informative time steps, which, in our notation, are the second and third steps, respectively, for the static and dynamic parameters. The regression was trained using 4K sequences with 500 for validation and 1K for testing. For a baseline comparison of the information implicitly embedded in the network architecture, we compare to the decoding accuracies of an untrained network with random initial weights. Note that in this randomly initialized case, we still expect above-chance decoding performance, given past theoretical and empirical work with random networks (Pinto et al., 2009; Jarrett et al., 2009; Saxe et al., 2010).
Latent variable decoding accuracies of the pan and roll velocities, pan initial angle, and first PC are shown in the left panel of Figure 3. There are several interesting patterns. First, the trained models learn a representation that generally permits a better linear decoding of the underlying latent factors than the randomly initialized model, with the most striking difference in terms of the pan rotation speed. Second, the most notable difference between the L_all and L_0 versions occurs with the first principal component, where the model trained with loss on all layers has a higher decoding accuracy than the model trained with loss only on the lowest layer.

Figure 3: Information contained in PredNet representation for rotating faces sequences. Left: Decoding of latent variables using a ridge regression (pan (out-of-frame) angular velocity, pan angle, PC-1: first principal component of face, roll (in-frame) angular velocity). Right: Orientation-invariant classification of static faces.

The latent variable decoding analysis suggests that the model learns a representation that may generalize well to other tasks for which it was not explicitly trained. To investigate this further, we assessed the models in a classification task from single, static images. We created a dataset of 25 previously unseen FaceGen faces at 7 pan angles, equally spaced between $[-\pi/2, \pi/2]$, and 8 roll angles, equally spaced between $[0, 2\pi)$. There were therefore 7·8 = 56 orientations per identity, which were tested in a cross-validated fashion. A linear SVM to decode face identity was fit on a model's representation of a random subset of orientations and then tested on the remaining angles. For each size of the SVM training set, ranging from 1 to 40 orientations per face, 50 different random splits were generated, with results averaged over the splits.
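A sketch of both readout analyses described above, under the assumption that PredNet features have already been extracted into NumPy arrays (R_feats for the ridge decoding of a latent such as pan velocity, and per-face face_feats for the orientation-split SVM); scikit-learn stands in here for whatever solvers were actually used.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

def decode_latent(R_train, y_train, R_test, y_test, alpha=1.0):
    # R_*: (n_sequences, n_features) flattened R_l states; y_*: latent values.
    model = Ridge(alpha=alpha).fit(R_train, y_train)
    return model.score(R_test, y_test)  # R^2 decoding accuracy

def face_classification(face_feats, n_train_orient, n_splits=50, seed=0):
    # face_feats: (n_faces, n_orientations, n_features). Fit on a random
    # subset of orientations per face, test on the held-out orientations.
    n_faces, n_orient, _ = face_feats.shape
    rng = np.random.RandomState(seed)
    accs = []
    for _ in range(n_splits):
        order = rng.permutation(n_orient)
        tr, te = order[:n_train_orient], order[n_train_orient:]
        X_tr = face_feats[:, tr].reshape(n_faces * len(tr), -1)
        y_tr = np.repeat(np.arange(n_faces), len(tr))
        X_te = face_feats[:, te].reshape(n_faces * len(te), -1)
        y_te = np.repeat(np.arange(n_faces), len(te))
        accs.append(LinearSVC().fit(X_tr, y_tr).score(X_te, y_te))
    return np.mean(accs)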
For the static face classification task, we compare the PredNets to a standard autoencoder and a variant of the Ladder Network (Valpola, 2015; Rasmus et al., 2015). Both models were constructed to have the same number of layers and channel sizes as the PredNets, as well as a similar alternating convolution/max-pooling, then upsampling/convolution scheme. As both networks are autoencoders, they were trained with a reconstruction loss, with a dataset consisting of all of the individual frames from the sequences used to train the PredNets. For the Ladder Network, which is a denoising autoencoder with lateral skip connections, one must also choose a noise parameter, as well as the relative weights of each layer in the total cost. We tested noise levels ranging from 0 to 0.5 in increments of 0.1, with loss weights either evenly distributed across layers, solely concentrated at the pixel layer, or 1 at the bottom layer and 0.1 at upper layers (analogous to the PredNet L_all model). Shown is the model that performed best for classification, which consisted of 0.4 noise and only pixel weighting. Lastly, as in our architecture, the Ladder Network has lateral and top-down streams that are combined by a combinator function. Inspired by (Pezeshki et al., 2015), where a learnable MLP improved results, and to be consistent in comparing to the PredNet, we used a purely convolutional combinator. Given the distributed representation in both networks, we decoded from a concatenation of the feature representations at all layers, except the pixel layer. For the PredNets, the representation units were used and features were extracted after processing one input frame.
Face classification accuracies using the representations learned by the L_0 and L_all PredNets, a standard autoencoder, and a Ladder Network variant are shown in the right panel of Figure 3. Both PredNets compare favorably to the other models at all sizes of the training set, suggesting they learn a representation that is relatively tolerant to object transformations. Similar to the decoding accuracy of the first principal component, the PredNet L_all model actually outperformed the L_0 variant. Altogether, these results suggest that predictive training with the PredNet can be a viable alternative to other models trained with a more traditional reconstructive or denoising loss, and that the relative layer loss weightings ($\lambda_l$'s) may be important for the particular task at hand.

3.2 NATURAL IMAGE SEQUENCES
We next sought to test the PredNet architecture on complex, real-world sequences. As a testbed, we chose car-mounted camera videos, since these videos span across a wide range of settings and are characterized by rich temporal dynamics, including both self-motion of the vehicle and the motion of other objects in the scene (Agrawal et al., 2015). Models were trained using the raw videos from the KITTI dataset (Geiger et al., 2013), which were captured by a roof-mounted camera on a car driving around an urban environment in Germany. Sequences of 10 frames were sampled from the "City", "Residential", and "Road" categories, with 57 recording sessions used for training and 4 used for validation. Frames were center-cropped and downsampled to 128x160 pixels. In total, the training set consisted of roughly 41K frames.
A random hyperparameter search, with model selection based on the validation set, resulted in a 4-layer model with 3x3 convolutions and layer channel sizes of (3, 48, 96, 192). Models were again trained with Adam (Kingma & Ba, 2014) using a loss either solely computed on the lowest layer (L_0) or with a weight of 1 on the lowest layer and 0.1 on the upper layers (L_all). Adam parameters were initially set to their default values ($\alpha = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$) with the learning rate, $\alpha$, decreasing by a factor of 10 halfway through training.
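The halfway learning-rate drop can be expressed as a standard Keras callback. A minimal sketch, assuming n_epochs is the total training length (a hypothetical value; the exact schedule granularity is not specified here):

from keras.callbacks import LearningRateScheduler

n_epochs = 150  # hypothetical total training length

def lr_schedule(epoch):
    # Adam starts at alpha = 0.001 and drops by 10x halfway through training.
    return 0.001 if epoch < n_epochs // 2 else 0.0001

callbacks = [LearningRateScheduler(lr_schedule)]
# model.fit(..., epochs=n_epochs, callbacks=callbacks)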
To assess whether the network had indeed learned a robust representation, we tested on the CalTech Pedestrian dataset (Dollár et al., 2009), which consists of videos from a dashboard-mounted camera on a vehicle driving around Los Angeles. Testing sequences were made to match the frame rate of the KITTI dataset and again cropped to 128x160 pixels. Quantitative evaluation was performed on the entire CalTech test partition, split into sequences of 10 frames.
Sample PredNet predictions (for the L_0 model) on the CalTech Pedestrian dataset are shown in Figure 4, and example videos can be found at https://coxlab.github.io/prednet/. The model is able to make fairly accurate predictions in a wide range of scenarios. In the top sequence of Fig. 4, a car is passing in the opposite direction, and the model, while not perfect, is able to predict its trajectory, as well as fill in the ground it leaves behind. Similarly in Sequence 3, the model is able to predict the motion of a vehicle completing a left turn. Sequences 2 and 5 illustrate that the PredNet can judge its own movement, as it predicts the appearance of shadows and a stationary vehicle as they approach. The model makes reasonable predictions even in difficult scenarios, such as when the camera-mounted vehicle is turning. In Sequence 4, the model predicts the position of a tree, as the vehicle turns onto a road. The turning sequences also further illustrate the model's ability to "fill-in", as it is able to extrapolate sky and tree textures as unseen regions come into view. As an additional control, we show a sequence at the bottom of Fig. 4, where the input has been temporally scrambled. In this case, the model generates blurry frames, which mostly just resemble the previous frame. Finally, although the PredNet shown here was trained to predict one frame ahead, it is also possible to predict multiple frames into the future, by feeding back predictions as the inputs and recursively iterating; a sketch of this procedure is given below, and we explore it in Appendix 5.3.

Figure 4: PredNet predictions for car-cam videos. The first rows contain ground truth and the second rows contain predictions. The sequence below the red line was temporally scrambled. The model was trained on the KITTI dataset and sequences shown are from the CalTech Pedestrian dataset.

Table 2: Evaluation of next-frame predictions on CalTech Pedestrian Dataset.

                      MSE            SSIM
PredNet L_0           3.13 x 10^-3   0.884
PredNet L_all         3.33 x 10^-3   0.875
CNN-LSTM Enc.-Dec.    3.67 x 10^-3   0.865
Copy Last Frame       7.95 x 10^-3   0.762

Quantitatively, the PredNet models again outperformed the CNN-LSTM Encoder-Decoder. To ensure that the difference in performance was not simply because of the choice of hyperparameters, we trained models with four other sets of hyperparameters, which were sampled from the initial random search over the number of layers, filter sizes, and number of filters per layer. For each of the four additional sets, the PredNet L_0 had the best performance, with an average error reduction of 14.7% and 14.9% for MSE and SSIM, respectively, compared to the CNN-LSTM Encoder-Decoder. More details, as well as a thorough investigation of systematically simplified models on the continuum between the PredNet and the CNN-LSTM Encoder-Decoder, can be found in Appendix 5.1. Briefly, the elementwise subtraction operation in the PredNet seems to be beneficial, and the nonlinearity of positive/negative splitting also adds modest improvements.
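As referenced above, multi-frame extrapolation only requires feeding the model's own output back in as input. A minimal sketch, assuming a hypothetical single-step interface model_step(frame, state) -> (prediction, state) to a trained PredNet:

def extrapolate(model_step, frames, n_future, state=None):
    """Prime the model on observed frames, then recursively feed its own
    predictions back in to extrapolate n_future additional frames."""
    for frame in frames:                   # priming on real inputs
        prediction, state = model_step(frame, state)
    extrapolated = []
    for _ in range(n_future):              # recursive rollout
        extrapolated.append(prediction)
        prediction, state = model_step(prediction, state)
    return extrapolated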
Finally, while these experiments measure the benefits of each component of our model, we also directly compare against recent work in a similar car-cam setting, by reporting results on a 64x64 pixel, grayscale car-cam dataset released by Brabandere et al. (2016). Our PredNet model outperforms the model by Brabandere et al. (2016) by 29%. Details can be found in Appendix 5.2. Also in Appendix 5.2, we present results for the Human3.6M (Ionescu et al., 2014) dataset, as reported by Finn et al. (2016). Without re-optimizing hyperparameters, our model underperforms the concurrently developed DNA model by Finn et al. (2016), but outperforms the model by Mathieu et al. (2016).
To test the implicit encoding of latent parameters in the car-cam setting, we used the internal representation in the PredNet to estimate the car's steering angle (Bojarski et al., 2016; Biasini et al., 2016). We used a dataset released by Comma.ai (Biasini et al., 2016) consisting of 11 videos totaling about 7 hours of mostly highway driving. We first trained networks for next-frame prediction and then fit a linear fully-connected layer on the learned representation to estimate the steering angle, using an MSE loss. We again concatenate the $R_l$ representation at all layers, but first spatially average pool lower layers to match the spatial size of the upper layer, in order to reduce dimensionality. Steering angle estimation results, using the representation on the 10th time step, are shown in Figure 5. Given just 1K labeled training examples, a simple linear readout on the PredNet L_0 representation explains 74% of the variance in the steering angle and outperforms the CNN-LSTM Enc.-Dec. by 35%. With 25K labeled training examples, the PredNet L_0 has an MSE (in degrees^2) of 2.14. As a point of reference, a CNN model designed to predict the steering angle (Biasini et al., 2016), albeit from a single frame instead of multiple frames, achieves an MSE of ~4 when trained end-to-end using 396K labeled training examples. Details of this analysis can be found in Appendix 8. Interestingly, in this task, the PredNet L_all model actually underperformed the L_0 model and slightly underperformed the CNN-LSTM Enc.-Dec., again suggesting that the $\lambda_l$ parameter can affect the representation learned, and different values may be preferable in different end tasks. Nonetheless, the readout from the L_all model still explained a substantial proportion of the steering angle variance and strongly outperformed the random initial weights. Overall, this analysis again demonstrates that a representation learned through prediction, and particularly with the PredNet model with appropriate hyperparameters, can contain useful information about underlying latent parameters.

Figure 5: Steering angle estimation accuracy on the Comma.ai dataset (Biasini et al., 2016). Left: Example steering angle curve with model estimations for a segment in the test set. Decoding was performed using a fully-connected readout on the PredNet representation trained with 25K labeled training examples. PredNet representation was trained for next-frame prediction on Comma.ai training set. Right: Mean-squared error of steering angle estimation.
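A sketch of the steering-angle readout: spatially pool each layer's $R_l$ representation, concatenate, and fit a linear layer with an MSE objective. As a simplification, this sketch collapses each layer's spatial dimensions entirely rather than pooling lower layers to the upper layer's size, and a scikit-learn linear regression stands in for the fully-connected readout; R_layers is assumed to hold per-layer feature maps taken at the 10th time step.

import numpy as np
from sklearn.linear_model import LinearRegression

def steering_features(R_layers):
    """R_layers: list of per-layer arrays (n_samples, H_l, W_l, C_l).
    Each layer is spatially average-pooled (here, collapsed entirely)
    before concatenation to reduce dimensionality."""
    pooled = [R.mean(axis=(1, 2)) for R in R_layers]  # (n_samples, C_l) each
    return np.concatenate(pooled, axis=1)

def fit_steering_readout(R_layers, angles):
    X = steering_features(R_layers)
    readout = LinearRegression().fit(X, angles)  # linear readout, MSE loss
    return readout, readout.score(X, angles)     # R^2 on the fitted data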
4 DISCUSSION
Above, we have demonstrated a predictive coding inspired architecture that is able to predict future frames in both synthetic and natural image sequences. Importantly, we have shown that learning to predict how an object or scene will move in a future frame confers advantages in decoding latent parameters (such as viewing angle) that give rise to an object's appearance, and can improve recognition performance. More generally, we argue that prediction can serve as a powerful unsupervised learning signal, since accurately predicting future frames requires at least an implicit model of the objects that make up the scene and how they are allowed to move. Developing a deeper understanding of the nature of the representations learned by the networks, and extending the architecture, by, for instance, allowing sampling, are important future directions.

ACKNOWLEDGMENTS
We would like to thank Rasmus Berg Palm for fruitful discussions and early brainstorming. We would also like to thank the developers of Keras (Chollet, 2016). This work was supported by IARPA (contract D16PC00002), the National Science Foundation (NSF IIS 1409097), and the Center for Brains, Minds and Machines (CBMM, NSF STC award CCF-1231216).
H1_DwlWEl
Good paper, nice example of using the idea of feeding forward error signals.
8: Top 50% of accepted papers, clear accept
Paper Summary This paper proposes an unsupervised learning model in which the network predicts what its state would look like at the next time step (at the input layer and potentially other layers). When these states are observed, an error signal is computed by comparing the predictions and the observations. This error signal is fed back into the model. The authors show that this model is able to make good predictions on a toy dataset of rotating 3D faces as well as on natural videos. They also show that these features help perform supervised tasks. Strengths - The model is an interesting embodiment of the idea of predictive coding implemented using an end-to-end backpropable recurrent neural network architecture. - The idea of feeding forward an error signal is perhaps not used as widely as it could be, and this work shows a compelling example of using it. - Strong empirical results and relevant comparisons show that the model works well. - The authors present a detailed ablative analysis of the proposed model. Weaknesses - The model (esp. in Fig 1) is presented as a generalized predictive model where next-step predictions are made at each layer. However, as discovered by running the experiments, only the predictions at the input layer are the ones that actually matter, and the optimal choice seems to be to turn off the error signal from the higher layers. While the authors intend to address this in future work, I think this point merits some more discussion in the current work, given the way this model is presented. - The network currently lacks stochasticity and does not model the future as a multimodal distribution (however, this is mentioned as potential future work). Quality The experiments are well-designed and a detailed analysis is provided in the appendix. Clarity The paper is well-written and easy to follow. Originality Some deep models have previously been proposed that use predictive coding. However, the proposed model is most probably novel in the way it feeds back the error signal and implements the entire model as a single differentiable network. Significance This paper will be of wide interest to the growing set of researchers working in unsupervised learning of time series. This helps draw attention to predictive coding as an important learning paradigm. Overall Good paper with detailed and well-designed experiments. The idea of feeding forward the error signal is not being used as much as it could be in our community. This work helps to draw the community's attention to this idea.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
B1ewdt9xe
ICLR.cc/2017/conference
2017
Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning
["William Lotter", "Gabriel Kreiman", "David Cox"]
While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.
["networks", "video prediction", "unsupervised learning", "structure", "prediction", "future frames", "video sequence", "movement", "objects", "useful"]
ABSTRACTWhile great strides have been made in using deep learning algorithms to solvesupervised learning tasks, the problem of unsupervised learning — leveraging un-labeled examples to learn about the structure of a domain — remains a difficultunsolved challenge. Here, we explore prediction of future frames in a video se-quence as an unsupervised learning rule for learning about the structure of thevisual world. We describe a predictive neural network (“PredNet”) architecturethat is inspired by the concept of “predictive coding” from the neuroscience lit-erature. These networks learn to predict future frames in a video sequence, witheach layer in the network making local predictions and only forwarding deviationsfrom those predictions to subsequent network layers. We show that these networksare able to robustly learn to predict the movement of synthetic (rendered) objects,and that in doing so, the networks learn internal representations that are usefulfor decoding latent object parameters (e.g. pose) that support object recognitionwith fewer training views. We also show that these networks can scale to com-plex natural image streams (car-mounted camera videos), capturing key aspectsof both egocentric movement and the movement of objects in the visual scene,and the representation learned in this setting is useful for estimating the steer-ing angle. Altogether, these results suggest that prediction represents a powerfulframework for unsupervised learning, allowing for implicit learning of object andscene structure.1 I NTRODUCTIONMany of the most successful current deep learning architectures for vision rely on supervised learn-ing from large sets of labeled training images. While the performance of these networks is un-doubtedly impressive, reliance on such large numbers of training examples limits the utility of deeplearning in many domains where such datasets are not available. Furthermore, the need for largenumbers of labeled examples stands at odds with human visual learning, where one or a few viewsof an object is often all that is needed to enable robust recognition of that object across a wide rangeof different views, lightings and contexts. The development of a representation that facilitates suchabilities, especially in an unsupervised way, is a largely unsolved problem.In addition, while computer vision models are typically trained using static images, in the real world,visual objects are rarely experienced as disjoint snapshots. Instead, the visual world is alive withmovement, driven both by self-motion of the viewer and the movement of objects within the scene.Many have suggested that temporal experience with objects as they move and undergo transforma-tions can serve as an important signal for learning about the structure of objects (F ̈oldi ́ak, 1991;Softky, 1996; Wiskott & Sejnowski, 2002; George & Hawkins, 2005; Palm, 2012; O’Reilly et al.,2014; Agrawal et al., 2015; Goroshin et al., 2015a; Lotter et al., 2015; Mathieu et al., 2016; Srivas-tava et al., 2015; Wang & Gupta, 2015; Whitney et al., 2016). For instance, Wiskott and Sejnowskiproposed “slow feature analysis” as a framework for exploiting temporal structure in video streams(Wiskott & Sejnowski, 2002). 
Their approach attempts to build feature representations that extractCode and video examples can be found at: https://coxlab.github.io/prednet/1Published as a conference paper at ICLR 2017slowly-varying parameters, such as object identity, from parameters that produce fast changes in theimage, such as movement of the object. While approaches that rely on temporal coherence havearguably not yet yielded representations as powerful as those learned by supervised methods, theynonetheless point to the potential of learning useful representations from video (Mohabi et al., 2009;Sun et al., 2014; Goroshin et al., 2015a; Maltoni & Lomonaco, 2015; Wang & Gupta, 2015).Here, we explore another potential principle for exploiting video for unsupervised learning: pre-diction of future image frames (Softky, 1996; Palm, 2012; O’Reilly et al., 2014; Goroshin et al.,2015b; Srivastava et al., 2015; Mathieu et al., 2016; Patraucean et al., 2015; Finn et al., 2016; V on-drick et al., 2016). A key insight here is that in order to be able to predict how the visual worldwill change over time, an agent must have at least some implicit model of object structure and thepossible transformations objects can undergo. To this end, we have designed a neural network archi-tecture, which we informally call a “PredNet,” that attempts to continually predict the appearanceof future video frames, using a deep, recurrent convolutional network with both bottom-up and top-down connections. Our work here builds on previous work in next-frame video prediction (Ranzatoet al., 2014; Michalski et al., 2014; Srivastava et al., 2015; Mathieu et al., 2016; Lotter et al., 2015;Patraucean et al., 2015; Oh et al., 2015; Finn et al., 2016; Xue et al., 2016; V ondrick et al., 2016;Brabandere et al., 2016), but we take particular inspiration from the concept of “predictive coding”from the neuroscience literature (Rao & Ballard, 1999; Rao & Sejnowski, 2000; Lee & Mumford,2003; Friston, 2005; Summerfield et al., 2006; Egner et al., 2010; Bastos et al., 2012; Spratling,2012; Chalasani & Principe, 2013; Clark, 2013; O’Reilly et al., 2014; Kanai et al., 2015). Predictivecoding posits that the brain is continually making predictions of incoming sensory stimuli (Rao &Ballard, 1999; Friston, 2005). Top-down (and perhaps lateral) connections convey these predictions,which are compared against actual observations to generate an error signal. The error signal is thenpropagated back up the hierarchy, eventually leading to an update of the predictions.We demonstrate the effectiveness of our model for both synthetic sequences, where we have accessto the underlying generative model and can investigate what the model learns, as well as naturalvideos. Consistent with the idea that prediction requires knowledge of object structure, we findthat these networks successfully learn internal representations that are well-suited to subsequentrecognition and decoding of latent object parameters (e.g. identity, view, rotation speed, etc.). Wealso find that our architecture can scale effectively to natural image sequences, by training usingcar-mounted camera videos. The network is able to successfully learn to predict both the movementof the camera and the movement of objects in the camera’s view. Again supporting the notionof prediction as an unsupervised learning rule, the model’s learned representation in this settingsupports decoding of the current steering angle.––inputoutputRepresentationPredictionTargetErrorFigure 1: Predictive Coding Network (PredNet). 
Left: Illustration of information flow within twolayers. Each layer consists of representation neurons ( Rl), which output a layer-specific prediction ateach time step ( ^Al), which is compared against a target ( Al) (Bengio, 2014) to produce an error term(El), which is then propagated laterally and vertically in the network. Right: Module operations forcase of video sequences.2Published as a conference paper at ICLR 20172 T HEPREDNETMODELThe PredNet architecture is diagrammed in Figure 1. The network consists of a series of repeatingstacked modules that attempt to make local predictions of the input to the module, which is thensubtracted from the actual input and passed along to the next layer. Briefly, each module of thenetwork consists of four basic parts: an input convolutional layer ( Al), a recurrent representationlayer (Rl), a prediction layer ( ^Al), and an error representation ( El). The representation layer, Rl, isa recurrent convolutional network that generates a prediction, ^Al, of what the layer input, Al, willbe on the next frame. The network takes the difference between Aland^Aland outputs an errorrepresentation, El, which is split into separate rectified positive and negative error populations. Theerror,El, is then passed forward through a convolutional layer to become the input to the next layer(Al+1). The recurrent prediction layer Rlreceives a copy of the error signal El, along with top-downinput from the representation layer of the next level of the network ( Rl+1). The organization of thenetwork is such that on the first time step of operation, the “right” side of the network ( Al’s andEl’s)is equivalent to a standard deep convolutional network. Meanwhile, the “left” side of the network(theRl’s) is equivalent to a generative deconvolutional network with local recurrence at each stage.The architecture described here is inspired by that originally proposed by (Rao & Ballard, 1999), butis formulated in a modern deep learning framework and trained end-to-end using gradient descent,with a loss function implicitly embedded in the network as the firing rates of the error neurons. Ourwork also shares motivation with the Deep Predictive Coding Networks of Chalasani & Principe(2013); however, their framework is based upon sparse coding and a linear dynamical system withgreedy layer-wise training, whereas ours is rooted in convolutional and recurrent neural networkstrained with backprop.While the architecture is general with respect to the kinds of data it models, here we focus on imagesequence (video) data. Consider a sequence of images, xt. The target for the lowest layer is setto the the actual sequence itself, i.e. At0=xt8t. The targets for higher layers, Atlforl >0, arecomputed by a convolution over the error units from the layer below, Etl1, followed by rectifiedlinear unit (ReLU) activation and max-pooling. For the representation neurons, we specificallyuse convolutional LSTM units (Hochreiter & Schmidhuber, 1997; Shi et al., 2015). In our setting,theRtlhidden state is updated according to Rt1l,Et1l, as well as Rtl+1, which is first spatiallyupsampled (nearest-neighbor), due to the pooling present in the feedforward path. The predictions,^Atlare made through a convolution of the Rtlstack followed by a ReLU non-linearity. For thelowest layer, ^Atlis also passed through a saturating non-linearity set at the maximum pixel value:SatLU (x;pmax):= min(pmax;x). 
Finally, the error response, Etl, is calculated from the differencebetween ^AtlandAtland is split into ReLU-activated positive and negative prediction errors, whichare concatenated along the feature dimension. As discussed in (Rao & Ballard, 1999), although notexplicit in their model, the separate error populations are analogous to the existence of on-center,off-surround and off-center, on-surround neurons early in the visual system.The full set of update rules are listed in Equations (1) to (4). The model is trained to minimizethe weighted sum of the activity of the error units. Explicitly, the training loss is formalized inEquation 5 with weighting factors by time, t, and layer,l, and where nlis the number of units inthelth layer. With error units consisting of subtraction followed by ReLU activation, the loss at eachlayer is equivalent to an L1 error. Although not explored here, other error unit implementations,potentially even probabilistic or adversarial (Goodfellow et al., 2014), could also be used.Atl=xt ifl= 0MAXPOOL(RELU(CONV(Etl1)))l>0(1)^Atl=RELU(CONV(Rtl)) (2)Etl= [RELU(Atl^Atl);RELU(^AtlAtl)] (3)Rtl=CONV LSTM (Et1l;Rt1l;UPSAMPLE (Rtl+1)) (4)Ltrain =XttXllnlXnlEtl (5)3Published as a conference paper at ICLR 2017Algorithm 1 Calculation of PredNet statesRequire:xt1:At0 xt2:E0l;R0l 03:fort= 1toTdo4: forl=Lto0do .UpdateRtlstates5: ifl=Lthen6: RtL=CONV LSTM(Et1L;Rt1L)7: else8: Rtl=CONV LSTM(Et1l;Rt1l;UPSAMPLE (Rtl+1))9: forl= 0toLdo .Update ^Atl;Atl;Etlstates10: ifl= 0then11: ^At0=SATLU(R ELU(C ONV(Rt0)))12: else13: ^Atl=RELU(C ONV(Rtl))14:Etl=[RELU(Atl^Atl); R ELU(^AtlAlt)]15: ifl<L then16: Atl+1=MAXPOOL(CONV(Elt))The order in which each unit in the model is updated must also be specified, and our implementa-tion is described in Algorithm 1. Updating of states occurs through two passes: a top-down passwhere theRtlstates are computed, and then a forward pass to calculate the predictions, errors, andhigher level targets. A last detail of note is that RlandElare initialized to zero, which, due to theconvolutional nature of the network, means that the initial prediction is spatially uniform.3 E XPERIMENTS3.1 R ENDERED IMAGE SEQUENCESTo gain an understanding of the representations learned in the proposed framework, we first trainedPredNet models using synthetic images, for which we have access to the underlying generativestimulus model and all latent parameters. We created sequences of rendered faces rotating with twodegrees of freedom, along the “pan” (out-of-plane) and “roll” (in-plane) axes. The faces start at arandom orientation and rotate at a random constant velocity for a total of 10frames. A different facewas sampled for each sequence. The images were processed to be grayscale, with values normalizedbetween 0and1, and 64x64pixels in size. We used 16K sequences for training and 800for bothvalidation and testing.Predictions generated by a PredNet model are shown in Figure 2. The model is able to accumulateinformation over time to make accurate predictions of future frames. Since the representation neu-rons are initialized to zero, the prediction at the first time step is uniform. On the second time step,with no motion information yet, the prediction is a blurry reconstruction of the first time step. 
Afterfurther iterations, the model adapts to the underlying dynamics to generate predictions that closelymatch the incoming frame.For choosing the hyperparameters of the model, we performed a random search and chose the modelthat had the lowest L1 error in frame prediction averaged over time steps 2-10on a validation set.Given this selection criteria, the best performing models tended to have a loss solely concentrated atthe lowest layer (i.e. 0= 1,l>0= 0), which is the case for the model shown. Using an equal lossat each layer considerably degraded predictions, but enforcing a moderate loss on upper layers thatwas one magnitude smaller than the lowest layer (i.e. 0= 1,l>0= 0:1) led to only slightly worsepredictions, as illustrated in Figure 9 in the Appendix. In all cases, the time loss weight, t, was set tozero for the first time step and then one for all time steps after. As for the remaining hyperparameters,the model shown has 5layers with 3x3filter sizes for all convolutions, max-pooling of stride 2, andnumber of channels per layer, for both AlandRlunits, of (1;32;64;128;256) . Model weights wereoptimized using the Adam algorithm (Kingma & Ba, 2014).4Published as a conference paper at ICLR 2017ActualPredictedtime→ActualPredictedActualPredictedFigure 2: PredNet next-frame predictions for sequences of rendered faces rotating with two degreesof freedom. Faces shown were not seen during training.Table 1: Evaluation of next-frame predictionson Rotating Faces Dataset (test set).MSE SSIMPredNetL0 0.0152 0.937PredNetLall 0.0157 0.921CNN-LSTM Enc.-Dec. 0.0180 0.907Copy Last Frame 0.125 0.631Quantitative evaluation of generative models is adifficult, unsolved problem (Theis et al., 2016), buthere we report prediction error in terms of mean-squared error (MSE) and the Structural SimilarityIndex Measure (SSIM) (Wang et al., 2004). SSIMis designed to be more correlated with perceptualjudgments, and ranges from 1and1, with a largerscore indicating greater similarity. We compare thePredNet to the trivial solution of copying the lastframe, as well as a control model that shares the overall architecture and training scheme of thePredNet, but that sends forward the layer-wise activations ( Al) rather than the errors ( El). Thismodel thus takes the form of a more traditional encoder-decoder pair, with a CNN encoder that haslateral skip connections to a convolutional LSTM decoder. The performance of all models on therotating faces dataset is summarized in Table 1, where the scores were calculated as an average overall predictions after the first frame. We report results for the PredNet model trained with loss onlyon the lowest layer, denoted as PredNet L0, as well as the model trained with an 0:1weight onupper layers, denoted as PredNet Lall. Both PredNet models outperformed the baselines on bothmeasures, with the L0model slightly outperforming Lall, as expected for evaluating the pixel-levelpredictions.Synthetic sequences were chosen as the initial training set in order to better understand what islearned in different layers of the model, specifically with respect to the underlying generative model(Kulkarni et al., 2015). The rotating faces were generated using the FaceGen software package (Sin-gular Inversions, Inc.), which internally generates 3D face meshes by a principal component analysisin “face space”, derived from a corpus of 3D face scans. 
Thus, the latent parameters of the imagesequences used here consist of the initial pan and roll angles, the pan and roll velocities, and the prin-cipal component (PC) values, which control the “identity” of the face. To understand the informationcontained in the trained models, we decoded the latent parameters from the representation neurons(Rl) in different layers, using a ridge regression. The Rlstates were taken at the earliest possibleinformative time steps, which, in the our notation, are the second and third steps, respectively, forthe static and dynamic parameters. The regression was trained using 4Ksequences with 500forvalidation and 1Kfor testing. For a baseline comparison of the information implicitly embeddedin the network architecture, we compare to the decoding accuracies of an untrained network withrandom initial weights. Note that in this randomly initialized case, we still expect above-chance de-coding performance, given past theoretical and empirical work with random networks (Pinto et al.,2009; Jarrett et al., 2009; Saxe et al., 2010).5Published as a conference paper at ICLR 2017Latent variable decoding accuracies of the pan and roll velocities, pan initial angle, and first PC areshown in the left panel of Figure 3. There are several interesting patterns. First, the trained modelslearn a representation that generally permits a better linear decoding of the underlying latent factorsthan the randomly initialized model, with the most striking difference in terms of the the pan rotationspeed (pan). Second, the most notable difference between the LallandL0versions occurs withthe first principle component, where the model trained with loss on all layers has a higher decodingaccuracy than the model trained with loss only on the lowest layer.Figure 3: Information contained in PredNet representation for rotating faces sequences. Left: De-coding of latent variables using a ridge regression ( pan: pan (out-of-frame) angular velocity, pan:pan angle, PC-1: first principal component of face, roll: roll (in-frame) angular velocity). Right:Orientation-invariant classification of static faces.The latent variable decoding analysis suggests that the model learns a representation that may gen-eralize well to other tasks for which it was not explicitly trained. To investigate this further, weassessed the models in a classification task from single, static images. We created a dataset of 25previously unseen FaceGen faces at 7pan angles, equally spaced between [2;2], and 8roll angles,equally spaced between [0;2). There were therefore 78 = 56 orientations per identity, whichwere tested in a cross-validated fashion. A linear SVM to decode face identity was fit on a model’srepresentation of a random subset of orientations and then tested on the remaining angles. For eachsize of the SVM training set, ranging from 1-40orientations per face, 50different random splitswere generated, with results averaged over the splits.For the static face classification task, we compare the PredNets to a standard autoencoder and avariant of the Ladder Network (Valpola, 2015; Rasmus et al., 2015). Both models were constructedto have the same number of layers and channel sizes as the PredNets, as well as a similar alternat-ing convolution/max-pooling, then upsampling/convolution scheme. As both networks are autoen-coders, they were trained with a reconstruction loss, with a dataset consisting of all of the individualframes from the sequences used to train the PredNets. 
For the Ladder Network, which is a denoising autoencoder with lateral skip connections, one must also choose a noise parameter, as well as the relative weights of each layer in the total cost. We tested noise levels ranging from 0 to 0.5 in increments of 0.1, with loss weights either evenly distributed across layers, solely concentrated at the pixel layer, or 1 at the bottom layer and 0.1 at upper layers (analogous to the PredNet L_all model). Shown is the model that performed best for classification, which consisted of 0.4 noise and only pixel weighting. Lastly, as in our architecture, the Ladder Network has lateral and top-down streams that are combined by a combinator function. Inspired by Pezeshki et al. (2015), where a learnable MLP improved results, and to be consistent in comparing to the PredNet, we used a purely convolutional combinator. Given the distributed representation in both networks, we decoded from a concatenation of the feature representations at all layers, except the pixel layer. For the PredNets, the representation units were used and features were extracted after processing one input frame.

Face classification accuracies using the representations learned by the L_0 and L_all PredNets, a standard autoencoder, and a Ladder Network variant are shown in the right panel of Figure 3. Both PredNets compare favorably to the other models at all sizes of the training set, suggesting they learn a representation that is relatively tolerant to object transformations. Similar to the decoding accuracy of the first principal component, the PredNet L_all model actually outperformed the L_0 variant. Altogether, these results suggest that predictive training with the PredNet can be a viable alternative to other models trained with a more traditional reconstructive or denoising loss, and that the relative layer loss weightings (λ_l's) may be important for the particular task at hand.

3.2 NATURAL IMAGE SEQUENCES

We next sought to test the PredNet architecture on complex, real-world sequences. As a testbed, we chose car-mounted camera videos, since these videos span a wide range of settings and are characterized by rich temporal dynamics, including both self-motion of the vehicle and the motion of other objects in the scene (Agrawal et al., 2015). Models were trained using the raw videos from the KITTI dataset (Geiger et al., 2013), which were captured by a roof-mounted camera on a car driving around an urban environment in Germany. Sequences of 10 frames were sampled from the "City", "Residential", and "Road" categories, with 57 recording sessions used for training and 4 used for validation. Frames were center-cropped and downsampled to 128x160 pixels. In total, the training set consisted of roughly 41K frames.

A random hyperparameter search, with model selection based on the validation set, resulted in a 4 layer model with 3x3 convolutions and layer channel sizes of (3, 48, 96, 192). Models were again trained with Adam (Kingma & Ba, 2014) using a loss either solely computed on the lowest layer (L_0) or with a weight of 1 on the lowest layer and 0.1 on the upper layers (L_all). Adam parameters were initially set to their default values (α = 0.001, β_1 = 0.9, β_2 = 0.999) with the learning rate, α, decreasing by a factor of 10 halfway through training.
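That halfway learning-rate drop can be expressed as a simple schedule. A minimal sketch follows; the epoch count is a hypothetical value, since the text only states that the rate drops by 10x halfway through training.

```python
def adam_lr_schedule(epoch, n_epochs=150, base_lr=1e-3):
    """Adam's default rate for the first half of training, then 10x smaller.
    n_epochs = 150 is an assumed value, not taken from the paper."""
    return base_lr if epoch < n_epochs // 2 else base_lr / 10.0

# With Keras (which the authors acknowledge using), this could be attached as
# callbacks=[keras.callbacks.LearningRateScheduler(adam_lr_schedule)].
```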
To assess whether the network had indeed learned a robust representation, we tested on the CalTech Pedestrian dataset (Dollár et al., 2009), which consists of videos from a dashboard-mounted camera on a vehicle driving around Los Angeles. Testing sequences were made to match the frame rate of the KITTI dataset and again cropped to 128x160 pixels. Quantitative evaluation was performed on the entire CalTech test partition, split into sequences of 10 frames.

Sample PredNet predictions (for the L_0 model) on the CalTech Pedestrian dataset are shown in Figure 4, and example videos can be found at https://coxlab.github.io/prednet/. The model is able to make fairly accurate predictions in a wide range of scenarios. In the top sequence of Fig. 4, a car is passing in the opposite direction, and the model, while not perfect, is able to predict its trajectory, as well as fill in the ground it leaves behind. Similarly in Sequence 3, the model is able to predict the motion of a vehicle completing a left turn. Sequences 2 and 5 illustrate that the PredNet can judge its own movement, as it predicts the appearance of shadows and a stationary vehicle as they approach. The model makes reasonable predictions even in difficult scenarios, such as when the camera-mounted vehicle is turning. In Sequence 4, the model predicts the position of a tree, as the vehicle turns onto a road. The turning sequences also further illustrate the model's ability to "fill in", as it is able to extrapolate sky and tree textures as unseen regions come into view. As an additional control, we show a sequence at the bottom of Fig. 4, where the input has been temporally scrambled. In this case, the model generates blurry frames, which mostly just resemble the previous frame. Finally, although the PredNet shown here was trained to predict one frame ahead, it is also possible to predict multiple frames into the future, by feeding back predictions as the inputs and recursively iterating. We explore this in Appendix 5.3.

Table 2: Evaluation of Next-Frame Predictions on CalTech Pedestrian Dataset.
Model | MSE | SSIM
PredNet L_0 | 3.13 x 10^-3 | 0.884
PredNet L_all | 3.33 x 10^-3 | 0.875
CNN-LSTM Enc.-Dec. | 3.67 x 10^-3 | 0.865
Copy Last Frame | 7.95 x 10^-3 | 0.762

Quantitatively, the PredNet models again outperformed the CNN-LSTM Encoder-Decoder. To ensure that the difference in performance was not simply because of the choice of hyperparameters, we trained models with four other sets of hyperparameters, which were sampled from the initial random search over the number of layers, filter sizes, and number of filters per layer. For each of the four additional sets, the PredNet L_0 had the best performance, with an average error reduction of 14.7% and 14.9% for MSE and SSIM, respectively, compared to the CNN-LSTM Encoder-Decoder.

Figure 4: PredNet predictions for car-cam videos. The first rows contain ground truth and the second rows contain predictions. The sequence below the red line was temporally scrambled. The model was trained on the KITTI dataset and sequences shown are from the CalTech Pedestrian dataset.

More details, as well as a thorough investigation of systematically simplified models on the continuum between the PredNet and the CNN-LSTM Encoder-Decoder, can be found in Appendix 5.1. Briefly, the elementwise subtraction operation in the PredNet seems to be beneficial, and the nonlinearity of positive/negative splitting also adds modest improvements.
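The recursive multi-frame extrapolation mentioned above (Appendix 5.3) can be sketched as follows; the model interface is a hypothetical one mapping a frame sequence to its sequence of next-frame predictions.

```python
import numpy as np

def extrapolate(predict_sequence, seed_frames, n_future):
    """Recursively predict n_future frames beyond the seed sequence.
    predict_sequence: assumed callable mapping an array (t, H, W, C) to the
    array of next-frame predictions of the same length."""
    frames = [f for f in seed_frames]
    for _ in range(n_future):
        next_frame = predict_sequence(np.stack(frames))[-1]  # newest prediction
        frames.append(next_frame)                            # feed it back in
    return frames[len(seed_frames):]
```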
Finally, while these experiments measure the benefits of each component of our model, we also directly compare against recent work in a similar car-cam setting, by reporting results on a 64x64 pixel, grayscale car-cam dataset released by Brabandere et al. (2016). Our PredNet model outperforms the model by Brabandere et al. (2016) by 29%. Details can be found in Appendix 5.2. Also in Appendix 5.2, we present results for the Human3.6M (Ionescu et al., 2014) dataset, as reported by Finn et al. (2016). Without re-optimizing hyperparameters, our model underperforms the concurrently developed DNA model by Finn et al. (2016), but outperforms the model by Mathieu et al. (2016).

To test the implicit encoding of latent parameters in the car-cam setting, we used the internal representation in the PredNet to estimate the car's steering angle (Bojarski et al., 2016; Biasini et al., 2016). We used a dataset released by Comma.ai (Biasini et al., 2016) consisting of 11 videos totaling about 7 hours of mostly highway driving. We first trained networks for next-frame prediction and then fit a linear fully-connected layer on the learned representation to estimate the steering angle, using an MSE loss. We again concatenate the R_l representation at all layers, but first spatially average pool lower layers to match the spatial size of the upper layer, in order to reduce dimensionality. Steering angle estimation results, using the representation on the 10th time step, are shown in Figure 5. Given just 1K labeled training examples, a simple linear readout on the PredNet L_0 representation explains 74% of the variance in the steering angle and outperforms the CNN-LSTM Enc.-Dec. by 35%. With 25K labeled training examples, the PredNet L_0 has an MSE (in degrees²) of 2.14. As a point of reference, a CNN model designed to predict the steering angle (Biasini et al., 2016), albeit from a single frame instead of multiple frames, achieves an MSE of ~4 when trained end-to-end using 396K labeled training examples. Details of this analysis can be found in Appendix 8. Interestingly, in this task, the PredNet L_all model actually underperformed the L_0 model and slightly underperformed the CNN-LSTM Enc.-Dec., again suggesting that the λ_l parameter can affect the representation learned, and different values may be preferable in different end tasks. Nonetheless, the readout from the L_all model still explained a substantial proportion of the steering angle variance and strongly outperformed the random initial weights. Overall, this analysis again demonstrates that a representation learned through prediction, and particularly with the PredNet model with appropriate hyperparameters, can contain useful information about underlying latent parameters.

Figure 5: Steering angle estimation accuracy on the Comma.ai dataset (Biasini et al., 2016). Left: Example steering angle curve with model estimations for a segment in the test set. Decoding was performed using a fully-connected readout on the PredNet representation trained with 25K labeled training examples. PredNet representation was trained for next-frame prediction on Comma.ai training set. Right: Mean-squared error of steering angle estimation.
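The dimensionality-reduction step described above, average-pooling lower layers to the top layer's spatial size before concatenation, could look like this; the shapes and the even divisibility of the spatial dimensions are assumptions made for the sketch.

```python
import numpy as np

def pooled_readout_features(r_states):
    """Concatenate R_l states from all layers into one feature vector,
    spatially average-pooling each layer down to the top layer's size.
    r_states: list of arrays (H_l, W_l, C_l), finest layer first."""
    th, tw = r_states[-1].shape[:2]              # target spatial size
    feats = []
    for r in r_states:
        fh, fw = r.shape[0] // th, r.shape[1] // tw
        pooled = r.reshape(th, fh, tw, fw, -1).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)
```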
4 DISCUSSION

Above, we have demonstrated a predictive coding inspired architecture that is able to predict future frames in both synthetic and natural image sequences. Importantly, we have shown that learning to predict how an object or scene will move in a future frame confers advantages in decoding latent parameters (such as viewing angle) that give rise to an object's appearance, and can improve recognition performance. More generally, we argue that prediction can serve as a powerful unsupervised learning signal, since accurately predicting future frames requires at least an implicit model of the objects that make up the scene and how they are allowed to move. Developing a deeper understanding of the nature of the representations learned by the networks, and extending the architecture, by, for instance, allowing sampling, are important future directions.

ACKNOWLEDGMENTS

We would like to thank Rasmus Berg Palm for fruitful discussions and early brainstorming. We would also like to thank the developers of Keras (Chollet, 2016). This work was supported by IARPA (contract D16PC00002), the National Science Foundation (NSF IIS 1409097), and the Center for Brains, Minds and Machines (CBMM, NSF STC award CCF-1231216).
rkGabqHNg
an interesting architecture for future prediction inspired by deep predictive coding
6: Marginally above acceptance threshold
An interesting architecture that accumulates and continuously corrects mistakes as you see more and more of a video sequence. Clarity: The video you generated seems very helpful towards understanding the information flow in your network; it would be nice to link to it from the paper. "Our model with hyperparameters optimized for KITTI underperforms the model of Finn et al. (2016), but outperforms the previous state-of-the-art model by Mathieu et al. (2016)." It is not clear how different the train and test sequences are at the moment, since standard benchmarks do not really exist for video prediction and each author picks his/her favorite. Underperforming Finn et al. 2016 on the H3.6m Walking videos is a bit disappointing.
3: The reviewer is fairly confident that the evaluation is correct
S1jE5L5gl
ICLR.cc/2017/conference
2017
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
["Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh"]
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables -- continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
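A minimal numpy sketch of the relaxation the abstract describes, adding Gumbel noise to the log-parameters and applying a tempered softmax; the function name and signature are illustrative, not from the paper's code.

```python
import numpy as np

def sample_concrete(log_alpha, temperature, rng=None):
    """One relaxed one-hot sample from a Concrete distribution with location
    parameters log_alpha and the given temperature; as temperature -> 0,
    samples approach discrete one-hot vectors."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=np.shape(log_alpha))))
    logits = (np.asarray(log_alpha) + gumbel) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```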
["Deep learning", "Unsupervised Learning", "Structured prediction"]
rJM-7wZEl
A clearly relevant paper that should be accepted
9: Top 15% of accepted papers, strong accept
The authors describe the concrete distribution, a continuous approximation to discrete distributions parameterized by a vector of continuous positive numbers proportional to the probability of each discrete result. The concrete distribution is obtained by using the softmax function to approximate the argmax operator. The paper is clearly written, original and significant. The experiments clearly illustrate the advantages of the proposed method. Some minor questions: "for the general n-ary case the Gumbel is a crucial 1 and the Gumbel-Max trick cannot be generalized for other additive noise distributions" What do you mean by this? Can you be more specific? What are the temperature values used to obtain Table 1 and the table in Figure 4?
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S1jE5L5gl
ICLR.cc/2017/conference
2017
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
["Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh"]
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables -- continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
["Deep learning", "Unsupervised Learning", "Structured prediction"]
BycCSky4x
Very important development for implementing stochastic networks with discrete random variables
8: Top 50% of accepted papers, clear accept
Thank you for an interesting read. I think this paper has proposed a very useful method, which significantly simplifies the implementation of gradients for discrete random variables. Using this trick quite a lot of discrete variable-based methods will be significantly easier to implement, e.g. a GAN-style generator for text (see the recent arxiv preprint arXiv:1611.04051). I've got one suggestion to make the paper even better, but maybe the authors want to leave it to future work. I think compared to lots of variance reduction techniques such as NVIL and VIMCO, this relaxation trick has smaller variance (from empirical observation of the reparameterisation trick), but at the price of introducing biases. It would be fantastic if the authors could discuss the bias-variance trade-off, either in a theoretical or experimental way. My bet will be that here the variance dominates the stochastic estimation error of the gradient estimation, but it would be great if the authors can confirm this. **to area chair: concurrent paper by Jang et al. 2016** It seems there's a concurrent submission by Jang et al. I haven't read that paper in detail, but maybe the conference should accept or reject both?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
S1jE5L5gl
ICLR.cc/2017/conference
2017
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
["Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh"]
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables -- continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
["Deep learning", "Unsupervised Learning", "Structured prediction"]
ryKOMc-He
Nice paper, should be accepted
7: Good paper, accept
The authors of the paper present a novel distribution for discrete variables called the "concrete distribution". The distribution can be seen as a continuous relaxation of a distribution over discrete random variables. The main motivation for introduction of the concrete distribution is the possibility to compute the gradient of discrete stochastic nodes in Stochastic Computational Graphs. I think the paper is well written and sound, definitely of interest for the conference program. As to the experimental part, the authors have results which support some kind of consistently superior performance of VIMCO for linear models and of concrete relaxations for non-linear models. Any explanation for that? Is this confirmed over different models and maybe datasets? Similarly, it looks like VIMCO outperforms (in Figure 4) Concrete for large m, on the test NLL. I would encourage to try with other values of m to see if this dependence on large m is confirmed or not. I believe the paper should be accepted to the conference, however please consider that I'm not an expert in this field. Some minor observations/comments/issues: -Section 2.1: there is a repetition "be be" in the first paragraph. -Section 2.4: I would add a reference for the "multi-sample variational objective" -Section 3.1, just before Section 3.2: "the Gumbel is a crucial 1". Why 1 and not "one"? -Section 3.3, last paragraph: "Thus, in addition to relaxing the sampling pass of a SCG the log..." I would add a comma after "SCG". More in general, the second part of the paragraph is very dense and not easy to "absorb". I don't think it's an issue with the presentation: the concepts themselves are just dense. However, maybe the authors could find a way to make the paragraph easier to assimilate for a less experienced reader. -Section 5.1, second paragraph: "All our models are neural networks with layers of n-ary discrete stochastic nodes with log_2(n)-dimensional states on the corners of the hypercube {-1,1}^log_2(n). The distribution of the nodes are parametrized by n real values log alpha_k". It is not clear to me where the log_2(n) comes from. Similarly for the {-1,1}. -Section 5.2: After "this distribution." and "We will" there is an extra space. -If I compare the last formula in Section 5.3 with Eq. 8, I don't see exactly why the former is a special case of the latter. Is it because q(Z^i | x) is always one?
3: The reviewer is fairly confident that the evaluation is correct
Bk0FWVcgx
ICLR.cc/2017/conference
2017
Topology and Geometry of Half-Rectified Network Optimization
["C. Daniel Freeman", "Joan Bruna"]
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of a high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such approximation and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near-convex behavior, but they become exponentially more curvy as the energy level decays, in accordance with what is observed in practice with very low curvature attractors.
["Theory", "Deep learning"]
ABSTRACT

The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of a high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model.

In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay.

The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near-convex behavior, but they become exponentially more curvy as the energy level decays, in accordance with what is observed in practice with very low curvature attractors.

1 INTRODUCTION

Optimization is a critical component in deep learning, governing its success in different areas of computer vision, speech processing and natural language processing. The prevalent optimization strategy is Stochastic Gradient Descent, invented by Robbins and Monro in the 50s. The empirical performance of SGD on these models is better than one could expect in generic, arbitrary non-convex loss surfaces, often aided by modifications yielding significant speedups Duchi et al. (2011); Hinton et al. (2012); Ioffe & Szegedy (2015); Kingma & Ba (2014). This raises a number of theoretical questions as to why neural network optimization does not suffer in practice from poor local minima.

The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a paradigmatic example of a hard, high-dimensional, non-convex problem. Recent work has explored models from statistical physics such as spin glasses Choromanska et al. (2015), in order to understand the macroscopic properties of the system, but at the expense of strongly simplifying the nonlinear nature of the model. Other authors have advocated that the real danger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al. (2014), although recent results rigorously establish that gradient descent does not get stuck on saddle points Lee et al. (2016) but is merely slowed down. Other notable recent contributions are Kawaguchi (2016), which further develops the spin-glass connection from Choromanska et al. (2015) and resolves the linear case by showing that no poor local minima exist; Sagun et al.
(2014), which also discusses the impact of stochastic vs plain gradient; Soudry & Carmon (2016), that studies Empirical Risk Minimization for piecewise multilayer neural networks under overparametrization (which needs to grow with the amount of available data); and Goodfellow et al. (2014), which provided insightful intuitions on the loss surface of large deep learning models and partly motivated our work. Additionally, the work Safran & Shamir (2015) studies some topological properties of homogeneous nonlinear networks and shows how overparametrization acts upon these properties, and the pioneering Shamir (2016) studied the distribution-specific hardness of optimizing non-convex objectives. Lastly, several papers submitted concurrently and independently of this one deserve note, particularly Swirszcz et al. (2016), which analyzes the explicit criteria under which sigmoid-based neural networks become trapped by poor local minima, as well as Tian (2017), which offers a complementary study of two layer ReLU based networks, and their learning dynamics.

In this work, we do not make any linearity assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. The loss surface F(θ) of a given model can be expressed in terms of its level sets Ω_λ, which contain for each energy level λ all parameters θ yielding a loss smaller or equal than λ. A first question we address concerns the topology of these level sets, i.e. under which conditions they are connected. Connected level sets imply that one can always find a descent direction at each energy level, and therefore that no poor local minima can exist. In absence of nonlinearities, deep (linear) networks have connected level sets Kawaguchi (2016). We first generalize this result to include ridge regression (in the two layer case) and provide an alternative, more direct proof of the general case. We then move to the half-rectified case and show that the topology is intrinsically different and clearly dependent on the interplay between data distribution and model architecture. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay.

Beyond the question of whether the loss contains poor local minima or not, the immediate follow-up question that determines the convergence of algorithms in practice is the local conditioning of the loss surface. It is thus related not to the topology but to the shape or geometry of the level sets. As the energy level decays, one expects the level sets to exhibit more complex irregular structures, which correspond to regions where F(θ) has small curvature. In order to verify this intuition, we introduce an efficient algorithm to estimate the geometric regularity of these level sets by approximating geodesics of each level set starting at two random boundary points. Our algorithm uses dynamic programming and can be efficiently deployed to study mid-scale CNN architectures on MNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical results show that these models have a nearly convex behavior up until their lowest test errors, with a single connected component that becomes more elongated as the energy decays. The rest of the paper is structured as follows. Section 2 presents our theoretical results on the topological connectedness of multilayer networks.
Section 3 presents our path discovery algorithm and Section 4 covers the numerical experiments.

2 TOPOLOGY OF LEVEL SETS

Let P be a probability measure on a product space X × Y, where we assume X and Y are Euclidean vector spaces for simplicity. Let {(x_i, y_i)}_i be an iid sample of size L drawn from P defining the training set. We consider the classic empirical risk minimization of the form

F_e(θ) = (1/L) Σ_{l=1}^{L} ‖Φ(x_l; θ) − y_l‖² + κ R(θ),   (1)

where Φ(x; θ) encapsulates the feature representation that uses parameters θ ∈ R^S and R(θ) is a regularization term. In a deep neural network, θ contains the weights and biases used in all layers. For convenience, in our analysis we will also use the oracle risk minimization:

F_o(θ) = E_{(X,Y)∼P} ‖Φ(X; θ) − Y‖² + κ R(θ).   (2)

Our setup considers the case where R consists of either ℓ1 or ℓ2 norms, as we shall describe below. They correspond to well-known sparse and ridge regularization respectively.

2.1 POOR LOCAL MINIMA CHARACTERIZATION FROM TOPOLOGICAL CONNECTEDNESS

We define the level set of F(θ) as

Ω_F(λ) = {θ ∈ R^S ; F(θ) ≤ λ}.   (3)

The first question we study is the structure of critical points of F_e(θ) and F_o(θ) when Φ is a multilayer neural network. For simplicity, we consider first a strict notion of local minima: θ ∈ R^S is a strict local minimum of F if there is ε > 0 with F(θ') > F(θ) for all θ' ∈ B(θ, ε) and θ' ≠ θ. In particular, we are interested to know whether F_e has local minima which are not global minima. This question is answered by knowing whether Ω_F(λ) is connected at each energy level λ:

Proposition 2.1. If Ω_F(λ) is connected for all λ then every local minimum of F(θ) is a global minimum.

Strict local minima implies that ∇F(θ) = 0 and HF(θ) ⪰ 0, but avoids degenerate cases where F is constant along a manifold intersecting θ. In that scenario, if U denotes that manifold, our reasoning immediately implies that if Ω_F(λ) are connected, then for all ε > 0 there exists θ' with dist(θ', U) ≤ ε and F(θ') < F(θ). In other words, some element at the boundary of U must be a saddle point. A stronger property that eliminates the risk of gradient descent getting stuck at U is that all elements at the boundary of U are saddle points. This can be guaranteed if one can show that there exists a path connecting any θ to the lowest energy level such that F is strictly decreasing along it.

Such degenerate cases arise in deep linear networks in absence of regularization. If θ = (W_1, ..., W_K) denotes any parameter value, with N_1, ..., N_K denoting the hidden layer sizes, and F_k ∈ GL⁺_{N_k}(R) are arbitrary elements of the general linear group of invertible N_k × N_k matrices with positive determinant, then

U = { (W_1 F_1⁻¹, F_1 W_2 F_2⁻¹, ..., F_{K−1} W_K) ; F_k ∈ GL⁺_{N_k}(R) }.

In particular, U has a Lie group structure. In the half-rectified nonlinear case, the general linear group is replaced by the Lie group of homogeneous invertible matrices F_k = diag(λ_1, ..., λ_{N_k}) with λ_j > 0.

This proposition shows that a sufficient condition to prevent the existence of poor local minima is having connected level sets, but this condition is not necessary: one can have isolated local minima lying at the same energy level. This can be the case in systems that are defined up to a discrete symmetry group, such as multilayer neural networks. However, as we shall see next, this case puts the system in a brittle position, since one needs to be able to account for all the local minima (and there can be exponentially many of them as the parameter dimensionality increases) and verify that their energy is indeed equal.

2.2 THE LINEAR CASE

We first consider the particularly simple case where F is a multilayer network defined by

Φ(x; θ) = W_K ... W_1 x,  θ = (W_1, ..., W_K),   (4)

and the ridge regression R(θ) = ‖θ‖².
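For concreteness, the empirical risk of Eq. (1), for any Φ including the linear model of Eq. (4) just defined, could be evaluated as below. This is a sketch under the stated definitions; the parameter flattening is a placeholder choice.

```python
import numpy as np

def empirical_risk(phi, theta, xs, ys, kappa, reg="l2"):
    """F_e(theta) of Eq. (1): mean squared prediction error over the sample
    plus kappa times an l1 or l2 regularizer on the flattened parameters.
    theta is taken to be a list of weight matrices."""
    mse = np.mean([np.sum((phi(x, theta) - y) ** 2) for x, y in zip(xs, ys)])
    flat = np.concatenate([w.ravel() for w in theta])
    penalty = np.sum(flat ** 2) if reg == "l2" else np.sum(np.abs(flat))
    return mse + kappa * penalty
```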
This model defines a non-convex (and non-concave) loss F_e(θ). When κ = 0, it has been shown in Saxe et al. (2013) and Kawaguchi (2016) that in this case, every local minimum is a global minimum. We provide here an alternative proof of that result that uses a somewhat simpler argument and allows for κ > 0 in the case K = 2.

Proposition 2.2. Let W_1, W_2, ..., W_K be weight matrices of sizes n_k × n_{k+1}, k < K, and let F_e(θ), F_o(θ) denote the risk minimizations using Φ as in (4). Assume that n_j ≥ min(n_1, n_K) for j = 2 ... K−1. Then Ω_{F_e}(λ) (and Ω_{F_o}) is connected for all λ and all K when κ = 0, and for κ > 0 when K = 2; and therefore there are no poor local minima in these cases. Moreover, any θ can be connected to the lowest energy level with a strictly decreasing path.

Let us highlight that this result is slightly complementary to that of Kawaguchi (2016), Theorem 2.3. Whereas we require n_j ≥ min(n_1, n_K) for j = 2 ... K−1 and our analysis does not inform about the order of the saddle points, we do not need full rank assumptions on Σ_X nor the weights W_k.

This result does also highlight a certain mismatch between the picture of having no poor local minima and generalization error. Incorporating regularization drastically changes the topology, and the fact that we are able to show connectedness only in the two-layer case with ridge regression is profound; we conjecture that extending it to deeper models requires a different regularization, perhaps using more general atomic norms Bach (2013). But we now move our interest to the nonlinear case, which is more relevant to our purposes.

2.3 HALF-RECTIFIED NONLINEAR CASE

We now study the setting given by

Φ(x; θ) = W_K ρ W_{K−1} ρ ... ρ W_1 x,  θ = (W_1, ..., W_K),   (5)

where ρ(z) = max(0, z). The biases can be implemented by replacing the input vector x with x̄ = (x; 1) and by rebranding each parameter matrix as W̄_i = [W_i, b_i; 0, 1], where b_i contains the biases for each layer. For simplicity, we continue to use W_i and x in the following.
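The half-rectified map of Eq. (5) is straightforward to write down; a minimal sketch, with biases folded into the weights as described in the text:

```python
import numpy as np

def phi(x, weights):
    """Half-rectified multilayer map of Eq. (5): a ReLU after every layer
    except the last one. weights is the list (W_1, ..., W_K)."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(0.0, W @ h)
    return weights[-1] @ h
```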
Since is invariantunder a subgroup of permutations of its hidden layers, it is easy to see that one can find two pa-rameter values A=andB=such thatFo(A) =Fo(B) = 0 , but any continuous path (t)fromAtoBwill have a different tessellation and therefore won’t satisfy Fo((t)) = 0 . Moreover,one can build on this counter-example to show that not only the level sets are disconnected, but alsothat there exist poor local minima. Let 0be a different set of parameters, and Y0jXd= (X;0)be a different target distribution. Now consider the data distribution given by the mixtureXjp(x); zBernoulli (); YjX;zd=z(X;) + (1z)(X;0):By adjusting the mixture component we can clearly change the risk at and0and make themdifferent, but we conjecture that this preserves the status of local minima of and0. Appendix Econstructs a counter-example numerically.This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guaranteesthat do not depend upon the data distribution. This difficulty is non-existent in the linear case andnot easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows thatin general we should not expect to obtain connected level sets. However, connectedness can berecovered if one is willing to accept a small increase of energy and make some assumptions on thecomplexity of the regression task. Our main result shows that the amount by which the energy isallowed to increase is upper bounded by a quantity that trades-off model overparametrization andsmoothness in the data distribution.4Published as a conference paper at ICLR 2017For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assumeY2Rand let us first consider the case with a single hidden layer and `1regularization:R() =kk1.2.3.2 P RELIMINARIESBefore proving our main result, we need to introduce preliminary notation and results. We firstdescribe the case with a single hidden layer of size m.We definee(m) = minW12Rmn;kW1(i)k21;W22RmEfj(X;)Yj2g+kW2k1: (6)to be the oracle risk using mhidden units with norm 1and using sparse regression. It is a wellknown result by Hornik and Cybenko that a single hidden layer is a universal approximator undervery mild assumptions, i.e. limm!1e(m) = 0 . This result merely states that our statistical setup isconsistent, and it should not be surprising to the reader familiar with classic approximation theory.A more interesting question is the rate at which e(m)decays, which depends on the smoothness ofthe joint density (X;Y )Prelative to the nonlinear activation family we have chosen.For convenience, we redefine W=W1and=W2andZ(W) = max(0;WX ). We also writez(w) = max(0;hw;Xi)where (X;Y )Pandw2RNis any deterministic vector. Let X=EPXXT2RNNbe the covariance operator of the random input X. We assumekXk<1.A fundamental property that will be essential to our analysis is that, despite the fact that Zisnonlinear, the quantity [w1;w2]Z:=EPfz(w1)z(w2)gis locally equivalent to the linear metrichw1;w2iX=EPfwT1XXTw2g=hw1;Xw2i, and that the linearization error decreases with theangle between w1andw2. Without loss of generality, we assume here that kw1k=kw2k= 1, andwe writekwk2Z=Efjz(w)j2g.Proposition 2.3. Let= cos1(hw1;w2i)be the angle between unitary vectors w1andw2and letwm=w1+w2kw1+w2kbe their unitary bisector. Then1 + cos2kwmk2Z2kXk1cos2+ sin2[w1;w2]Z1 + cos2kwmk2Z: (7)The termkXkis overly pessimistic: we can replace it by the energy of Xprojected into thesubspace spanned by w1andw2(which is bounded by 2kXk). 
When α is small, a Taylor expansion of the trigonometric terms reveals that

(2/3)‖Σ_X‖⟨w_1, w_2⟩ = (2/3)‖Σ_X‖ cos α = (2/3)‖Σ_X‖(1 − α²/2 + O(α⁴))
≤ (1 − α²/4)‖w_m‖²_Z − ‖Σ_X‖(α²/4 + α²) + O(α⁴)
≤ [w_1, w_2]_Z + O(α⁴),

and similarly

[w_1, w_2]_Z ≤ ⟨w_1, w_2⟩ ‖w_m‖²_Z ≤ ‖Σ_X‖⟨w_1, w_2⟩.

The local behavior of parameters w_1, w_2 on our regression problem is thus equivalent to that of having a linear layer, provided w_1 and w_2 are sufficiently close to each other. This result can be seen as a spoiler of what is coming: increasing the hidden layer dimensionality m will increase the chances to encounter pairs of vectors w_1, w_2 with small angle; and with it some hope of approximating the previous linear behavior thanks to the small linearization error.

In order to control the connectedness, we need a last definition. Given a hidden layer of size m with current parameters W ∈ R^{m×n}, we define a "robust compressibility" factor as

δ_W(l, α, m) = min_{‖γ‖_0 ≤ l, sup_i |∠(w̃_i, w_i)| ≤ α} E{|Y − γ Z(W̃)|² + κ‖γ‖_1},  (l ≤ m).   (8)

This quantity thus measures how easily one can compress the current hidden layer representation, by keeping only a subset of l of its units, but allowing these units to move by a small amount controlled by α. It is a form of n-width similar to Kolmogorov width Donoho (2006) and is also related to robust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).

2.3.3 MAIN RESULT

Our main result considers now a non-asymptotic scenario given by some fixed size m of the hidden layer. Given two parameter values θ_A = (W_1^A, W_2^A) ∈ W and θ_B = (W_1^B, W_2^B) with F_o({θ_A, θ_B}) ≤ λ, we show that there exists a continuous path γ: [0, 1] → W connecting θ_A and θ_B such that its oracle risk is uniformly bounded by max(λ, ε), where ε decreases with model overparametrization.

Theorem 2.4. For any θ_A, θ_B ∈ W and λ ∈ R satisfying F_o({θ_A, θ_B}) ≤ λ, there exists a continuous path γ: [0, 1] → W such that γ(0) = θ_A, γ(1) = θ_B and

F_o(γ(t)) ≤ max(λ, ε), with   (9)
ε = inf_{l,α} max{ e(l), δ_{W_1^A}(m, 0, m), δ_{W_1^A}(m − l, α, m),   (10)
δ_{W_1^B}(m, 0, m), δ_{W_1^B}(m − l, α, m) } + C_1 α + O(α²),   (11)

where C_1 is an absolute constant depending only on κ and P.

Some remarks are in order. First, our regularization term is currently a mix between ℓ2 norm constraints on the first layer and ℓ1 norm constraints on the second layer. We believe this is an artifact of our proof technique, and we conjecture that more general regularizations yield similar results. Next, this result uses the data distribution through the oracle bound e(m) and the covariance term. The extension to empirical risk is accomplished by replacing the probability measure P by the empirical measure P̂ = (1/L) Σ_l δ((x, y) − (x_l, y_l)). However, our asymptotic analysis has to be carefully re-examined to take into account and avoid the trivial regime when M outgrows L. A consequence of Theorem 2.4 is that as m increases, the model becomes asymptotically connected, as proven in the following corollary.

Corollary 2.5. As m increases, the energy gap ε satisfies ε = O(m^{−1/n}) and therefore the level sets become connected at all energy levels.

This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016) and the general common knowledge amongst deep learning practitioners. Our next sections explore this question, and refine it by considering not only topological properties but also some rough geometrical measure of the level sets.

3 GEOMETRY OF LEVEL SETS

3.1 THE GREEDY ALGORITHM

The intuition behind our main result is that, for smooth enough loss functions and for sufficient overparameterization, it should be "easy" to connect two equally powerful models—i.e., two models with F_o(θ_A), F_o(θ_B) ≤ λ.
A sensible measure of this ease-of-connectedness is the normalized length of the geodesic connecting one model to the other: |γ_{A,B}(t)| / |θ_A − θ_B|. This length represents approximately how far of an excursion one must make in the space of models relative to the euclidean distance between a pair of models. Thus, convex models have a geodesic length of 1, because the geodesic is simply linear interpolation between models, while more non-convex models have geodesic lengths strictly larger than 1.

Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling. We comment on alternative algorithms in Appendix A.

For a pair of models with network parameters θ_i, θ_j, each with F_e(θ) below a threshold L_0, we aim to efficiently generate paths in the space of weights where the empirical loss along the path remains below L_0. These paths are continuous curves belonging to Ω_F(λ)—that is, the level sets of the loss function of interest.

Algorithm 1 Greedy Dynamic String Sampling
1: L_0 ← Threshold below which path will be found
2: Φ_1 ← randomly initialize θ_1, train Φ(x_i; θ_1) to L_0
3: Φ_2 ← randomly initialize θ_2, train Φ(x_i; θ_2) to L_0
4: BeadList ← (Φ_1, Φ_2)
5: Depth ← 0
6: procedure FindConnection(Φ_1, Φ_2)
7:     t* ← t such that dF_e(γ(θ_1, θ_2, t))/dt = 0 OR t = 0.5
8:     Φ_3 ← train Φ(x_i; t* θ_1 + (1 − t*) θ_2) to L_0
9:     BeadList ← insert(Φ_3, after Φ_1, BeadList)
10:    MaxError_1 ← max_t(F_e(t Φ_3 + (1 − t) Φ_1))
11:    MaxError_2 ← max_t(F_e(t Φ_2 + (1 − t) Φ_3))
12:    if MaxError_1 > L_0 then return FindConnection(Φ_1, Φ_3)
13:    if MaxError_2 > L_0 then return FindConnection(Φ_3, Φ_2)
14:    Depth ← Depth + 1

The algorithm recursively builds a string of models in the space of weights which continuously connect θ_i to θ_j. Models are added and trained until the pairwise linearly interpolated loss, i.e. max_t F_e(t θ_i + (1 − t) θ_j) for t ∈ (0, 1), is below the threshold, L_0, for every pair of neighboring models on the string. We provide a cartoon of the algorithm in Appendix C.

3.2 FAILURE CONDITIONS AND PRACTICALITIES

While the algorithm presented will faithfully certify two models are connected if the algorithm converges, it is worth emphasizing that the algorithm does not guarantee that two models are disconnected if the algorithm fails to converge. In general, the problem of determining if two models are connected can be made arbitrarily difficult by choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm. Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore. We comment on diagnosing disconnections more carefully in Appendix E.

Further, if the MaxError exceeds L_0 for every new recursive branch as the algorithm progresses, the worst case runtime scales as O(exp(Depth)). Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth))—at least up until a critical value of L_0.

To aid convergence, either of the choices in line 7 of the algorithm works in practice—choosing t* at a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy. t* = 0.5 is more stable, but slower. Finally, we find that training Φ_3 to αL_0 for α < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics.
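A compact Python rendering of Algorithm 1's recursion, in its t* = 0.5 variant, is sketched below. Here train_to is an assumed routine that fine-tunes a parameter vector until its loss falls below the threshold, and interpolated losses are checked on a finite grid; the sketch also assumes train_to always succeeds, whereas the text above discusses stopping heuristics for when it does not.

```python
import numpy as np

def max_interp_loss(a, b, loss_fn, n_grid=11):
    # Largest loss on a grid along the straight line between parameter vectors.
    return max(loss_fn(t * a + (1.0 - t) * b)
               for t in np.linspace(0.0, 1.0, n_grid))

def find_connection(a, b, loss_fn, train_to, L0, beads):
    """Recursively insert midpoint 'beads' until every linear segment of the
    string stays below L0 (the t* = 0.5 variant of Algorithm 1)."""
    c = train_to(0.5 * (a + b), L0)          # new bead, trained down to L0
    if max_interp_loss(a, c, loss_fn) > L0:
        find_connection(a, c, loss_fn, train_to, L0, beads)
    beads.append(c)                          # in-order: beads stay sorted
    if max_interp_loss(c, b, loss_fn) > L0:
        find_connection(c, b, loss_fn, train_to, L0, beads)
```

Starting from beads = [theta_1], one call followed by beads.append(theta_2) recovers the full ordered string.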
We provide further implementation details in Section 4.

4 NUMERICAL EXPERIMENTS

For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of "beads", or the number of intermediate models needed by the algorithm to connect two initial models. For all of the below experiments, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.

The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required "beads" to form a low-loss connection.

4.1 POLYNOMIAL REGRESSION

We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease-of-analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power-law, as demonstrated in Table 1, Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.

Figure 1: (Column a) Average normalized geodesic length and (Column b) average number of beads versus loss. (1) A quadratic regression task. (2) A cubic regression task. (3) A convnet for MNIST. (4) A convnet inspired by Krizhevsky for CIFAR10. (5) A RNN inspired by Zaremba for PTB next word prediction.
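The normalized geodesic lengths tabulated in these experiments follow directly from the bead strings; a minimal sketch:

```python
import numpy as np

def normalized_length(beads):
    """Piecewise-linear path length through the beads divided by the straight
    Euclidean distance between the two endpoint models (1 for a convex-like
    connection, larger for more non-convex ones)."""
    path = sum(np.linalg.norm(beads[i + 1] - beads[i])
               for i in range(len(beads) - 1))
    return path / np.linalg.norm(beads[-1] - beads[0])
```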
The cubic regression task exhibits an interesting feature around L_0 = 0.15 in Table 1, Fig. 2, where the normalized length spikes, but the number of required beads remains low. Up until this point, the cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.

4.2 CONVOLUTIONAL NEURAL NETWORKS

To test the algorithm on larger architectures, we ran it on the MNIST hand written digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibits strong qualitative similarity with the previous models: normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk-understanding that MNIST is highly convex and/or "easy". The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.

4.3 RECURRENT NEURAL NETWORKS

To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table 1, Fig. 5. Notably, even for a radically different architecture, loss function, and data set, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets—i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.

5 DISCUSSION

We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles – topological and geometrical aspects – that build on top of each other.

On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how 'non-convex' an optimization problem is, and verify that the optimization of quintessential deep learning tasks – CIFAR-10 and MNIST classification using CNNs, and next word prediction using LSTMs – behaves in a nearly convex fashion up until they reach high accuracy levels.

That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:

- Extending Theorem 2.4 to the multilayer case. We believe this is within reach, since the main analytic tool we use is that small changes in the parameters result in small changes in the covariance structure of the features. That remains the case in the multilayer case.

- Empirical versus Oracle Risk.
A big limitation of our theory is that right now it does not inform us on the differences between optimizing the empirical risk versus the oracle risk. Understanding the impact of generalization error and stochastic gradient in the ability to do small uphill climbs is an open line of research.

- Influence of symmetry groups. Under appropriate conditions, the presence of discrete symmetry groups does not prevent the loss from being connected, but at the expense of increasing the capacity. An important open question is whether one can improve the asymptotic properties by relaxing connectedness to being connected up to discrete symmetry.

- Improving numerics with Hyperplane method. Our current numerical experiments employ a greedy (albeit faster) algorithm to discover connected components and estimate geodesics. We plan to perform experiments using the less greedy algorithm described in Appendix A.

ACKNOWLEDGMENTS

We would like to thank Mark Tygert for pointing out the reference to the ε-nets and Kolmogorov capacity, and Martin Arjovsky for spotting several bugs in an early version of the results. We would also like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well as Yasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported by the NSF Graduate Research Fellowship under Grant DGE-1106400.
HkIfgyz4g
Interesting analysis
7: Good paper, accept
This paper studies the energy landscape of the loss function in neural networks. It is generally clearly written and nicely provides intuitions for the results. One main contribution is to show that the level sets of the loss become connected as the network is increasingly overparameterized. It also quantifies, in a way, the degree of disconnectedness possible in terms of the increase in loss that one must allow to find a connected path. It would seem that this might have some implications for the likelihood of escaping local minima with stochastic gradient descent. The paper also presents a simple algorithm for finding geodesic paths between two networks such that the loss is decreasing along the path. Using this they show that the loss surface seems to become more nonconvex as the loss decreases. This is also quite interesting. The work does have some significant limitations, which is not surprising given the difficulty of fully analyzing the network loss function. However, the authors are quite clear about these limitations, which especially include not yet analyzing deep networks and analyzing only the oracle loss, and not the empirical loss. I would have also appreciated a little more practical discussion of the bound in Theorem 2.4. It is hard to tell whether this bound is tight enough to be practically relevant.
3: The reviewer is fairly confident that the evaluation is correct
Bk0FWVcgx
ICLR.cc/2017/conference
2017
Topology and Geometry of Half-Rectified Network Optimization
["C. Daniel Freeman", "Joan Bruna"]
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such approximation and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance to what is observed in practice with very low curvature attractors.
["Theory", "Deep learning"]
ABSTRACT
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of a high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance with what is observed in practice with very low curvature attractors.

1 INTRODUCTION
Optimization is a critical component in deep learning, governing its success in different areas of computer vision, speech processing and natural language processing. The prevalent optimization strategy is Stochastic Gradient Descent, invented by Robbins and Munro in the 50s. The empirical performance of SGD on these models is better than one could expect in generic, arbitrary non-convex loss surfaces, often aided by modifications yielding significant speedups Duchi et al. (2011); Hinton et al. (2012); Ioffe & Szegedy (2015); Kingma & Ba (2014). This raises a number of theoretical questions as to why neural network optimization does not suffer in practice from poor local minima.

The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a paradigmatic example of a hard, high-dimensional, non-convex problem. Recent work has explored models from statistical physics such as spin glasses Choromanska et al. (2015), in order to understand the macroscopic properties of the system, but at the expense of strongly simplifying the nonlinear nature of the model. Other authors have advocated that the real danger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al. (2014), although recent results rigorously establish that gradient descent does not get stuck on saddle points Lee et al. (2016) but is merely slowed down. Other notable recent contributions are Kawaguchi (2016), which further develops the spin-glass connection from Choromanska et al. (2015) and resolves the linear case by showing that no poor local minima exist; Sagun et al.
(2014), which also discusses the impact of stochastic vs. plain gradient; Soudry & Carmon (2016), which studies Empirical Risk Minimization for piecewise multilayer neural networks under overparametrization (which needs to grow with the amount of available data); and Goodfellow et al. (2014), which provided insightful intuitions on the loss surface of large deep learning models and partly motivated our work. Additionally, the work Safran & Shamir (2015) studies some topological properties of homogeneous nonlinear networks and shows how overparametrization acts upon these properties, and the pioneering Shamir (2016) studied the distribution-specific hardness of optimizing non-convex objectives. Lastly, several papers submitted concurrently and independently of this one deserve note, particularly Swirszcz et al. (2016), which analyzes the explicit criteria under which sigmoid-based neural networks become trapped by poor local minima, as well as Tian (2017), which offers a complementary study of two-layer ReLU based networks and their learning dynamics.

In this work, we do not make any linearity assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. The loss surface F(θ) of a given model can be expressed in terms of its level sets Ω_F(λ), which contain for each energy level λ all parameters θ yielding a loss smaller than or equal to λ. A first question we address concerns the topology of these level sets, i.e. under which conditions they are connected. Connected level sets imply that one can always find a descent direction at each energy level, and therefore that no poor local minima can exist. In the absence of nonlinearities, deep (linear) networks have connected level sets Kawaguchi (2016). We first generalize this result to include ridge regression (in the two-layer case) and provide an alternative, more direct proof of the general case. We then move to the half-rectified case and show that the topology is intrinsically different and clearly dependent on the interplay between data distribution and model architecture. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay.

Beyond the question of whether the loss contains poor local minima or not, the immediate follow-up question that determines the convergence of algorithms in practice is the local conditioning of the loss surface. It is thus related not to the topology but to the shape or geometry of the level sets. As the energy level decays, one expects the level sets to exhibit more complex irregular structures, which correspond to regions where F(θ) has small curvature. In order to verify this intuition, we introduce an efficient algorithm to estimate the geometric regularity of these level sets by approximating geodesics of each level set starting at two random boundary points. Our algorithm uses dynamic programming and can be efficiently deployed to study mid-scale CNN architectures on MNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical results show that these models have a nearly convex behavior up until their lowest test errors, with a single connected component that becomes more elongated as the energy decays. The rest of the paper is structured as follows. Section 2 presents our theoretical results on the topological connectedness of multilayer networks.
Section 3 presents our path discovery algorithm and Section 4 covers the numerical experiments.

2 TOPOLOGY OF LEVEL SETS
Let P be a probability measure on a product space X × Y, where we assume X and Y are Euclidean vector spaces for simplicity. Let {(x_i, y_i)}_i be an iid sample of size L drawn from P defining the training set. We consider the classic empirical risk minimization of the form

  F_e(θ) = (1/L) Σ_{i=1..L} ‖Φ(x_i; θ) − y_i‖² + κR(θ),   (1)

where Φ(x; θ) encapsulates the feature representation that uses parameters θ ∈ R^S and R(θ) is a regularization term. In a deep neural network, θ contains the weights and biases used in all layers. For convenience, in our analysis we will also use the oracle risk minimization:

  F_o(θ) = E_{(X,Y)∼P} ‖Φ(X; θ) − Y‖² + κR(θ).   (2)

Our setup considers the case where R consists of either ℓ1 or ℓ2 norms, as we shall describe below. They correspond to well-known sparse and ridge regularization respectively.

2.1 POOR LOCAL MINIMA CHARACTERIZATION FROM TOPOLOGICAL CONNECTEDNESS
We define the level set of F(θ) as

  Ω_F(λ) = { θ ∈ R^S ; F(θ) ≤ λ }.   (3)

The first question we study is the structure of critical points of F_e(θ) and F_o(θ) when Φ is a multilayer neural network. For simplicity, we consider first a strict notion of local minima: θ ∈ R^S is a strict local minimum of F if there is ε > 0 with F(θ′) > F(θ) for all θ′ ∈ B(θ, ε) with θ′ ≠ θ. In particular, we are interested to know whether F_e has local minima which are not global minima. This question is answered by knowing whether Ω_F(λ) is connected at each energy level λ:

Proposition 2.1. If Ω_F(λ) is connected for all λ, then every local minimum of F(θ) is a global minimum.

Being a strict local minimum implies that ∇F(θ) = 0 and HF(θ) ⪰ 0, but avoids degenerate cases where F is constant along a manifold intersecting θ. In that scenario, if U_θ denotes that manifold, our reasoning immediately implies that if the Ω_F(λ) are connected, then for all ε > 0 there exists θ′ with dist(θ′, U_θ) ≤ ε and F(θ′) < F(θ). In other words, some element at the boundary of U_θ must be a saddle point. A stronger property that eliminates the risk of gradient descent getting stuck at U_θ is that all elements at the boundary of U_θ are saddle points. This can be guaranteed if one can show that there exists a path connecting any θ to the lowest energy level such that F is strictly decreasing along it.

Such degenerate cases arise in deep linear networks in the absence of regularization. If θ = (W_1, …, W_K) denotes any parameter value, with N_1, …, N_K denoting the hidden layer sizes, and F_k ∈ GL⁺_{N_k}(R) are arbitrary elements of the general linear group of invertible N_k × N_k matrices with positive determinant, then

  U_θ = { (W_1 F_1⁻¹, F_1 W_2 F_2⁻¹, …, F_{K−1} W_K) ; F_k ∈ GL⁺_{N_k}(R) }.

In particular, U_θ has a Lie group structure. In the half-rectified nonlinear case, the general linear group is replaced by the Lie group of homogeneous invertible matrices F_k = diag(λ_1, …, λ_{N_k}) with λ_j > 0.

This proposition shows that a sufficient condition to prevent the existence of poor local minima is having connected level sets, but this condition is not necessary: one can have isolated local minima lying at the same energy level. This can be the case in systems that are defined up to a discrete symmetry group, such as multilayer neural networks. However, as we shall see next, this case puts the system in a brittle position, since one needs to be able to account for all the local minima (and there can be exponentially many of them as the parameter dimensionality increases) and verify that their energy is indeed equal.

2.2 THE LINEAR CASE
We first consider the particularly simple case where F is a multilayer network defined by

  Φ(x; θ) = W_K ⋯ W_1 x,  θ = (W_1, …, W_K),   (4)

and the ridge regression R(θ) = ‖θ‖².
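As a concrete illustration of this setup (ours, not the authors' code), the following NumPy sketch evaluates the empirical risk (1) for the deep linear model (4) with ridge term κ‖θ‖²; the function name, shapes, and toy data are assumptions made for the example.

import numpy as np

def deep_linear_risk(weights, X, Y, kappa=0.0):
    # Empirical risk F_e(theta) of Eq. (1) for Phi(x; theta) = W_K ... W_1 x
    # of Eq. (4), with ridge regularization kappa * ||theta||^2.
    # weights: [W_1, ..., W_K], W_k of shape (n_{k+1}, n_k)
    # X: (L, n_1) inputs; Y: (L, n_{K+1}) targets
    H = X.T
    for W in weights:            # propagate through the linear layers
        H = W @ H
    data_term = np.mean(np.sum((H.T - Y) ** 2, axis=1))
    ridge = kappa * sum(np.sum(W ** 2) for W in weights)
    return data_term + ridge

# toy usage: a 3-2-3 linear network on random data
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 2))
X, Y = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
print(deep_linear_risk([W1, W2], X, Y, kappa=0.1))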
This model defines a non-convex (and non-concave) loss F_e(θ). When κ = 0, it has been shown in Saxe et al. (2013) and Kawaguchi (2016) that in this case, every local minimum is a global minimum. We provide here an alternative proof of that result that uses a somewhat simpler argument and allows for κ > 0 in the case K = 2.

Proposition 2.2. Let W_1, W_2, …, W_K be weight matrices of sizes n_k × n_{k+1}, k < K, and let F_e(θ), F_o(θ) denote the risk minimizations using Φ as in (4). Assume that n_j ≥ min(n_1, n_K) for j = 2 … K−1. Then Ω_{F_e}(λ) (and Ω_{F_o}) is connected for all λ and all K when κ = 0, and for κ > 0 when K = 2; and therefore there are no poor local minima in these cases. Moreover, any θ can be connected to the lowest energy level with a strictly decreasing path.

Let us highlight that this result is slightly complementary to that of Kawaguchi (2016), Theorem 2.3. Whereas we require n_j ≥ min(n_1, n_K) for j = 2 … K−1 and our analysis does not inform about the order of the saddle points, we do not need full-rank assumptions on Σ_X nor on the weights W_k.

This result also highlights a certain mismatch between the picture of having no poor local minima and generalization error. Incorporating regularization drastically changes the topology, and the fact that we are able to show connectedness only in the two-layer case with ridge regression is profound; we conjecture that extending it to deeper models requires a different regularization, perhaps using more general atomic norms Bach (2013). But we now move our interest to the nonlinear case, which is more relevant to our purposes.

2.3 HALF-RECTIFIED NONLINEAR CASE
We now study the setting given by

  Φ(x; θ) = W_K ρ W_{K−1} ρ ⋯ ρ W_1 x,  θ = (W_1, …, W_K),   (5)

where ρ(z) = max(0, z). The biases can be implemented by replacing the input vector x with x̄ = (x, 1) and by rebranding each parameter matrix as

  W̄_i = [ W_i b_i ; 0 1 ],

where b_i contains the biases for each layer. For simplicity, we continue to use W_i and x in the following.

2.3.1 NONLINEAR MODELS ARE GENERALLY DISCONNECTED
One may wonder whether the same phenomenon of global connectedness also holds in the half-rectified case. A simple motivating counterexample shows that this is not the case in general. Consider a simple setup with X ∈ R² drawn from a mixture of two Gaussians N_{−1} and N_1, and let Y = (X − μ_Z)·Z, where Z is the (hidden) mixture component taking {1, −1} values. Let Ŷ = Φ(X; {W_1, W_2}) be a single-hidden-layer ReLU network with two hidden units. Let θ_A be a configuration that bisects the two mixture components, and let θ_B be the same configuration, but swapping the bisectrices. One can verify that they can both achieve arbitrarily small risk by letting the covariance of the mixture components go to 0. However, any path that connects θ_A to θ_B must necessarily pass through a point in which W_1 has rank 1, which leads to an estimator with risk at least 1/2.

In fact, it is easy to see that this counterexample can be extended to any generic half-rectified architecture, if one is allowed to adversarially design a data distribution. For any given Φ(X; θ) with arbitrary architecture and current parameters θ = (W_i), let P_θ = {A_1, …, A_S} be the underlying tessellation of the input space given by our current choice of parameters; that is, Φ(X; θ) is piecewise linear and P_θ contains those pieces. Now let X be any arbitrary distribution with density p(x) > 0 for all x ∈ R^n, for example a Gaussian, and let Y | X =d Φ(X; θ).
Since Φ is invariant under a subgroup of permutations θ_σ of its hidden layers, it is easy to see that one can find two parameter values θ_A = θ and θ_B = θ_σ such that F_o(θ_A) = F_o(θ_B) = 0, but any continuous path γ(t) from θ_A to θ_B will have a different tessellation and therefore won't satisfy F_o(γ(t)) = 0. Moreover, one can build on this counterexample to show that not only are the level sets disconnected, but also that there exist poor local minima. Let θ′ be a different set of parameters, and Y′ | X =d Φ(X; θ′) be a different target distribution. Now consider the data distribution given by the mixture

  X ∼ p(x),  z ∼ Bernoulli(π),  Y | X, z =d z Φ(X; θ) + (1 − z) Φ(X; θ′).

By adjusting the mixture component π we can clearly change the risk at θ and θ′ and make them different, but we conjecture that this preserves the status of local minima of θ and θ′. Appendix E constructs a counterexample numerically.

This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guarantees that do not depend upon the data distribution. This difficulty is non-existent in the linear case and not easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows that in general we should not expect to obtain connected level sets. However, connectedness can be recovered if one is willing to accept a small increase of energy and make some assumptions on the complexity of the regression task. Our main result shows that the amount by which the energy is allowed to increase is upper bounded by a quantity that trades off model overparametrization and smoothness in the data distribution.

For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assume Y ∈ R and let us first consider the case with a single hidden layer and ℓ1 regularization: R(θ) = ‖θ‖₁.

2.3.2 PRELIMINARIES
Before proving our main result, we need to introduce preliminary notation and results. We first describe the case with a single hidden layer of size m. We define

  e(m) = min_{W_1 ∈ R^{m×n}, ‖W_1(i)‖₂ ≤ 1, W_2 ∈ R^m} E{ |Φ(X; θ) − Y|² } + κ‖W_2‖₁   (6)

to be the oracle risk using m hidden units with norm ≤ 1 and using sparse regression. It is a well-known result by Hornik and Cybenko that a single hidden layer is a universal approximator under very mild assumptions, i.e. lim_{m→∞} e(m) = 0. This result merely states that our statistical setup is consistent, and it should not be surprising to the reader familiar with classic approximation theory. A more interesting question is the rate at which e(m) decays, which depends on the smoothness of the joint density (X, Y) ∼ P relative to the nonlinear activation family we have chosen.

For convenience, we redefine W = W_1 and β = W_2 and Z(W) = max(0, WX). We also write z(w) = max(0, ⟨w, X⟩), where (X, Y) ∼ P and w ∈ R^N is any deterministic vector. Let Σ_X = E_P{X Xᵀ} ∈ R^{N×N} be the covariance operator of the random input X. We assume ‖Σ_X‖ < ∞.

A fundamental property that will be essential to our analysis is that, despite the fact that Z is nonlinear, the quantity ⟨w_1, w_2⟩_Z := E_P{ z(w_1) z(w_2) } is locally equivalent to the linear metric ⟨w_1, w_2⟩_X = E_P{ w_1ᵀ X Xᵀ w_2 } = ⟨w_1, Σ_X w_2⟩, and that the linearization error decreases with the angle between w_1 and w_2. Without loss of generality, we assume here that ‖w_1‖ = ‖w_2‖ = 1, and we write ‖w‖_Z² = E{ |z(w)|² }.

Proposition 2.3. Let α = cos⁻¹(⟨w_1, w_2⟩) be the angle between unitary vectors w_1 and w_2 and let w_m = (w_1 + w_2)/‖w_1 + w_2‖ be their unitary bisector. Then

  ((1 + cos α)/2) ‖w_m‖_Z² − 2‖Σ_X‖ ((1 − cos α)/2 + sin α) ≤ ⟨w_1, w_2⟩_Z ≤ ((1 + cos α)/2) ‖w_m‖_Z².   (7)

The term ‖Σ_X‖ is overly pessimistic: we can replace it by the energy of X projected into the subspace spanned by w_1 and w_2 (which is bounded by 2‖Σ_X‖).
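As a numerical sanity check of Proposition 2.3 (our illustration, not from the paper), one can estimate ⟨w_1, w_2⟩_Z by Monte Carlo for standard Gaussian inputs (so Σ_X = I) and compare it against the upper bound ((1 + cos α)/2)‖w_m‖_Z² of Eq. (7); the sample size and dimension below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n = 10
X = rng.normal(size=(200_000, n))  # X ~ N(0, I), so Sigma_X = I

def rect_kernel(w1, w2):
    # Monte Carlo estimate of <w1, w2>_Z = E{ z(w1) z(w2) },  z(w) = max(0, <w, X>)
    return float(np.mean(np.maximum(0.0, X @ w1) * np.maximum(0.0, X @ w2)))

w1 = rng.normal(size=n)
w1 /= np.linalg.norm(w1)
w2 = rng.normal(size=n)
w2 /= np.linalg.norm(w2)
alpha = np.arccos(np.clip(w1 @ w2, -1.0, 1.0))
wm = (w1 + w2) / np.linalg.norm(w1 + w2)      # unitary bisector

estimate = rect_kernel(w1, w2)
upper = 0.5 * (1.0 + np.cos(alpha)) * rect_kernel(wm, wm)
print(f"alpha={alpha:.3f}  <w1,w2>_Z~{estimate:.4f}  upper bound={upper:.4f}")
# the estimate stays below the upper bound, and the bound tightens as alpha -> 0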
When α is small, a Taylor expansion of the trigonometric terms reveals that

  (2/3)‖Σ_X‖ ⟨w_1, w_2⟩ = (2/3)‖Σ_X‖ cos α = (2/3)‖Σ_X‖ (1 − α²/2 + O(α⁴))
   ≤ (1 − α²/4) ‖w_m‖_Z² − ‖Σ_X‖ (α²/4 + 2α) + O(α⁴)
   ≤ ⟨w_1, w_2⟩_Z + O(α⁴),

and similarly

  ⟨w_1, w_2⟩_Z ≤ ⟨w_1, w_2⟩ ‖w_m‖_Z² ≤ ‖Σ_X‖ ⟨w_1, w_2⟩.

The local behavior of parameters w_1, w_2 on our regression problem is thus equivalent to that of having a linear layer, provided w_1 and w_2 are sufficiently close to each other. This result can be seen as a spoiler of what is coming: increasing the hidden layer dimensionality m will increase the chances to encounter pairs of vectors w_1, w_2 with small angle; and with it some hope of approximating the previous linear behavior thanks to the small linearization error.

In order to control the connectedness, we need a last definition. Given a hidden layer of size m with current parameters W ∈ R^{n×m}, we define a "robust compressibility" factor as

  δ_W(l, α; m) = min_{‖γ‖₀ ≤ l, sup_i |∠(w̃_i, w_i)| ≤ α} E{ |Y − γ Z(W̃)|² + κ‖γ‖₁ },  (l ≤ m).   (8)

This quantity thus measures how easily one can compress the current hidden layer representation, by keeping only a subset of l of its units, but allowing these units to move by a small amount controlled by α. It is a form of n-width similar to Kolmogorov width Donoho (2006) and is also related to robust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).

2.3.3 MAIN RESULT
Our main result considers now a non-asymptotic scenario given by some fixed size m of the hidden layer. Given two parameter values θ_A = (W_1^A, W_2^A) ∈ W and θ_B = (W_1^B, W_2^B) with F_o({θ_A, θ_B}) ≤ λ, we show that there exists a continuous path γ: [0, 1] → W connecting θ_A and θ_B such that its oracle risk is uniformly bounded by max(λ, ε), where ε decreases with model overparametrization.

Theorem 2.4. For any θ_A, θ_B ∈ W and λ ∈ R satisfying F_o({θ_A, θ_B}) ≤ λ, there exists a continuous path γ: [0, 1] → W such that γ(0) = θ_A, γ(1) = θ_B and

  F_o(γ(t)) ≤ max(λ, ε), with   (9)
  ε = inf_{l, α} max{ e(l); δ_{W_1^A}(m, 0; m), δ_{W_1^A}(m − l, α; m);   (10)
       δ_{W_1^B}(m, 0; m), δ_{W_1^B}(m − l, α; m) } + C_1 α + O(α²),   (11)

where C_1 is an absolute constant depending only on κ and P.

Some remarks are in order. First, our regularization term is currently a mix between ℓ2 norm constraints on the first layer and ℓ1 norm constraints on the second layer. We believe this is an artifact of our proof technique, and we conjecture that more general regularizations yield similar results. Next, this result uses the data distribution through the oracle bound e(m) and the covariance term. The extension to empirical risk is accomplished by replacing the probability measure P by the empirical measure P̂ = (1/L) Σ_l δ((x, y) − (x_l, y_l)). However, our asymptotic analysis has to be carefully re-examined to take into account and avoid the trivial regime when m outgrows L. A consequence of Theorem 2.4 is that as m increases, the model becomes asymptotically connected, as proven in the following corollary.

Corollary 2.5. As m increases, the energy gap ε satisfies ε = O(m^{−1/n}) and therefore the level sets become connected at all energy levels.

This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016) and the general common knowledge amongst deep learning practitioners. Our next sections explore this question, and refine it by considering not only topological properties but also some rough geometrical measure of the level sets.

3 GEOMETRY OF LEVEL SETS
3.1 THE GREEDY ALGORITHM
The intuition behind our main result is that, for smooth enough loss functions and for sufficient overparameterization, it should be "easy" to connect two equally powerful models, i.e., two models with F_o({θ_A, θ_B}) ≤ λ.
A sensible measure of this ease of connectedness is the normalized length of the geodesic connecting one model to the other: |γ_{A,B}(t)| / |θ_A − θ_B|. This length represents approximately how far of an excursion one must make in the space of models relative to the Euclidean distance between a pair of models. Thus, convex models have a normalized geodesic length of 1, because the geodesic is simply linear interpolation between models, while more non-convex models have normalized geodesic lengths strictly larger than 1.

Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling. We comment on alternative algorithms in Appendix A.

For a pair of models with network parameters θ_i, θ_j, each with F_e(θ) below a threshold L0, we aim to efficiently generate paths in the space of weights where the empirical loss along the path remains below L0. These paths are continuous curves belonging to Ω_F(λ), that is, the level sets of the loss function of interest.

Algorithm 1 Greedy Dynamic String Sampling
1: L0 ← threshold below which a path will be found
2: Φ_1 ← randomly initialize θ_1, train Φ(x_i; θ_1) to L0
3: Φ_2 ← randomly initialize θ_2, train Φ(x_i; θ_2) to L0
4: BeadList ← (Φ_1, Φ_2)
5: Depth ← 0
6: procedure FINDCONNECTION(Φ_1, Φ_2)
7:   t* ← t such that dγ(θ_1, θ_2, t)/dt |_t = 0, OR t = 0.5
8:   Φ_3 ← train Φ(x_i; t*θ_1 + (1 − t*)θ_2) to L0
9:   BeadList ← insert(Φ_3, after Φ_1, BeadList)
10:  MaxError_1 ← max_t (F_e(tθ_3 + (1 − t)θ_1))
11:  MaxError_2 ← max_t (F_e(tθ_2 + (1 − t)θ_3))
12:  if MaxError_1 > L0 then return FINDCONNECTION(Φ_1, Φ_3)
13:  if MaxError_2 > L0 then return FINDCONNECTION(Φ_3, Φ_2)
14:  Depth ← Depth + 1

The algorithm recursively builds a string of models in the space of weights which continuously connect θ_i to θ_j. Models are added and trained until the pairwise linearly interpolated loss, i.e. max_t F_e(tθ_i + (1 − t)θ_j) for t ∈ (0, 1), is below the threshold L0 for every pair of neighboring models on the string. We provide a cartoon of the algorithm in Appendix C.

3.2 FAILURE CONDITIONS AND PRACTICALITIES
While the algorithm presented will faithfully certify that two models are connected if the algorithm converges, it is worth emphasizing that the algorithm does not guarantee that two models are disconnected if the algorithm fails to converge. In general, the problem of determining if two models are connected can be made arbitrarily difficult by the choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm. Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore. We comment more on diagnosing disconnections in Appendix E.

Further, if the MaxError exceeds L0 for every new recursive branch as the algorithm progresses, the worst-case runtime scales as O(exp(Depth)). Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth)), at least up until a critical value of L0.

To aid convergence, either of the choices in line 7 of the algorithm works in practice: choosing t* at a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy, while t* = 0.5 is more stable, but slower. Finally, we find that training Φ_3 to αL0 for α < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics.
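To make Algorithm 1 concrete, here is a minimal Python sketch of the greedy recursion (our illustration, not the authors' released code). It fixes t* = 0.5, represents each model as a flat NumPy parameter vector, tests each segment before subdividing rather than after inserting, and assumes two user-supplied callables: loss(θ), the empirical loss, and train_to(θ, L0), which trains parameters until the loss drops below L0. The last helper computes the normalized length |γ_{A,B}| / |θ_A − θ_B| of Section 3.1.

import numpy as np

def max_interp_loss(loss, a, b, n_grid=20):
    # Largest empirical loss along the straight segment between beads a and b.
    return max(loss((1.0 - t) * a + t * b) for t in np.linspace(0.0, 1.0, n_grid))

def find_connection(loss, train_to, a, b, L0, depth=0, max_depth=15):
    # Greedy Dynamic String Sampling with t* = 0.5: return a list of beads
    # [a, ..., b] whose pairwise linear interpolations all stay below L0.
    if depth >= max_depth:
        raise RuntimeError("recursion limit reached; no low-loss path found")
    if max_interp_loss(loss, a, b) <= L0:
        return [a, b]
    c = train_to(0.5 * (a + b), L0)  # new bead: train the midpoint down to L0
    left = find_connection(loss, train_to, a, c, L0, depth + 1, max_depth)
    right = find_connection(loss, train_to, c, b, L0, depth + 1, max_depth)
    return left[:-1] + right         # splice the strings, dropping duplicate c

def normalized_length(beads):
    # Polygonal path length divided by the distance between its endpoints;
    # equals 1 when the two endpoint models interpolate directly.
    path = sum(np.linalg.norm(beads[i + 1] - beads[i])
               for i in range(len(beads) - 1))
    return path / np.linalg.norm(beads[-1] - beads[0])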
We provide further implementation details in Section 4.

4 NUMERICAL EXPERIMENTS
For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of "beads", i.e. the number of intermediate models needed by the algorithm to connect two initial models. For all of the experiments below, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.

The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required "beads" to form a low-loss connection.

4.1 POLYNOMIAL REGRESSION
We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease of analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power law, as demonstrated in Table 1, Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.

[Figure 1 plot panels omitted: (1a)-(5b), showing Normalized Length (column a) and Number of Beads (column b) versus L0, % error on the test set, or perplexity on the test set.]
Figure 1: (Column a) Average normalized geodesic length and (Column b) average number of beads versus loss. (1) A quadratic regression task. (2) A cubic regression task. (3) A convnet for MNIST. (4) A convnet inspired by Krizhevsky for CIFAR10. (5) An RNN inspired by Zaremba for PTB next word prediction.

The cubic regression task exhibits an interesting feature around L0 = 0.15 in Table 1, Fig. 2, where the normalized length spikes, but the number of required beads remains low.
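As a quick sanity check of these diagnostics (our toy example, not one of the paper's experiments), the sketch above should certify a strictly convex quadratic loss with exactly two beads and a normalized length of 1, since convexity keeps every linear interpolation between two low-loss points below the threshold.

rng = np.random.default_rng(1)
A = np.diag([1.0, 4.0])  # strictly convex quadratic loss surface

def loss(theta):
    return float(theta @ A @ theta)

def train_to(theta, L0, lr=0.05):
    while loss(theta) > L0:          # plain gradient descent on the quadratic
        theta = theta - lr * 2.0 * (A @ theta)
    return theta

a = train_to(rng.normal(size=2), 0.01)
b = train_to(rng.normal(size=2), 0.01)
beads = find_connection(loss, train_to, a, b, L0=0.01)
print(len(beads), normalized_length(beads))  # expect: 2 1.0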
Up until this point, the cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.

4.2 CONVOLUTIONAL NEURAL NETWORKS
To test the algorithm on larger architectures, we ran it on the MNIST handwritten digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibits strong qualitative similarity with the previous models: the normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk understanding that MNIST is highly convex and/or "easy". The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.

4.3 RECURRENT NEURAL NETWORKS
To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table 1, Fig. 5. Notably, even for a radically different architecture, loss function, and dataset, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets: models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.

5 DISCUSSION
We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles, topological and geometrical aspects, that build on top of each other.

On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single-hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how 'non-convex' an optimization problem is, and verify that the optimization of quintessential deep learning tasks (CIFAR-10 and MNIST classification using CNNs, and next word prediction using LSTMs) behaves in a nearly convex fashion up until they reach high accuracy levels.

That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:

Extending Theorem 2.4 to the multilayer case. We believe this is within reach, since the main analytic tool we use is that small changes in the parameters result in small changes in the covariance structure of the features. That remains the case in the multilayer case.

Empirical versus Oracle Risk.
A big limitation of our theory is that right now it does not inform us about the differences between optimizing the empirical risk versus the oracle risk. Understanding the impact of generalization error and of stochastic gradients on the ability to do small uphill climbs is an open line of research.

Influence of symmetry groups. Under appropriate conditions, the presence of discrete symmetry groups does not prevent the loss from being connected, but at the expense of increasing the capacity. An important open question is whether one can improve the asymptotic properties by relaxing connectedness to being connected up to discrete symmetry.

Improving numerics with the Hyperplane method. Our current numerical experiments employ a greedy (albeit faster) algorithm to discover connected components and estimate geodesics. We plan to perform experiments using the less greedy algorithm described in Appendix A.

ACKNOWLEDGMENTS
We would like to thank Mark Tygert for pointing out the reference to the ε-nets and Kolmogorov capacity, and Martin Arjovsky for spotting several bugs in an early version of the results. We would also like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well as Yasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported by the NSF Graduate Research Fellowship under Grant DGE-1106400.
HJ7JiAHEl
incremental result on the loss surface of deep neural networks
2: Strong rejection
This is an incremental result (several related results, which the authors of the paper mention here, have already been published). The authors claim that they can get rid of the technical assumptions from the previous papers, but the results they propose are significantly weaker and also quite technical. The main theoretical result, Theorem 2.4, is not convincing at all. Furthermore, the paper is badly written: no theoretical intuition is given, the experimental section is weak, and in some places the formatting is wrong.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Bk0FWVcgx
ICLR.cc/2017/conference
2017
Topology and Geometry of Half-Rectified Network Optimization
["C. Daniel Freeman", "Joan Bruna"]
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such approximation and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance to what is observed in practice with very low curvature attractors.
["Theory", "Deep learning"]
ABSTRACTThe loss surface of deep neural networks has recently attracted interest in theoptimization and machine learning communities as a prime example of high-dimensional non-convex problem. Some insights were recently gained using spinglass models and mean-field approximations, but at the expense of strongly sim-plifying the nonlinear nature of the model.In this work, we do not make any such assumption and study conditions on the datadistribution and model architecture that prevent the existence of bad local minima.Our theoretical work quantifies and formalizes two important folklore facts: (i) thelandscape of deep linear networks has a radically different topology from that ofdeep half-rectified ones, and (ii) that the energy landscape in the non-linear caseis fundamentally controlled by the interplay between the smoothness of the datadistribution and model over-parametrization. Our main theoretical contributionis to prove that half-rectified single layer networks are asymptotically connected,and we provide explicit bounds that reveal the aforementioned interplay.The conditioning of gradient descent is the next challenge we address. We studythis question through the geometry of the level sets, and we introduce an algo-rithm to efficiently estimate the regularity of such sets on large-scale networks.Our empirical results show that these level sets remain connected throughout allthe learning phase, suggesting a near convex behavior, but they become exponen-tially more curvy as the energy level decays, in accordance to what is observed inpractice with very low curvature attractors.1 I NTRODUCTIONOptimization is a critical component in deep learning, governing its success in different areas ofcomputer vision, speech processing and natural language processing. The prevalent optimizationstrategy is Stochastic Gradient Descent, invented by Robbins and Munro in the 50s. The empiricalperformance of SGD on these models is better than one could expect in generic, arbitrary non-convexloss surfaces, often aided by modifications yielding significant speedups Duchi et al. (2011); Hintonet al. (2012); Ioffe & Szegedy (2015); Kingma & Ba (2014). This raises a number of theoreticalquestions as to why neural network optimization does not suffer in practice from poor local minima.The loss surface of deep neural networks has recently attracted interest in the optimization and ma-chine learning communities as a paradigmatic example of a hard, high-dimensional, non-convexproblem. Recent work has explored models from statistical physics such as spin glasses Choroman-ska et al. (2015), in order to understand the macroscopic properties of the system, but at the expenseof strongly simplifying the nonlinear nature of the model. Other authors have advocated that the realdanger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al.(2014), although recent results rigorously establish that gradient descent does not get stuck on saddlepoints Lee et al. (2016) but merely slowed down. Other notable recent contributions are Kawaguchi(2016), which further develops the spin-glass connection from Choromanska et al. (2015) and re-solves the linear case by showing that no poor local minima exist; Sagun et al. 
(2014) which alsoCurrently on leave from UC Berkeley.1Published as a conference paper at ICLR 2017discusses the impact of stochastic vs plain gradient, Soudry & Carmon (2016), that studies Empir-ical Risk Minimization for piecewise multilayer neural networks under overparametrization (whichneeds to grow with the amount of available data), and Goodfellow et al. (2014), which provided in-sightful intuitions on the loss surface of large deep learning models and partly motivated our work.Additionally, the work Safran & Shamir (2015) studies some topological properties of homogeneousnonlinear networks and shows how overparametrization acts upon these properties, and the pioneer-ing Shamir (2016) studied the distribution-specific hardness of optimizing non-convex objectives.Lastly, several papers submitted concurrently and independently of this one deserve note, particu-larly Swirszcz et al. (2016) which analyzes the explicit criteria under which sigmoid-based neuralnetworks become trapped by poor local minima, as well as Tian (2017), which offers a complemen-tary study of two layer ReLU based networks, and their learning dynamics.In this work, we do not make any linearity assumption and study conditions on the data distributionand model architecture that prevent the existence of bad local minima. The loss surface F()ofa given model can be expressed in terms of its level sets , which contain for each energy levelall parameters yielding a loss smaller or equal than . A first question we address concernsthe topology of these level sets, i.e. under which conditions they are connected. Connected levelsets imply that one can always find a descent direction at each energy level, and therefore that nopoor local minima can exist. In absence of nonlinearities, deep (linear) networks have connectedlevel sets Kawaguchi (2016). We first generalize this result to include ridge regression (in the twolayer case) and provide an alternative, more direct proof of the general case. We then move to thehalf-rectified case and show that the topology is intrinsically different and clearly dependent on theinterplay between data distribution and model architecture. Our main theoretical contribution is toprove that half-rectified single layer networks are asymptotically connected, and we provide explicitbounds that reveal the aforementioned interplay.Beyond the question of whether the loss contains poor local minima or not, the immediate follow-upquestion that determines the convergence of algorithms in practice is the local conditioning of theloss surface. It is thus related not to the topology but to the shape or geometry of the level sets.As the energy level decays, one expects the level sets to exhibit more complex irregular structures,which correspond to regions where F()has small curvature. In order to verify this intuition, weintroduce an efficient algorithm to estimate the geometric regularity of these level sets by approx-imating geodesics of each level set starting at two random boundary points. Our algorithm usesdynamic programming and can be efficiently deployed to study mid-scale CNN architectures onMNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical resultsshow that these models have a nearly convex behavior up until their lowest test errors, with a singleconnected component that becomes more elongated as the energy decays. The rest of the paper isstructured as follows. Section 2 presents our theoretical results on the topological connectednessof multilayer networks. 
Section 3 presents our path discovery algorithm and Section 4 covers thenumerical experiments.2 T OPOLOGY OF LEVEL SETSLetPbe a probability measure on a product space XY , where we assume XandYare Euclideanvector spaces for simplicity. Let f(xi;yi)gibe an iid sample of size Ldrawn from Pdefining thetraining set. We consider the classic empirical risk minimization of the formFe() =1LLXl=1k(xi;)yik2+R(); (1)where (x;)encapsulates the feature representation that uses parameters 2RSandR()is aregularization term. In a deep neural network, contains the weights and biases used in all layers.For convenience, in our analysis we will also use the oracle risk minimization:Fo() =E(X;Y)Pk(X;)Yk2+R(): (2)Our setup considers the case where Rconsists on either `1or`2norms, as we shall describe below.They correspond to well-known sparse and ridge regularization respectively.2Published as a conference paper at ICLR 20172.1 P OOR LOCAL MINIMA CHARACTERIZATION FROM TOPOLOGICAL CONNECTEDNESSWe define the level set of F()asF() =f2RS;F()g: (3)The first question we study is the structure of critical points of Fe()andFo()when is a mul-tilayer neural network. For simplicity, we consider first a strict notion of local minima: 2RSisa strict local minima of Fif there is >0withF(0)> F()for all02B(;)and06=.In particular, we are interested to know whether Fehas local minima which are not global minima.This question is answered by knowing whether F()is connected at each energy level :Proposition 2.1. IfF()is connected for all then every local minima of F()is a global minima.Strict local minima implies that rF() = 0 andHF()0, but avoids degenerate cases whereFis constant along a manifold intersecting . In that scenario, if Udenotes that manifold, ourreasoning immediately implies that if F()are connected, then for all >0there exists0withdist(0;U)andF(0)<F(). In other words, some element at the boundary of Umust be asaddle point. A stronger property that eliminates the risk of gradient descent getting stuck at Uisthatallelements at the boundary of Uare saddle points. This can be guaranteed if one can showthat there exists a path connecting any to the lowest energy level such that Fis strictly decreasingalong it.Such degenerate cases arise in deep linear networks in absence of regularization. If =(W1;:::;WK)denotes any parameter value, with N1;:::NKdenoting the hidden layer sizes, andFk2GL+Nk(R)are arbitrary elements of the general linear group of invertible NkNkmatriceswith positive determinant, thenU=fW1F11;F1W2F12;:::;FKWK;Fk2GL+Nk(R)g:In particular,Uhas a Lie Group structure. In the half-rectified nonlinear case, the general lineargroup is replaced by the Lie group of homogeneous invertible matrices Fk=diag(1;:::;Nk)withj>0.This proposition shows that a sufficient condition to prevent the existence of poor local minima ishaving connected level sets, but this condition is not necessary: one can have isolated local minimalying at the same energy level. This can be the case in systems that are defined up to a discretesymmetry group, such as multilayer neural networks. However, as we shall see next, this case putsthe system in a brittle position, since one needs to be able to account for all the local minima (andthere can be exponentially many of them as the parameter dimensionality increases) and verify thattheir energy is indeed equal.2.2 T HELINEAR CASEWe first consider the particularly simple case where Fis a multilayer network defined by(x;) =WK:::W 1x; = (W1;:::;WK): (4)and the ridge regression R() =kk2. 
This model defines a non-convex (and non-concave) lossFe(). When= 0, it has been shown in Saxe et al. (2013) and Kawaguchi (2016) that in this case,every local minima is a global minima. We provide here an alternative proof of that result that usesa somewhat simpler argument and allows for >0in the caseK= 2.Proposition 2.2. LetW1;W2;:::;WKbe weight matrices of sizes nknk+1,k < K , and letFe(),Fo()denote the risk minimizations using as in (4). Assume that njmin(n1;nK)forj= 2:::K1. Then Fe()(and Fo) is connected for all and allKwhen= 0, and for>0whenK= 2; and therefore there are no poor local minima in these cases. Moreover, any can be connected to the lowest energy level with a strictly decreasing path.Let us highlight that this result is slightly complementary than that of Kawaguchi (2016), Theorem2.3. Whereas we require njmin(n1;nK)forj= 2:::K1and our analysis does not informabout the order of the saddle points, we do not need full rank assumptions on Xnor the weightsWk.3Published as a conference paper at ICLR 2017This result does also highlight a certain mismatch between the picture of having no poor local min-ima and generalization error. Incorporating regularization drastically changes the topology, and thefact that we are able to show connectedness only in the two-layer case with ridge regression is pro-found; we conjecture that extending it to deeper models requires a different regularization, perhapsusing more general atomic norms Bach (2013). But we now move our interest to the nonlinear case,which is more relevant to our purposes.2.3 H ALF-RECTIFIED NONLINEAR CASEWe now study the setting given by(x;) =WKWK1:::W 1x; = (W1;:::;WK); (5)where(z) = max(0 ;z). The biases can be implemented by replacing the input vector xwithx= (x;1)and by rebranding each parameter matrix asWi=Wibi01;wherebicontains the biases for each layer. For simplicity, we continue to use Wiandxin thefollowing.2.3.1 N ONLINEAR MODELS ARE GENERALLY DISCONNECTEDOne may wonder whether the same phenomena of global connectedness also holds in the half-rectified case. A simple motivating counterexample shows that this is not the case in general. Con-sider a simple setup with X2R2drawn from a mixture of two Gaussians N1andN1, and letY= (XZ)Z, whereZis the (hidden) mixture component taking f1;1gvalues. Let^Y= (X;fW1;W2g)be a single-hidden layer ReLU network, with two hidden units. Let Abea configuration that bisects the two mixture components, and let Bthe same configuration, butswapping the bisectrices. One can verify that they can both achieve arbitrarily small risk by lettingthe covariance of the mixture components go to 0. However, any path that connects AtoBmustnecessarily pass through a point in which W1has rank 1, which leads to an estimator with risk atleast1=2.In fact, it is easy to see that this counter-example can be extended to any generic half-rectified ar-chitecture, if one is allowed to adversarially design a data distribution. For any given (X;)witharbitrary architecture and current parameters = (Wi), letP=fA1;:::;ASgbe the underly-ing tessellation of the input space given by our current choice of parameters; that is, (X;)ispiece-wise linear and Pcontains those pieces. Now let Xbe any arbitrary distribution with densityp(x)>0for allx2Rn, for example a Gaussian, and let YjXd= (X;). 
Since is invariantunder a subgroup of permutations of its hidden layers, it is easy to see that one can find two pa-rameter values A=andB=such thatFo(A) =Fo(B) = 0 , but any continuous path (t)fromAtoBwill have a different tessellation and therefore won’t satisfy Fo((t)) = 0 . Moreover,one can build on this counter-example to show that not only the level sets are disconnected, but alsothat there exist poor local minima. Let 0be a different set of parameters, and Y0jXd= (X;0)be a different target distribution. Now consider the data distribution given by the mixtureXjp(x); zBernoulli (); YjX;zd=z(X;) + (1z)(X;0):By adjusting the mixture component we can clearly change the risk at and0and make themdifferent, but we conjecture that this preserves the status of local minima of and0. Appendix Econstructs a counter-example numerically.This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guaranteesthat do not depend upon the data distribution. This difficulty is non-existent in the linear case andnot easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows thatin general we should not expect to obtain connected level sets. However, connectedness can berecovered if one is willing to accept a small increase of energy and make some assumptions on thecomplexity of the regression task. Our main result shows that the amount by which the energy isallowed to increase is upper bounded by a quantity that trades-off model overparametrization andsmoothness in the data distribution.4Published as a conference paper at ICLR 2017For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assumeY2Rand let us first consider the case with a single hidden layer and `1regularization:R() =kk1.2.3.2 P RELIMINARIESBefore proving our main result, we need to introduce preliminary notation and results. We firstdescribe the case with a single hidden layer of size m.We definee(m) = minW12Rmn;kW1(i)k21;W22RmEfj(X;)Yj2g+kW2k1: (6)to be the oracle risk using mhidden units with norm 1and using sparse regression. It is a wellknown result by Hornik and Cybenko that a single hidden layer is a universal approximator undervery mild assumptions, i.e. limm!1e(m) = 0 . This result merely states that our statistical setup isconsistent, and it should not be surprising to the reader familiar with classic approximation theory.A more interesting question is the rate at which e(m)decays, which depends on the smoothness ofthe joint density (X;Y )Prelative to the nonlinear activation family we have chosen.For convenience, we redefine W=W1and=W2andZ(W) = max(0;WX ). We also writez(w) = max(0;hw;Xi)where (X;Y )Pandw2RNis any deterministic vector. Let X=EPXXT2RNNbe the covariance operator of the random input X. We assumekXk<1.A fundamental property that will be essential to our analysis is that, despite the fact that Zisnonlinear, the quantity [w1;w2]Z:=EPfz(w1)z(w2)gis locally equivalent to the linear metrichw1;w2iX=EPfwT1XXTw2g=hw1;Xw2i, and that the linearization error decreases with theangle between w1andw2. Without loss of generality, we assume here that kw1k=kw2k= 1, andwe writekwk2Z=Efjz(w)j2g.Proposition 2.3. Let= cos1(hw1;w2i)be the angle between unitary vectors w1andw2and letwm=w1+w2kw1+w2kbe their unitary bisector. Then1 + cos2kwmk2Z2kXk1cos2+ sin2[w1;w2]Z1 + cos2kwmk2Z: (7)The termkXkis overly pessimistic: we can replace it by the energy of Xprojected into thesubspace spanned by w1andw2(which is bounded by 2kXk). 
Whenis small, a Taylor expansionof the trigonometric terms reveals that23kXkhw1;w2i=23kXkcos=23kXk(122+O(4))(12=4)kwmk2ZkXk(2=4 +2) +O(4)[w1;w2]Z+O(4);and similarly[w1;w2]Zhw1;w2ikwmk2ZkXkhw1;w2i:The local behavior of parameters w1;w2on our regression problem is thus equivalent to that of hav-ing a linear layer, provided w1andw2are sufficiently close to each other. This result can be seen asaspoiler of what is coming: increasing the hidden layer dimensionality mwill increase the chancesto encounter pairs of vectors w1;w2with small angle; and with it some hope of approximating theprevious linear behavior thanks to the small linearization error.In order to control the connectedness, we need a last definition. Given a hidden layer of size mwithcurrent parameters W2Rnm, we define a “robust compressibility” factor asW(l;;m) = minkk0l;supij\( ~wi;wi)jEfjYZ(~W)j2+kk1g;(lm): (8)This quantity thus measures how easily one can compress the current hidden layer representation,by keeping only a subset of lits units, but allowing these units to move by a small amount controlledby. It is a form of n-width similar to Kolmogorov width Donoho (2006) and is also related torobust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).5Published as a conference paper at ICLR 20172.3.3 M AIN RESULTOur main result considers now a non-asymptotic scenario given by some fixed size mof the hid-den layer. Given two parameter values A= (WA1;WA2)2 W andB= (WB1;WB2)withFo(fA;Bg), we show that there exists a continuous path : [0;1]! W connectingAandBsuch that its oracle risk is uniformly bounded by max(;), wheredecreases with modeloverparametrization.Theorem 2.4. For anyA;B2W and2RsatisfyingFo(fA;Bg), there exists a continuouspath: [0;1]!W such that(0) =A,(1) =BandFo((t))max(;);with (9)= infl;maxne(l);WA1(m;0;m);WA1(ml;;m); (10)WB1(m;0;m);WB1(ml;;m)o+C1+O(2); (11)whereC1is an absolute constant depending only on andP.Some remarks are in order. First, our regularization term is currently a mix between `2norm con-straints on the first layer and `1norm constraints on the second layer. We believe this is an artifact ofour proof technique, and we conjecture that more general regularizations yield similar results. Next,this result uses the data distribution through the oracle bound e(m)and the covariance term. Theextension to empirical risk is accomplished by replacing the probability measure Pby the empiricalmeasure ^P=1LPl((x;y)(xl;yl)). However, our asymptotic analysis has to be carefully re-examined to take into account and avoid the trivial regime when MoutgrowsL. A consequence ofTheorem 2.4 is that as mincreases, the model becomes asymptotically connected, as proven in thefollowing corollary.Corollary 2.5. Asmincreases, the energy gap satisfies=O(m1n)and therefore the level setsbecome connected at all energy levels.This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016)and the general common knowledge amongst deep learning practitioners. Our next sections ex-plore this question, and refine it by considering not only topological properties but also some roughgeometrical measure of the level sets.3 G EOMETRY OF LEVEL SETS3.1 T HEGREEDY ALGORITHMThe intuition behind our main result is that, for smooth enough loss functions and for sufficientoverparameterization, it should be “easy” to connect two equally powerful models—i.e., two modelswithFoA;B. 
A sensible measure of this ease-of-connectedness is the normalized length of the geodesic connecting one model to the other: |γ_{A,B}(t)| / |θ_A − θ_B|. This length represents approximately how far of an excursion one must make in the space of models relative to the euclidean distance between a pair of models. Thus, convex models have a geodesic length of 1, because the geodesic is simply linear interpolation between models, while more non-convex models have geodesic lengths strictly larger than 1.

Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling. We comment on alternative algorithms in Appendix A.

For a pair of models with network parameters θ_i, θ_j, each with F_e(θ) below a threshold L_0, we aim to efficiently generate paths in the space of weights where the empirical loss along the path remains below L_0. These paths are continuous curves belonging to the level sets of the loss function of interest.

Algorithm 1 Greedy Dynamic String Sampling
1:  L_0 ← Threshold below which path will be found
2:  θ_1 ← randomly initialize θ_1, train Φ(x_i; θ_1) to L_0
3:  θ_2 ← randomly initialize θ_2, train Φ(x_i; θ_2) to L_0
4:  BeadList ← (θ_1, θ_2)
5:  Depth ← 0
6:  procedure FindConnection(θ_1, θ_2)
7:    t* ← t such that dγ(θ_1, θ_2, t)/dt |_{t*} = 0 OR t* = 0.5
8:    θ_3 ← train Φ(x_i; t* θ_1 + (1 − t*) θ_2) to L_0
9:    BeadList ← insert(θ_3, after θ_1, BeadList)
10:   MaxError_1 ← max_t (F_e(t θ_3 + (1 − t) θ_1))
11:   MaxError_2 ← max_t (F_e(t θ_2 + (1 − t) θ_3))
12:   if MaxError_1 > L_0 then return FindConnection(θ_1, θ_3)
13:   if MaxError_2 > L_0 then return FindConnection(θ_3, θ_2)
14:   Depth ← Depth + 1

The algorithm recursively builds a string of models in the space of weights which continuously connect θ_i to θ_j. Models are added and trained until the pairwise linearly interpolated loss, i.e. max_t F_e(t θ_i + (1 − t) θ_j) for t ∈ (0, 1), is below the threshold, L_0, for every pair of neighboring models on the string. We provide a cartoon of the algorithm in Appendix C.

3.2 FAILURE CONDITIONS AND PRACTICALITIES

While the algorithm presented will faithfully certify two models are connected if the algorithm converges, it is worth emphasizing that the algorithm does not guarantee that two models are disconnected if the algorithm fails to converge. In general, the problem of determining if two models are connected can be made arbitrarily difficult by choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm. Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore. We comment on diagnosing disconnections more carefully in Appendix E.

Further, if the MaxError exceeds L_0 for every new recursive branch as the algorithm progresses, the worst case runtime scales as O(exp(Depth)). Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth))—at least up until a critical value of L_0.

To aid convergence, either of the choices in line 7 of the algorithm works in practice—choosing t* at a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy. t* = 0.5 is more stable, but slower. Finally, we find that training θ_3 to αL_0 for α < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics.
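To make the recursion concrete, here is a minimal Python sketch of the greedy procedure (our own illustration, not the authors' released implementation); train_to_threshold and max_interpolated_loss are stand-ins for the model-specific training and loss-evaluation routines, and we use the stable t* = 0.5 bisection variant:

import numpy as np

def dynamic_string_sampling(theta1, theta2, L0, train_to_threshold,
                            max_interpolated_loss, max_depth=12):
    """Greedy bisection sketch of Algorithm 1.

    theta1, theta2: flattened weight vectors already trained to loss <= L0.
    train_to_threshold(theta): trains from init theta until F_e(theta) <= L0.
    max_interpolated_loss(a, b): max_t F_e(t*a + (1-t)*b) over a grid of t.
    Returns a "bead list" of weights whose pairwise interpolations stay <= L0.
    """
    beads = [theta1, theta2]

    def connect(i, depth):
        if depth > max_depth:
            # Failure to converge is not proof of disconnection (Section 3.2).
            raise RuntimeError("no low-loss path found within depth budget")
        a, b = beads[i], beads[i + 1]
        if max_interpolated_loss(a, b) <= L0:
            return
        mid = train_to_threshold(0.5 * (a + b))   # the t* = 0.5 choice in line 7
        beads.insert(i + 1, mid)
        connect(i + 1, depth + 1)   # repair the right segment first
        connect(i, depth + 1)       # then the left segment

    i = 0
    while i < len(beads) - 1:
        connect(i, 0)
        i += 1
    return beads

The normalized geodesic length reported in the experiments would then be approximated by summing the euclidean lengths of the segments between consecutive beads and dividing by |theta1 - theta2|.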
We provide further implementation details in Section 4.

4 NUMERICAL EXPERIMENTS

For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of "beads", or the number of intermediate models needed by the algorithm to connect two initial models. For all of the below experiments, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.

The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required "beads" to form a low-loss connection.

4.1 POLYNOMIAL REGRESSION

We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease-of-analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power-law, as demonstrated in Table 1 Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.

[Figure 1 panels: normalized geodesic length (column a) and number of beads (column b), each plotted against loss, test error, or test perplexity for the five tasks; axis-tick data omitted.]

Figure 1: (Column a) Average normalized geodesic length and (Column b) average number of beads versus loss. (1) A quadratic regression task. (2) A cubic regression task. (3) A convnet for MNIST. (4) A convnet inspired by Krizhevsky for CIFAR10. (5) An RNN inspired by Zaremba for PTB next word prediction.

The cubic regression task exhibits an interesting feature around L_0 = 0.15 in Table 1 Fig. 2, where the normalized length spikes, but the number of required beads remains low.
Up until this point, the cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.

4.2 CONVOLUTIONAL NEURAL NETWORKS

To test the algorithm on larger architectures, we ran it on the MNIST hand written digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibit strong qualitative similarity with the previous models: normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk understanding that MNIST is highly convex and/or "easy". The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.

4.3 RECURRENT NEURAL NETWORKS

To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table 1 Fig. 5. Notably, even for a radically different architecture, loss function, and data set, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets—i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.

5 DISCUSSION

We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles – topological and geometrical aspects – that build on top of each other.

On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how 'non-convex' an optimization problem is, and verify that the optimization of quintessential deep learning tasks – CIFAR-10 and MNIST classification using CNNs, and next word prediction using LSTMs – behaves in a nearly convex fashion up until they reach high accuracy levels.

That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:

Extending Theorem 2.4 to the multilayer case. We believe this is within reach, since the main analytic tool we use is that small changes in the parameters result in small changes in the covariance structure of the features. That remains the case in the multilayer case.

Empirical versus Oracle Risk.
A big limitation of our theory is that right now it does not inform us on the differences between optimizing the empirical risk versus the oracle risk. Understanding the impact of generalization error and stochastic gradient in the ability to do small uphill climbs is an open line of research.

Influence of symmetry groups. Under appropriate conditions, the presence of discrete symmetry groups does not prevent the loss from being connected, but at the expense of increasing the capacity. An important open question is whether one can improve the asymptotic properties by relaxing connectedness to being connected up to discrete symmetry.

Improving numerics with Hyperplane method. Our current numerical experiments employ a greedy (albeit faster) algorithm to discover connected components and estimate geodesics. We plan to perform experiments using the less greedy algorithm described in Appendix A.

ACKNOWLEDGMENTS

We would like to thank Mark Tygert for pointing out the reference to the ε-nets and Kolmogorov capacity, and Martin Arjovsky for spotting several bugs in early versions of the results. We would also like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well as Yasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported by the NSF Graduate Research Fellowship under Grant DGE-1106400.
S1vHT3l4l
Insightful Results
8: Top 50% of accepted papers, clear accept
This work contributes to understanding the landscape of deep networks in terms of its topology and geometry. The paper analyzes the former theoretically, and studies the latter empirically. Although the provided contributions are very specific (ReLU nets with a single hidden layer, and a heuristic to calculate the normalized geodesic), the results are original and of interest. Thus, they could potentially be used as stepping stones for deeper developments in this area.

Pros:
1. Providing new theory about the existence of "poor" local minima for ReLU networks with a single hidden layer, which relies on input distribution properties as well as the size of the hidden layer.
2. Coming up with a heuristic algorithm to compute the normalized geodesic between two solution points. The latter reflects how curved the path between the two is.

Cons: The results are very specific in both the topology and geometry analysis.
1. The analysis is performed only over a "single" hidden layer ReLU network. Given the importance of depth in deep architectures, this result cannot really explain the kinds of architectures we are interested in practically.
2. The normalized geodesic criterion is somewhat limited in representing how easy it is to connect two equally good points. For example, there might exist a straight line between the two (which is considered as easy by the geodesic criterion), but this line might be going through a very narrow valley, challenging gradient-based optimization algorithms (and thus extremely difficult to navigate in practice). In addition, the proposed algorithm for computing the normalized geodesic is a greedy heuristic, which, as far as I can tell, makes it difficult to know how far we can trust the estimated geodesics obtained by this algorithm.

With all cons said, I stress that I understand both problems tackled in the paper are challenging, and thus I find the contributions valuable and interesting.
3: The reviewer is fairly confident that the evaluation is correct
SJttqw5ge
ICLR.cc/2017/conference
2017
Communicating Hierarchical Neural Controllers for Learning Zero-shot Task Generalization
["Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli"]
The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages to learn subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.
["Reinforcement Learning", "Deep learning"]
ABSTRACT

The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.

1 INTRODUCTION

Humans can often generalize to novel tasks even without any additional learning by leveraging past learning experience. We would like our artificial agents to have similar "zero-shot" generalization capabilities. For example, after learning to solve tasks with instructions such as 'Go to X (or Y)' and 'Pick up Y (or Z)', our agent should be able to infer the underlying goal of new tasks with instructions like 'Go to Z', which requires disentangling the verbs ('Go to/Pick up') and the nouns/objects ('X, Y, Z'). Furthermore, we would like our agents to learn to compose policies to solve novel tasks composed of sequences of seen and unseen instructions. Developing the ability to achieve such generalizations is a key challenge in artificial intelligence and the subfield of reinforcement learning (RL).

Figure 1: Example of grid-world and instructions. The agent is tasked to execute longer sequences of instructions after being trained on short sequences of instructions; in addition previously unseen instructions can be given during evaluation (blue text). The agent can get more rewards if it deals with randomly appearing enemies (red outlined box) regardless of current instructions.

In this paper, we study the problem of zero-shot task generalization in RL by introducing the "instruction execution" problem where the agent is required to learn through interaction with its environment how to achieve an overall task specified by a list of high-level instructions (see Figure 1). As motivation for this problem consider a human owner training its new household robot to execute complex tasks specified by natural language text that decompose the task into a sequence of instructions. Given that it is infeasible to explicitly train the robot on all possible instruction-sequences, this problem involves two types of generalizations: to unseen and longer sequences of previously seen instructions, and sequences where some of the instructions themselves were previously not seen. Of course, the usual RL problem of learning policies through interaction to accomplish the goals of an instruction remains part of the problem as well.
We assume that the agent does not receive any signal on completing or failing to complete individual instructions from the environment/owner, and so the informative reward signal is delayed until the end. Furthermore, there can be random events in the environment that require the agent to interrupt whatever it is doing and deviate from the instructions to maintain some background task as described in Figure 1. Altogether this makes for a challenging zero-shot task generalization RL problem.

Brief Background: RL tasks composed of sequences of subtasks have been studied before, and a number of hierarchical RL approaches have been designed for them. Typically these have the form of a meta controller and a set of lower-level controllers for subtasks (Sutton et al., 1999; Dietterich, 2000; Parr and Russell, 1997). The meta controller is limited to selecting one from a set of lower-level controllers to employ at any time. This makes it impossible for the low-level controller to generalize to new subtasks without training a new low-level controller separately. Much of the previous work also assumes that the overall task is fixed (e.g., Taxi domain (Dietterich, 2000; Ghavamzadeh and Mahadevan, 2003)). Transfer learning across multiple compositional tasks has typically been studied in RL formulations in which new tasks are only presented via a new reward function from the environment (Singh, 1991; 1992), and so there is no opportunity for fast model-free generalization. To the best of our knowledge, zero-shot model-free generalization to new or longer tasks as well as unseen tasks has not been well-studied in the RL setting.

Our Architecture: This paper presents a hierarchical deep RL architecture (see Figure 2) that consists of two interacting neural controllers: a meta controller that repeatedly chooses an instruction and, conditioned on the current state of the environment, translates it into subtask-arguments (details on this in later sections) and communicates those to the subtask controller that in turn chooses primitive actions given the subtask. This makes the subtask controller a parameterized option (Sutton et al., 1999) module in which the parameters are the subtask-arguments mentioned above. On top of the subtask controller, the meta controller is trained to select proper subtask-arguments depending on observations from the environment, feedback from the subtask controller about termination, and the task instructions. In order to generalize over unseen instructions, we propose analogy-making regularization (discussed in Section 4.1) which encourages learning subtask embeddings that capture correspondences between similar subtasks. In addition, we propose a new differentiable neural architecture in the meta controller that implicitly learns temporal abstractions so that it can operate at a larger time-scale and update the subtask-arguments to the subtask controller only when needed.

Our Results: We developed a 2D grid world environment where the agent can interact with many objects as illustrated in Figure 1, based on MazeBase (Sukhbaatar et al., 2015) (see Section 6.1 for details). The empirical results show that the meta controller's ability to learn temporal abstractions and a form of analogy-making regularization were all key in allowing our hierarchical architecture to generalize in a zero-shot fashion to unseen tasks.
We also demonstrated that the same architecture can generalize to unseen and longer instructions in a 3D visual environment.

2 RELATED WORK

Hierarchical Reinforcement Learning. In addition to the hierarchical RL described in Section 1, there is a line of work on portable options for solving sequential tasks (Konidaris et al., 2012; Konidaris and Barto, 2007). They proposed agent-space options that can be re-used to deal with new problems. However, the optimal sequence of options (e.g., picking up a key followed by opening a door) is fixed throughout training and evaluation in their problem. On the other hand, the agent is required to perform new sequences of tasks depending on given instructions in our work. Our work is also closely related to Programmable HAM (PHAM) (Andre and Russell, 2000; 2002) in that PHAM is designed to execute a given program. However, the program explicitly specifies the policy in PHAM, which effectively reduces the state-action space. In contrast, a list of instructions is a partial description of the task in our work, which means that the policy is not forced to follow the instructions but to use them as a guide to maximize its reward. For example, interrupt conditions need to be manually specified by the program in PHAM, while they are not specified in the instructions but should be learned by the agent in our framework.

Hierarchical RL has been recently combined with deep learning. Kulkarni et al. (2016) proposed hierarchical Deep Q-Learning and demonstrated improved exploration in a challenging Atari game. Tessler et al. (2016) proposed a similar architecture that allows the high-level controller to choose primitive actions directly. Bacon and Precup (2015) proposed the option-critic architecture which learns options without any domain knowledge and demonstrated that it can learn distinct options in Atari games. Vezhnevets et al. (2016) proposed a deep architecture that automatically learns macro-actions. Unlike these recent works that aim to solve a single task, the goal of our work is to build a multi-task policy that can generalize over many different sequences of tasks.

Zero-shot Task Generalization and Parameterized Option. There have been only a few studies that aim to generalize over new tasks in a zero-shot fashion (i.e., without additional learning). da Silva et al. (2012) proposed the concept of parameterized skill which maps a set of task descriptions to policies. Similarly, Isele et al. (2016) proposed a method for zero-shot task generalization which uses task descriptors to predict the parameters of the policy and proposed coupled dictionary learning with sparsity constraints to enable zero-shot learning. Schaul et al. (2015) proposed universal value function approximators (UVFA) that learn a value function given a state and goal pair and showed that their framework can generalize over unseen goals. Borsa et al. (2016) proposed to learn a representation of state and action shared across different tasks. However, the proposed approach lacks the ability to solve new tasks in a zero-shot way. Our subtask controller implements the idea of parameterized skill or universal option. Unlike the previous works, however, we propose to build a high-level controller (meta controller) on top of the subtask controller to deal with sequential tasks.

Instruction Execution. There has been a line of work on building agents that can execute natural language instructions: Tellex et al. (2011; 2014) for robotics and MacMahon et al.
(2006); Chen and Mooney (2011); Mei et al. (2015) for a simulated environment. However, these approaches focus on natural language understanding to map instructions to a sequence of actions or groundings in a supervised setting. In contrast, we focus on generalization to different sequences of instructions without any supervision for language understanding or for actions. Branavan et al. (2009) also tackle a similar problem of mapping from natural language instructions to a sequence of actions through RL. However, the agent is given a single sentence at a time from the environment, while the agent has to deal with a full list of instructions in our problem. In addition, they do not discuss how to deal with unseen instructions, which is the main focus of our paper.

3 OVERVIEW

Goal. We aim to learn a multi-task policy which is a mapping π: S × M → A, where S is a set of states (or observations), M is a set of lists of instructions, and A is a set of primitive actions. More importantly, since M can be arbitrarily large, our goal is to find an optimal policy π on a very small set of lists of instructions M′ ⊂ M such that π is also optimal in the entire set of lists of instructions M.

Hierarchical Structure and Communication Protocol. As illustrated in Figure 2, the proposed architecture consists of a meta controller which selects a subtask and a subtask controller which executes the given subtask. The subtask is further decomposed into several arguments. More specifically, a space of subtasks G is defined using the Cartesian product of their arguments G(1) × ... × G(n), where G(i) is a set of the i-th arguments (e.g., G = {Visit, Pick up} × {A, B}). In addition, the subtask controller provides useful information to the meta controller by giving a terminal signal for the given subtask. This communication protocol allows each controller to not only focus on their own independent roles but also communicate with each other to learn a complex closed-loop policy.

Subtask Controller. The subtask controller is a mapping S × G → A × B which maps a state and a subtask to an action and a termination signal (B = {0, 1}) indicating whether the subtask is finished or not. The subtask controller is trained prior to training the meta controller. The main challenge for the subtask controller is that only a subset of subtasks (U ⊂ G) is observed during training, and it should be able to generalize over unseen subtasks without experiencing them. Section 4 describes how to construct the subtask architecture parameterized by a neural network and discusses how to generalize over unseen subtasks.

Meta Controller. The meta controller is a mapping S × M × G × B → G which decides a subtask from a state, a list of instructions, a subtask that is currently being executed, and whether the subtask is finished as input. Thus, the meta controller should understand natural language instructions and pass proper subtask arguments to the subtask controller.

[Figure 3 diagrams: (a) the subtask controller, mapping an observation and a subtask embedding to an action and a termination signal; (b) the meta controller, combining the observation, context, instruction memory, and subtask termination signal in a subtask updater that outputs subtask arguments.]

Figure 3: Proposed neural network architectures. See text for details.

It is important to note that natural language instructions are not directly subtasks; indeed there is not a one-to-one correspondence between instructions and subtask-arguments. This is due to a number of important reasons.
First, instructions such as 'Pick up all X' are executed by repeatedly solving a subtask [Pick up, X]. Second, the meta controller sometimes needs to interrupt ongoing subtasks and replace them with other subtasks that are not relevant to the instruction because of the background task based on the stochastic events as described in Figure 1.

Another challenge for the meta controller is that it should deal with partial observability induced by the list of instructions. This is because the agent is not given which instruction to execute at each time-step from the environment but given just a full list of instructions. Thus, the meta controller should remember how many instructions it has executed and decide when to move to the next instruction. Section 5.1 describes how to construct a memory-based neural network to deal with this challenge.

Finally, it is desirable for the meta controller to operate at a larger time-scale due to the fact that a subtask does not change frequently once it is chosen. We describe a novel way to implicitly learn such a temporal scale of the meta controller through neural networks in Section 5.2.

4 SUBTASK CONTROLLER

Given an observation s_t ∈ S and subtask arguments g = (g(1), ..., g(n)) ∈ G, the subtask controller is defined as the following functions:

Policy: π(a_t | s_t, g)    Termination: β(b_t | s_t, g) = P(s_t ∈ T_g)

where π is the policy optimized for the subtask. β is a termination function which gives the probability that the state is terminal or not for the given subtask. T_g is the set of terminal states. The subtask controller is parameterized by θ which is represented by a neural network as illustrated in Figure 3a. The network learns a representation of the subtask φ(g), and it is used to condition the entire network through multiplicative interactions as suggested by Memisevic and Hinton (2010); Lei Ba et al. (2015); Bertinetto et al. (2016). Further details are described in Appendix F.

4.1 LEARNING TO GENERALIZE BY ANALOGY-MAKING

When learning a non-linear subtask embedding from arguments, φ(g), it is desirable for the network to learn prior knowledge about the relationship between different subtask arguments in order to infer the goal of unseen configurations of arguments. To this end, we propose a novel analogy-making regularizer inspired by Reed et al. (2015); Hadsell et al. (2006); Reed et al. (2014). The main idea is to learn correspondences between subtasks. For example, if target objects and 'Visit/Pick up' tasks are independent, we can enforce [Visit, X] : [Visit, Y] :: [Pick up, X] : [Pick up, Y] for any X and Y in the embedding space so that the agent learns to perform [Pick up, Y] as it performs [Pick up, X] and vice versa.

More specifically, we define several constraints as follows:

‖φ(g_A) − φ(g_B) − φ(g_C) + φ(g_D)‖ ≈ 0  if g_A : g_B :: g_C : g_D  (1)
‖φ(g_A) − φ(g_B) − φ(g_C) + φ(g_D)‖ ≥ τ_dis  if g_A : g_B ≠ g_C : g_D  (2)
‖φ(g_A) − φ(g_B)‖ ≥ τ_diff  if g_A ≠ g_B  (3)

where g_k = (g_k(1), g_k(2), ..., g_k(n)) ∈ G are subtask arguments. Eq. (1) represents the analogy-making relationship, while Eq. (2) and Eq. (3) prevent trivial solutions. To satisfy the above constraints, we propose the following objective functions based on contrastive loss (Hadsell et al., 2006):

L_sim = E_{(g_A, g_B, g_C, g_D) ∼ G_sim}[ ‖φ(g_A) − φ(g_B) − φ(g_C) + φ(g_D)‖² ]  (4)
L_dis = E_{(g_A, g_B, g_C, g_D) ∼ G_dis}[ max(0, τ_dis − ‖φ(g_A) − φ(g_B) − φ(g_C) + φ(g_D)‖)² ]  (5)
L_diff = E_{(g_A, g_B) ∼ G_diff}[ max(0, τ_diff − ‖φ(g_A) − φ(g_B)‖)² ]  (6)

where G_sim, G_dis, G_diff consist of subtask arguments satisfying the conditions in Eq. (1), Eq. (2) and Eq. (3) respectively. τ_dis, τ_diff are threshold distances (hyperparameters).
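For illustration, here is a minimal PyTorch-style sketch of the three contrastive terms above (our own code, not the authors'; phi is a stand-in embedding module, and the sampling of G_sim, G_dis, G_diff is left abstract):

import torch

def analogy_losses(phi, quad_sim, quad_dis, pair_diff, tau_dis=1.0, tau_diff=1.0):
    """Sketch of Eqs. (4)-(6); phi maps subtask arguments to embeddings.

    quad_sim / quad_dis: tuples (gA, gB, gC, gD) of argument batches that do /
    do not form analogies; pair_diff: (gA, gB) with gA != gB. The thresholds
    tau_dis, tau_diff and the sampling of these sets are hyperparameters.
    """
    gA, gB, gC, gD = (phi(g) for g in quad_sim)
    L_sim = (gA - gB - gC + gD).pow(2).sum(dim=1).mean()          # Eq. (4)

    gA, gB, gC, gD = (phi(g) for g in quad_dis)
    d = (gA - gB - gC + gD).norm(dim=1)
    L_dis = torch.clamp(tau_dis - d, min=0).pow(2).mean()         # Eq. (5)

    gA, gB = (phi(g) for g in pair_diff)
    d = (gA - gB).norm(dim=1)
    L_diff = torch.clamp(tau_diff - d, min=0).pow(2).mean()       # Eq. (6)
    return L_sim, L_dis, L_diff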
The final analogy-making regularizer is the weighted sum of the above three objectives.

Analogies Under Non-independence. Although we use the analogy-making regularizer so that all configurations of subtask arguments are valid and independent from each other throughout the main experiment, our analogy-making regularizer can also be used to inject prior knowledge so that the agent generalizes to unseen subtasks in a specific way. For example, if some objects should be handled in a different way given the same subtask, we can apply the analogy-making regularizer so that Eq. (1) is satisfied only between the same type of objects. This is further discussed in Appendix B.

4.2 TRAINING

The subtask controller is trained on a subset of subtasks (U ⊂ G) by directly providing subtask arguments. The policy of the subtask controller is trained through the actor-critic method (Konda and Tsitsiklis, 1999) with generalized advantage estimation (GAE) (Schulman et al., 2015). We also found that pre-training the subtask controller through policy distillation (Rusu et al., 2015; Parisotto et al., 2015) gives slightly better results. The idea of policy distillation is to train separate policies for each subtask and use them to provide action labels to train the subtask controller. Throughout training, the subtask controller is also made to predict whether the current state is terminal or not through a binary classification objective, and the analogy-making regularizer is applied to the subtask embedding separately. The full details of the learning objectives are described in Appendix E.1.

5 META CONTROLLER

The role of the meta controller is to decide subtask arguments g_t ∈ G from an observation s_t ∈ S, a list of instructions M ∈ M, the previously selected subtask g_{t−1}, and its termination signal (b) from the subtask controller. Section 5.1 describes the overall architecture of the meta controller for dealing with the partial observability induced by the list of instructions as discussed in Section 3. We describe a novel way to learn the time-scale of the meta controller so that it can implicitly operate at a large time-scale in Section 5.2.

5.1 ARCHITECTURE

In order to keep track of its progress on instruction execution, the meta controller maintains its internal state by computing a context vector (described in Section 5.1.1) and by focusing on one instruction at a time from the list of instructions M (described in Section 5.1.2). The entire architecture is illustrated in Figure 3b and further details are described in Appendix F.

5.1.1 CONTEXT

Given the sentence embedding r_{t−1} retrieved at the previous time-step from the instructions (described in Section 5.1.2), the previously selected subtask g_{t−1}, and the subtask termination b_t ∼ β(b_t | s_t, g_{t−1}), the meta controller computes the context vector (h_t) through a neural network:

h_t = f(s_t, r_{t−1}, g_{t−1}, b_t)

where f is a neural network parameterized by θ. Intuitively, g_{t−1} and b_t provide information about which subtask was being solved by the subtask controller and whether it has been finished or not. Note that the subtask does not necessarily match the retrieved instruction (r_{t−1}), e.g., when the agent is dealing with the background task.
By combining all the information, h_t encodes the spatio-temporal context which is used to determine which instruction to solve and the next subtask.

5.1.2 SUBTASK UPDATER

The meta controller has a subtask updater that constructs a memory structure from the list of instructions, retrieves an instruction by maintaining a pointer into the memory structure, and computes the subtask arguments.

Instruction Memory. Given instructions as a list of sentences M = (m_1, m_2, ..., m_K), where each sentence consists of a list of words, m_i = (w_1, ..., w_{|m_i|}), the subtask updater constructs memory blocks M ∈ R^{E×K}, where each column is an E-dimensional embedding of a sentence. The subtask module maintains a memory pointer defined over memory locations, p_t ∈ R^K, which is used for instruction retrieval. Memory construction and retrieval are formally described as:

Memory: M = [φ_w(m_1), φ_w(m_2), ..., φ_w(m_K)]    Retrieval: r_t = M p_t.

Here φ_w(m_i) ∈ R^E is the embedding of the i-th sentence (e.g., bag-of-words). The memory pointer p_t is a non-negative vector which sums up to 1. r_t ∈ R^E is the retrieved sentence embedding which is used for computing the subtask arguments. Intuitively, if the memory pointer is a one-hot vector, r_t indicates a single instruction from the whole list of instructions. The meta controller should learn to manage p_t so that it can focus on the correct instruction at each time-step, which is further described below.

Location-based Memory Addressing. Since instructions should be executed sequentially, we use a location-based memory addressing mechanism (Zaremba and Sutskever, 2015; Graves et al., 2014) to manage the memory pointer. Specifically, the subtask updater shifts the memory pointer by [−1, 1] as:

p_t = l_t ∗ p_{t−1}, where l_t ∼ Softmax(φ_shift(h_t))  (7)

where ∗ is a convolution operator, and φ_shift is a multi-layer perceptron (MLP). l_t ∈ R³ is an internal action that shifts the memory pointer (p_t) by either −1, 0, or +1; a sketch of this shift operation is given below. This mechanism is illustrated in Figure 9b.

Subtask Arguments. The subtask updater takes the context (h_t), updates the memory pointer (p_t), retrieves a sentence embedding (r_t), and finally computes the subtask arguments as follows:

π(g_t | h_t, r_t) = ∏_i π(g_t(i) | h_t, r_t), where π(g_t(i) | h_t, r_t) ∝ exp(φ_goal_i(h_t, r_t))

where φ_goal_i is an MLP for the i-th subtask argument.

5.2 DIFFERENTIABLE TEMPORAL ABSTRACTIONS

Although the subtask updater can update the memory pointer and compute correct subtask arguments in principle, making a decision at every time-step can be inefficient because subtasks do not change very frequently. Instead, having temporally-extended actions can be useful for dealing with delayed reward by operating at a larger time-scale (Sutton et al., 1999). Although one could use the termination signal of the subtask controller to define the temporal scale of the meta controller, this approach would result in an open-loop policy that is not able to interrupt ongoing subtasks, which is necessary to deal with stochastic events.

To address this challenge, we introduce an internal binary action c_t which decides whether to update the subtask updater or not. This action is defined as: c_t ∼ φ_update(h_t). If c_t = 1, the subtask updater updates the memory pointer, retrieves an instruction, and updates the subtask arguments. Otherwise, the meta controller continues communicating the current subtask arguments without involving the subtask updater.

Algorithm 1 Subtask update (Hard)
Input: h_t, p_{t−1}, r_{t−1}, g_{t−1}
Output: p_t, r_t, g_t
c_t ∼ φ_update(h_t)
if c_t = 1 then  ▷ Update
  l_t ∼ Softmax(φ_shift(h_t))
  p_t ← l_t ∗ p_{t−1}  ▷ Shift
  r_t ← M p_t  ▷ Retrieve
  g_t ∼ π(g_t | h_t, r_t)  ▷ Subtask
else
  p_t ← p_{t−1}; r_t ← r_{t−1}; g_t ← g_{t−1}
end if
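For concreteness, a minimal NumPy sketch of one hard subtask-update step, combining the pointer shift of Eq. (7) with Algorithm 1 (our own illustration; phi_update, phi_shift, and pi_goal are stand-ins for the learned MLPs):

import numpy as np

def hard_subtask_update(h, p_prev, r_prev, g_prev, M,
                        phi_update, phi_shift, pi_goal, rng):
    """One step of Algorithm 1 (hard update).

    M is the (E, K) instruction memory; p_prev is the pointer over the K
    instructions. np.roll wraps at the boundaries; a real implementation
    would clip the shift at the first/last instruction instead.
    """
    c = rng.binomial(1, phi_update(h))          # internal update decision c_t
    if not c:
        return p_prev, r_prev, g_prev           # keep communicating g_{t-1}
    l = phi_shift(h)                            # probabilities for shifts -1/0/+1
    shift = rng.choice([-1, 0, 1], p=l)         # sample the internal shift action
    p = np.roll(p_prev, shift)                  # location-based addressing, Eq. (7)
    r = M @ p                                   # retrieve the focused instruction
    g = [pi_goal(i, h, r) for i in range(len(g_prev))]  # one output per argument
    return p, r, g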
During training of the update decision, we use L1 regularization on the probability of update to penalize frequent updates, as in Vezhnevets et al. (2016). The entire scheme is described in Algorithm 1.

However, the update decision introduces a non-differentiable variable which is known to be difficult to optimize in practice. Thus, we propose a differentiable relaxation of the update decision. The key idea is to take the weighted sum of both 'update' and 'no update' scenarios. This idea is described in Algorithm 2 and sketched in code below. We found that training the meta controller using Algorithm 2 followed by fine-tuning using Algorithm 1 is crucial for training the meta controller. Note that Algorithm 2 reduces to Algorithm 1 if we sample c_t and l_t instead of taking the weighted sum, which justifies our initialization trick.

Algorithm 2 Subtask update (Soft)
Input: h_t, p_{t−1}, r_{t−1}, g_{t−1}
Output: p_t, r_t, g_t
c_t ← φ_update(h_t)
l_t ← Softmax(φ_shift(h_t))
p̃_t ← l_t ∗ p_{t−1}
r̃_t ← M p̃_t
p_t ← c_t p̃_t + (1 − c_t) p_{t−1}
r_t ← c_t r̃_t + (1 − c_t) r_{t−1}
g_t(i) ← c_t π(g_t(i) | h_t, r̃_t) + (1 − c_t) g_{t−1}(i)  ∀i
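For comparison with the hard step above, a matching NumPy sketch of the soft step in Algorithm 2, where the non-differentiable branch is replaced by a weighted sum (again our own illustration under the same stand-in modules):

import numpy as np

def soft_subtask_update(h, p_prev, r_prev, g_prev, M,
                        phi_update, phi_shift, pi_goal):
    """One step of the soft (differentiable) subtask update, Algorithm 2."""
    c = phi_update(h)                       # update probability in [0, 1]
    l = phi_shift(h)                        # shift distribution over {-1, 0, +1}
    # Candidate pointer: convolve the pointer with the 3-way shift kernel.
    # np.roll wraps at the boundaries; real code would clip instead.
    p_tilde = (l[0] * np.roll(p_prev, -1)
               + l[1] * p_prev
               + l[2] * np.roll(p_prev, +1))
    r_tilde = M @ p_tilde                   # candidate retrieval
    # Weighted sum of the "update" and "no update" branches keeps the step
    # differentiable in c, unlike the sampled branch in Algorithm 1.
    p = c * p_tilde + (1 - c) * p_prev
    r = c * r_tilde + (1 - c) * r_prev
    g = [c * pi_goal(i, h, r_tilde) + (1 - c) * g_prev[i]
         for i in range(len(g_prev))]
    return p, r, g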
5.3 TRAINING

The meta controller is trained on a training set of lists of instructions. The actor-critic method is used to update the parameters of the meta controller, while a pre-trained subtask controller is given and fixed. Since the meta controller also learns a subtask embedding φ(g_{t−1}) and has to deal with unseen subtasks during evaluation, we applied analogy-making regularization to its embedding. More details of the objective functions are provided in Appendix E.

6 EXPERIMENTS AND RESULTS

Our experiments were designed to explore the following hypotheses: that our proposed hierarchical architecture will generalize better than a non-hierarchical controller, and that analogy-making regularization and learning temporal abstractions in the meta controller will each separately be beneficial for task generalization. We are also interested in understanding the qualitative properties of our agent's behavior. The demo videos are available at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/task-generalization.

6.1 EXPERIMENTAL SETTING

Environment. We developed a 2D grid world based on MazeBase (Sukhbaatar et al., 2015) where the agent can interact with many objects, as illustrated in Figure 1. Unlike the original MazeBase, an observation is represented as a binary 3D tensor: x_t ∈ R^{18×10×10}, where 18 is the number of object types and 10×10 is the size of the grid world. Each channel is a binary mask indicating the presence of each object type. There are agent, blocks, water, and 15 types of objects with which the agent can interact (see Appendix D), and all of them are randomly placed for each episode.

The agent has 13 primitive actions: No-operation, Move (North/South/West/East, referred to as "NSWE"), Pick up (NSWE), and Transform (NSWE). Move actions move the agent by one cell in the specified direction. Pick up actions remove the adjacent object in the corresponding relative position, and depending on the object type Transform actions either remove it or transform it to another object.

The agent receives a time penalty (−0.1) for each time-step. Water cells act as obstacles which give a −0.3 reward when the agent visits them. The agent receives a +1 reward when it finishes all instructions in the correct order. Throughout the episode, an enemy randomly appears, moves, and disappears after 10 steps. Transforming an enemy gives a +0.9 reward. More details are described in Appendix D.

Subtasks and Instructions. The subtask space is defined as the Cartesian product of two arguments: G = {Visit, Pick up, Transform} × {X_1, X_2, ..., X_15} where X_i is an object type. The agent should be on the same cell as the target object to finish the 'Visit' task. For 'Pick up' and 'Transform' tasks, the agent should perform the corresponding primitive action on the target object. If there are multiple target objects in the world, the agent can perform the action on any of the target objects. The instructions are represented as a sequence of sentences, each of which is one of the following: Visit X, Pick up X, Transform X, Pick up all X, and Transform all X, where 'X' is the target object type. While the first three instructions require the agent to perform the corresponding subtask, the last two instructions require the agent to repeat the same subtask until the target objects completely disappear from the world.

Task Split. Among the 45 subtasks in G, only 30 subtasks are presented to the subtask controller during training. 3 subtasks from the training subtasks and 3 subtasks from the unseen subtasks were selected as the validation set to pick the best-performing subtask controller. For training the meta controller, we created four sets of sequences of instructions: training, validation, and two test sets. The training tasks consist of sequences of up to 4 instructions sampled from the set of training instructions. The validation set consists of sequences of 7 instructions with small overlaps with the training instructions and unseen instructions. The two test sets consist of 20 seen and unseen instructions respectively. More details of the task split are described in Appendix D.

                 Train                            Unseen
Agent            Reward   Success   Accuracy     Reward   Success   Accuracy
w/o Analogy       0.56     99.9%    100.0%       -1.88     60.8%     49.6%
w/ Analogy        0.56     99.9%    100.0%        0.55     99.8%     99.6%

Table 1: Performance of subtask controller. 'Analogy' indicates analogy-making regularization. 'Accuracy' represents termination prediction accuracy. We assume a termination prediction is correct only if predictions are correct throughout the whole episode.

Flat Controller. To understand the advantage of using the communicating hierarchical structure of our controllers, we trained a flat controller which is almost identical to the meta controller architecture except that it directly chooses primitive actions without using the subtask controller. Details of the flat controller architecture are described in Appendix F. The flat controller is pre-trained on the training set of subtasks. To be specific, we removed the instruction memory and fed a single instruction as an additional input (i.e., r_t is fixed throughout the episode). We found that the flat controller could not learn any reasonable policy without this pre-training step, which requires modification of the architecture based on domain knowledge. After pre-training, we fine-tuned the flat controller with the instruction memory on lists of instructions. Note that the flat controller is also capable of executing instructions as well as dealing with random events in principle.

6.2 TRAINING DETAILS

The subtask controller consists of 3 convolution layers and 2 fully-connected layers and takes the last 2 observations concatenated through channels as input. Each subtask argument (g(i)) is linearly transformed and multiplied with each other to compute the joint subtask embedding. This is further linearly transformed into the weight of the first convolution layer and the weight of the first fully-connected layer.
The meta controller takes the current observation as input and has 2 convolution layers and 2 fully-connected layers, where the parameters of the first convolution layer and the first fully-connected layer are predicted by the joint embedding of r_{t−1}, φ(g_{t−1}), and b_t.

We implemented synchronous actor-critic with 16 CPU threads based on MazeBase (Sukhbaatar et al., 2015), each of which samples a mini-batch of episodes (K) in parallel. The parameters are updated after every 16×K episodes. The details of architectures and hyperparameters are described in Appendix F.

Curriculum Learning via a Forgiving World. We conducted curriculum training by changing the size of the grid world, the density of objects, and the number of instructions according to the agent's success rate. In addition, we trained the soft-architectures in an easier, forgiving environment which generates target objects whenever they do not exist. Crucially, this allows the agent to recover from past mistakes in which it removed needed target objects. The soft-architectures are fine-tuned on the original (and far more unforgiving) environment, which does not regenerate target objects in the middle of the episode. Training directly in the original environment without first training in the forgiving environment leads to too much failure at executing the task, and the agent does not learn successfully. Finally, the hard-architectures are initialized by the soft-architectures and further fine-tuned on the original environment.

6.3 EVALUATION OF SUBTASK CONTROLLER

To see how well the subtask controller performs separately from the meta controller, we evaluated it on the training set of subtasks and unseen subtasks in Table 1. It is shown that analogy-making regularization is crucial for generalization to unseen subtasks. This result suggests that analogy-making regularization plays an important role in learning the relationship between different subtasks and enabling generalization to unseen subtasks.

In addition, we observed that the subtask controller learned a non-trivial policy by exploiting causal relationships. For example, when [Pick up, egg] is given as the subtask arguments, but a duck is very close to the agent, it learned to transform the duck and pick up the resulting egg, because transforming the duck turns it into an egg in our environment. More analysis of the subtask controller and the effect of analogy-making regularization is discussed in Appendices A and B.

                            Train           Test #1          Test #2          Test #3          Test #4
Set of instructions         Seen            Seen             Unseen           Seen w/o all     Unseen w/o all
Num of instructions         4               20               20               20               20
Forgiving:
  Shortest Path             -1.56 (99.6%)   -11.94 (99.1%)                    -9.62 (99.1%)
  Near-Optimal              -0.96 (99.6%)   -9.99 (99.1%)                     -8.19 (99.1%)
  Flat                      -1.64 (85.8%)   -14.53 (65.9%)   -17.25 (23.7%)   -12.38 (60.4%)   -14.18 (16.7%)
  Hierarchical-TA-Analogy   -1.05 (92.4%)   -11.06 (86.2%)   -13.69 (51.2%)   -8.54 (91.9%)    -9.91 (75.2%)
Original:
  Shortest Path             -1.62 (99.7%)   -11.94 (99.4%)                    -8.72 (99.6%)
  Near-Optimal              -1.34 (99.5%)   -10.30 (99.3%)                    -7.62 (99.4%)
  Flat                      -2.38 (76.0%)   -18.83 (0.1%)    -18.92 (0.0%)    -15.09 (0.0%)    -15.17 (0.0%)
  Hierarchical              -2.04 (72.8%)   -16.85 (16.6%)   -17.66 (6.9%)    -10.99 (49.4%)   -11.40 (47.4%)
  Hierarchical-Analogy      -1.74 (81.0%)   -15.89 (28.0%)   -17.23 (11.3%)   -10.11 (61.8%)   -10.66 (57.7%)
  Hierarchical-TA           -1.38 (92.6%)   -12.96 (62.9%)   -17.19 (13.0%)   -9.11 (74.4%)    -10.37 (61.2%)
  Hierarchical-TA-Analogy   -1.26 (95.5%)   -11.30 (81.3%)   -14.75 (40.3%)   -8.24 (85.5%)    -9.51 (73.9%)

Table 2: Performance of meta controller. Each column corresponds to different evaluation sets of instructions, while each row corresponds to different configurations of our architecture and the flat controller.
Test #3 and Test #4 do not include 'Transform/Pick up all X' instructions. 'TA' indicates the meta controller with temporal abstraction. Each entry in the table represents reward with success rate in parentheses, averaged over the 10 best runs among 20 independent runs. 'Shortest Path' is a hand-designed policy which executes instructions optimally based on the shortest path but ignores enemies. 'Near-Optimal' is a near-optimal policy that executes instructions based on the shortest path and transforms enemies when they are close to the agent. 'Forgiving' rows show the result from the forgiving environment used for curriculum learning, where target objects are regenerated whenever they do not exist in the world.

[Figure 4 panels: reward, success rate, number of steps, and number of instructions completed, each plotted against the number of instructions (5-20) for the Shortest-Path heuristic and the Flat, Hierarchy, Hierarchy-Analogy, Hierarchy-TA, and Hierarchy-TA-Analogy agents; axis data omitted.]

Figure 4: Performance per number of instructions. From left to right, the plots show reward, success rate, the number of steps, and the average number of instructions completed respectively. Solid and dashed curves show the performances on seen instructions and unseen instructions respectively.

6.4 EVALUATION OF META CONTROLLER

We evaluated the meta controller separately from the subtask controller by providing the best-performing subtask controller during training and evaluation. The results are summarized in Table 2 and Figure 4. Note that there is a discrepancy between reward and success rate, because success rate is measured only based on the instruction execution, while reward takes into account the background task (i.e., handling the randomly appearing enemy) as well as the instruction execution.

Overall performance. Table 2 shows that our hierarchical agent with temporal abstraction and analogy-making regularization, denoted Hierarchical-TA-Analogy in the table, can handle 20 seen instructions (Test #1) and 20 unseen instructions (Test #2) correctly with reasonably high success rates. In addition, that agent learned to deal with enemies whenever they appear, and thus it outperforms the 'Shortest Path' policy, which is near-optimal in executing instructions while ignoring enemies. We further investigated how the number of instructions affects the performance in Figure 4. Although the performance degrades as the number of instructions increases, our architecture finishes 18 out of 20 seen instructions and 12 out of 20 unseen instructions on average. These results show that our agent is able to generalize to longer compositions of instructions as well as unseen instructions by just learning to solve short sequences of a subset of instructions.

Flat vs. Hierarchy. All our hierarchical controllers outperform the flat controller both on the training tasks and on longer/unseen instructions (see Table 2). We observed that the flat controller learned a sub-optimal policy which assumes that 'Transform/Pick up X' instructions are identical to 'Transform/Pick up all X' instructions. In other words, it always transforms or picks up all existing targets.
Although this simple strategy is a reasonable sub-optimal policy, because such wrong actions are not explicitly penalized in our environment other than through the accumulating penalty per time-step, it often unnecessarily removes objects that can be potential target objects for future instructions. This is why the flat controller performs reasonably well on the short sequences of instructions (training), where such cases are rare, and in the forgiving environment, where target objects are restored whenever needed. But it completely fails on longer instructions in the original environment, because the entire task becomes unsolvable when target objects are removed in error. This implies that the flat controller struggles with detecting precisely when a subtask is finished, whereas our hierarchical controllers can easily detect when a subtask is done, because the subtask controller in our communicating architecture provides a termination signal to the meta controller.

Figure 5: Analysis of the learned policy. 'Update' shows our agent's internal update decision. 'Shift' shows our agent's memory-shift decision, which is either -1, 0, or +1 from top to bottom. The bottom text shows the instruction indicated by the memory pointer, while the top text shows the subtask chosen by the meta controller. (A) The agent transforms the pig given the 'Transform Pig' instruction and decides to update the subtask (Update is true) and move to the next instruction. (B) An enemy (red) appears while the agent is executing the 'Pick up all meat' instruction (green boxes for meat). The agent changes the subtask to [Transform, enemy]. (C) The agent successfully transforms the enemy and sets the subtask to [Pick up, meat] to resume executing the instruction. (D) The agent picks up the last meat in the world, moves the memory pointer to the next instruction, and sets a new subtask according to the next instruction.

In addition, the flat controller tends to ignore enemies, while the hierarchical controllers try to deal with enemies whenever they exist by changing the subtask-arguments communicated by the meta controller to the subtask controller, which is a better strategy to maximize the reward. The flat controller instead has to use primitive actions to deal with both instructions and enemies. This implies that our communicating hierarchical controllers have more advantages for context switching between different sources of tasks (i.e., executing instructions and dealing with enemies).

Finally, we observed that the flat controller often makes many mistakes on unseen instructions (e.g., transform X given 'Visit X' as the instruction). In contrast, the hierarchical controllers do not make such mistakes, as the subtask controller generalizes well to unseen instructions as discussed in Section 6.3.

Effect of Analogy-making. Table 2 shows that analogy-making significantly improves generalization performance, especially on Test #2 (Hierarchical-Analogy outperforms Hierarchical, and Hierarchical-TA-Analogy outperforms Hierarchical-TA). This implies that given an unseen target object for the 'Transform/Pick up all' instruction, the meta controller without analogy-making tends to fail to check whether the target object exists or not. On the other hand, there is almost no improvement from analogy-making on Test #3 and Test #4, where there are no 'all' instructions.
This is because the meta controller can simply rely on the subtask termination (b_t) given by the subtask controller to check whether the current instruction is finished for non-'all' instructions, and the subtask controller (trained with analogy-making) successfully generalizes to unseen subtasks and provides accurate termination signals to the meta controller. The empirical results showing that analogy-making consistently improves generalization performance over both non-analogy-making controllers suggest that analogy-making is crucial for generalization to unseen tasks.

Effect of Temporal Abstraction. To see the effect of temporal abstractions, we trained a baseline that updates the memory pointer and the subtask at every time-step, which is shown as 'Hierarchical' and 'Hierarchical-Analogy' in Table 2. It turns out that the agent without temporal abstractions performs much worse both on the training tasks and on the testing tasks. We hypothesize that temporal credit assignment becomes easier with temporal abstractions because the subtask updater (described in Section 5.1.2) can operate at a larger time-scale by decoupling the update decision from the subtask selection. In particular, given 'all' instructions, the agent should repeat the same subtask while not changing the memory pointer for a long time, and the reward is even more delayed. This can possibly confuse the subtask updater without temporal abstractions because it should make the same decision for the entire time-steps of such instructions. In contrast, the subtask updater with temporal abstractions can get direct feedback from the long-term future, since one decision made by the subtask updater results in multiple primitive actions. We conjecture that this is why the agents learn more stably with temporal abstractions under delayed reward.

Analysis of the Learned Policy. We visualized our agent's behavior on a task with a long list of instructions in Figure 5. We observed that our meta controller learned to communicate the correct subtask-arguments to the subtask controller and learned to move precisely to the next instruction by shifting the memory pointer whenever the instruction is finished.
More interestingly, whenever an enemy appears, our meta controller immediately changes the subtask to [Transform, enemy] regardless of the instruction, and resumes executing the instruction after dealing with the enemy. Throughout the background task and the 'all' instructions, the meta controller keeps the memory pointer unchanged, as illustrated in (B-D) of the figure. In addition, the agent learned to update the memory pointer and the subtask-arguments almost only when needed, which provides the subtask updater with temporally-extended actions. This is not only computationally efficient but also useful for learning a better policy, as discussed above.

6.5 EVALUATION IN 3D VISUAL ENVIRONMENT

We developed a similar set of tasks in a Minecraft environment based on Oh et al. (2016), as shown in Figure 6. In this environment, the agent can observe only first-person-view images, which naturally involves partial observability. Even executing a simple instruction (e.g., Visit X) requires the agent to explore the topology to find the target.

An observation is represented as a 64x64 RGB image ($x_t \in \mathbb{R}^{3 \times 64 \times 64}$). There are 7 different types of colored blocks: red, blue, green, yellow, brown, purple, and black, which correspond to the different types of objects in the grid-world experiment. As in the 2D grid-world environment, the topology of walls and the colored blocks is randomly generated for every episode. A wall not only acts as an obstacle but also occludes the objects behind it, as shown in Figure 6, which makes the task more challenging.

The agent has 9 actions: Look (left/right/up/down), Move (forward/backward), Pick up, Transform, and No operation. Look left/right actions change the yaw of the agent by 90 degrees, while Look up/down actions change the pitch by 45 degrees. Move forward/backward actions move the agent by one block according to the agent's looking direction. Pick up removes the block in front of the agent, and Transform changes the block in front of the agent into a black block.

We used the same reward function as in the 2D grid-world experiment. In addition, a green block randomly appears, and transforming a green block gives a +0.9 reward regardless of the instructions, which acts as a stochastic event. Each instruction is one of the following: Visit X, Pick up X, and Transform X, where 'X' is the target color. We excluded 'all' instructions in this environment because we found that solving 'all' instructions within a limited amount of time is extremely challenging even for humans, due to the partial observability.

We used almost the same architectures as in the 2D grid-world experiment, except that a long short-term memory (Hochreiter and Schmidhuber, 1997) is added on top of the final convolution layer in both the subtask controller and the meta controller, as this is one of the simplest ways to deal with partial observability (Hausknecht and Stone, 2015; Mnih et al., 2016; Oh et al., 2016). We followed the same training scheme as in the 2D grid-world experiment.

Table 3 shows that our proposed architecture significantly outperforms the flat controller baseline, especially on the test sets of instructions. We observed that the flat controller tends to struggle with detecting when an instruction is finished and completely fails on unseen instructions, while our architecture performs well on unseen and longer instructions.
As shown in Figure 6, our architecture learned to find the target blocks, detect when an instruction is finished, and deal with the stochastic event. This result demonstrates that the proposed approach can also be applied to a more complex visual environment.

7 CONCLUSION

In this paper, we explored zero-shot task generalization in RL with a new problem where the agent is required to execute a sequence of instructions and to generalize over longer sequences of (unseen) instructions without additional learning. To solve the problem, we presented a hierarchical deep RL architecture in which a meta controller learns a closed-loop policy of subtask-argument communications to a subtask controller, which executes the given subtask and communicates its accomplishment back to the meta controller. Our architecture not only generalizes to unseen tasks after training but also deals with random events relevant to a background task. In addition, we proposed several techniques that led to improvements in both training and generalization performance. First, analogy-making regularization turned out to be crucial for generalization to unseen subtasks. Second, learning temporal abstractions improved performance by making the subtask updater operate at a larger time-scale. One interesting line of future work would be to define and solve richer task instructions, such as conditional statements (i.e., IF-THEN-ELSE) and loop instructions (e.g., collect 3 target objects). Moreover, end-to-end training of the whole hierarchy and discovering the subtask decomposition would be important future work.
SJF9aBYBl
Potentially good architecture; insufficient evaluation for "large-scale" tasks, no comparison to other state-of-the-art methods
4: Ok but not good enough - rejection
Description: This paper presents a reinforcement learning architecture where, based on "natural-language" input, a meta-controller chooses subtasks and communicates them to a subtask controller that chooses primitive actions based on the communicated subtask. The goal is to scale up reinforcement learning agents to large-scale tasks. The subtask controller embeds the subtask definition (arguments) into vectors by a multi-layer perceptron, including an "analogy-making" regularization. The subtask vectors are combined with inputs at each layer of a CNN. CNN outputs (given the observation and the subtask) are then fed to one of two MLPs: one to compute action probabilities in the policy (exponential falloff of MLP outputs) and the other to compute the termination probability (sigmoid of MLP outputs). The meta controller takes a list of sentences as instructions and embeds them into a sequence of subtask arguments (not necessarily a one-to-one mapping). A context vector is computed by a CNN from the observation, the previous sentence embedding, the previous subtask, and its completion state. The subtask arguments are computed from the context vector through further mechanisms involving instruction retrieval from memory pointers and hard/soft decisions on whether to update the subtask. Training involves policy distillation plus actor-critic training for the subtask controller, and actor-critic training for the meta controller while keeping the subtask controller frozen. The system is tested in a grid world where the agent moves and interacts with (picks up/transforms) various item/enemy types. It is compared to a) a flat controller not using a subtask controller, and b) subtask control by mere concatenation of the subtask embedding to the input, with/without the analogy-making regularization. Evaluation: The proposed architecture seems reasonable, although it is not clear why the specific way of combining subtask embeddings in the subtask controller would be the "right" way to do it. I do not feel the grid world here really represents a "large-scale task": in particular, the 10x10 size of the grid is very small. This is disappointing since this was a main motivation of the work. Moreover, the method is not compared to any state-of-the-art alternatives. This is especially problematic because the test is not on established benchmarks. It is not really possible, based on the shown results, to put the performance in the context of other work.
3: The reviewer is fairly confident that the evaluation is correct
SJttqw5ge
ICLR.cc/2017/conference
2017
Communicating Hierarchical Neural Controllers for Learning Zero-shot Task Generalization
["Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli"]
The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.
["Reinforcement Learning", "Deep learning"]
ABSTRACT

The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.

1 INTRODUCTION

Humans can often generalize to novel tasks even without any additional learning by leveraging past learning experience. We would like our artificial agents to have similar "zero-shot" generalization capabilities. For example, after learning to solve tasks with instructions such as 'Go to X (or Y)' and 'Pick up Y (or Z)', our agent should be able to infer the underlying goal of new tasks with instructions like 'Go to Z', which requires disentangling the verbs ('Go to/Pick up') and the nouns/objects ('X, Y, Z'). Furthermore, we would like our agents to learn to compose policies to solve novel tasks composed of sequences of seen and unseen instructions. Developing the ability to achieve such generalizations is a key challenge in artificial intelligence and the subfield of reinforcement learning (RL).

Figure 1: Example of grid world and instructions. The agent is tasked to execute longer sequences of instructions after being trained on short sequences of instructions; in addition, previously unseen instructions can be given during evaluation (blue text). The agent can get more reward if it deals with randomly appearing enemies (red outlined box) regardless of the current instructions.

In this paper, we study the problem of zero-shot task generalization in RL by introducing the "instruction execution" problem, where the agent is required to learn, through interaction with its environment, how to achieve an overall task specified by a list of high-level instructions (see Figure 1). As motivation for this problem, consider a human owner training its new household robot to execute complex tasks specified by natural language text that decomposes the task into a sequence of instructions. Given that it is infeasible to explicitly train the robot on all possible instruction-sequences, this problem involves two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where some of the instructions themselves were previously not seen. Of course, the usual RL problem of learning policies through interaction to accomplish the goals of an instruction remains part of the problem as well.
We assume that the agent does not receive any signal from the environment/owner on completing or failing to complete individual instructions, so the informative reward signal is delayed until the end. Furthermore, there can be random events in the environment that require the agent to interrupt whatever it is doing and deviate from the instructions to maintain some background task, as described in Figure 1. Altogether, this makes for a challenging zero-shot task generalization RL problem.

Brief Background: RL tasks composed of sequences of subtasks have been studied before, and a number of hierarchical RL approaches have been designed for them. Typically these have the form of a meta controller and a set of lower-level controllers for subtasks (Sutton et al., 1999; Dietterich, 2000; Parr and Russell, 1997). The meta controller is limited to selecting one from a set of lower-level controllers to employ at any time. This makes it impossible for the low-level controller to generalize to new subtasks without training a new low-level controller separately. Much of the previous work also assumes that the overall task is fixed (e.g., the Taxi domain (Dietterich, 2000; Ghavamzadeh and Mahadevan, 2003)). Transfer learning across multiple compositional tasks has typically been studied in RL formulations in which new tasks are only presented via a new reward function from the environment (Singh, 1991; 1992), so there is no opportunity for fast model-free generalization. To the best of our knowledge, zero-shot model-free generalization to new or longer tasks as well as unseen tasks has not been well-studied in the RL setting.

Our Architecture: This paper presents a hierarchical deep RL architecture (see Figure 2) that consists of two interacting neural controllers: a meta controller that repeatedly chooses an instruction and, conditioned on the current state of the environment, translates it into subtask-arguments (details on this in later sections) and communicates those to the subtask controller, which in turn chooses primitive actions given the subtask. This makes the subtask controller a parameterized option (Sutton et al., 1999) module in which the parameters are the subtask-arguments mentioned above. On top of the subtask controller, the meta controller is trained to select proper subtask-arguments depending on observations from the environment, feedback from the subtask controller about termination, and the task instructions. In order to generalize over unseen instructions, we propose analogy-making regularization (discussed in Section 4.1), which encourages learning subtask embeddings that capture correspondences between similar subtasks. In addition, we propose a new differentiable neural architecture in the meta controller that implicitly learns temporal abstractions so that it can operate at a larger time-scale and update the subtask-arguments to the subtask controller only when needed.

Our Results: We developed a 2D grid-world environment where the agent can interact with many objects, as illustrated in Figure 1, based on MazeBase (Sukhbaatar et al., 2015) (see Section 6.1 for details). The empirical results show that the meta controller's ability to learn temporal abstractions and a form of analogy-making regularization were both key in allowing our hierarchical architecture to generalize in a zero-shot fashion to unseen tasks.
We also demonstrated that the same architecture can generalize to unseen and longer instructions in a 3D visual environment.

2 RELATED WORK

Hierarchical Reinforcement Learning. In addition to the hierarchical RL described in Section 1, there is a line of work on portable options for solving sequential tasks (Konidaris et al., 2012; Konidaris and Barto, 2007). They proposed agent-space options that can be re-used to deal with new problems. However, the optimal sequence of options (e.g., picking up a key followed by opening a door) is fixed throughout training and evaluation in their problem. In contrast, in our work the agent is required to perform new sequences of tasks depending on the given instructions. Our work is also closely related to Programmable HAMs (PHAMs) (Andre and Russell, 2000; 2002) in that a PHAM is designed to execute a given program. However, in PHAMs the program explicitly specifies the policy, which effectively reduces the state-action space. In contrast, in our work a list of instructions is a partial description of the task, which means that the policy is not forced to follow the instructions but uses them as a guide to maximize its reward. For example, interrupt conditions need to be manually specified by the program in PHAMs, while in our framework they are not specified in the instructions but must be learned by the agent.

Hierarchical RL has recently been combined with deep learning. Kulkarni et al. (2016) proposed hierarchical Deep Q-Learning and demonstrated improved exploration in a challenging Atari game. Tessler et al. (2016) proposed a similar architecture that allows the high-level controller to choose primitive actions directly. Bacon and Precup (2015) proposed the option-critic architecture, which learns options without any domain knowledge, and demonstrated that it can learn distinct options in Atari games. Vezhnevets et al. (2016) proposed a deep architecture that automatically learns macro-actions. Unlike these recent works, which aim to solve a single task, the goal of our work is to build a multi-task policy that can generalize over many different sequences of tasks.

Zero-shot Task Generalization and Parameterized Options. There have been only a few studies that aim to generalize over new tasks in a zero-shot fashion (i.e., without additional learning). da Silva et al. (2012) proposed the concept of a parameterized skill, which maps a set of task descriptions to policies. Similarly, Isele et al. (2016) proposed a method for zero-shot task generalization that uses task descriptors to predict the parameters of the policy, together with coupled dictionary learning with sparsity constraints to enable zero-shot learning. Schaul et al. (2015) proposed universal value function approximators (UVFAs) that learn a value function given a state and goal pair and showed that their framework can generalize over unseen goals. Borsa et al. (2016) proposed to learn a representation of states and actions shared across different tasks; however, their approach lacks the ability to solve new tasks in a zero-shot way. Our subtask controller implements the idea of a parameterized skill or universal option. Unlike the previous works, however, we propose to build a high-level controller (meta controller) on top of the subtask controller to deal with sequential tasks.

Instruction Execution. There has been a line of work on building agents that can execute natural language instructions: Tellex et al. (2011; 2014) for robotics and MacMahon et al.
(2006); Chen and Mooney (2011); Mei et al. (2015) for simulated environments. However, these approaches focus on natural language understanding to map instructions to a sequence of actions or groundings in a supervised setting. In contrast, we focus on generalization to different sequences of instructions without any supervision for language understanding or for actions. Branavan et al. (2009) also tackle a similar problem of mapping natural language instructions to a sequence of actions through RL. However, their agent is given a single sentence at a time by the environment, while our agent has to deal with a full list of instructions. In addition, they do not discuss how to deal with unseen instructions, which is the main focus of our paper.

3 OVERVIEW

Figure 2: Overview of our architecture.

Goal. We aim to learn a multi-task policy, which is a mapping $\pi : \mathcal{S} \times \mathcal{M} \rightarrow \mathcal{A}$, where $\mathcal{S}$ is a set of states (or observations), $\mathcal{M}$ is a set of lists of instructions, and $\mathcal{A}$ is a set of primitive actions. More importantly, since $\mathcal{M}$ can be arbitrarily large, our goal is to find a policy that is optimal on a very small set of lists of instructions $\mathcal{M}' \subset \mathcal{M}$ such that it is also optimal on the entire set of lists of instructions $\mathcal{M}$.

Hierarchical Structure and Communication Protocol. As illustrated in Figure 2, the proposed architecture consists of a meta controller, which selects a subtask, and a subtask controller, which executes the given subtask. The subtask is further decomposed into several arguments. More specifically, a space of subtasks $\mathcal{G}$ is defined using the Cartesian product of their arguments, $\mathcal{G}^{(1)} \times \dots \times \mathcal{G}^{(n)}$, where $\mathcal{G}^{(i)}$ is the set of $i$-th arguments (e.g., $\mathcal{G} = \{\text{Visit}, \text{Pick up}\} \times \{A, B\}$). In addition, the subtask controller provides useful information to the meta controller by giving a termination signal for the given subtask. This communication protocol allows each controller not only to focus on its own independent role but also to communicate with the other in order to learn a complex closed-loop policy.

Subtask Controller. The subtask controller is a mapping $\mathcal{S} \times \mathcal{G} \rightarrow \mathcal{A} \times \mathcal{B}$, which maps a state and a subtask to an action and a termination signal ($\mathcal{B} = \{0, 1\}$) indicating whether the subtask is finished. The subtask controller is trained prior to training the meta controller. The main challenge for the subtask controller is that only a subset of subtasks ($\mathcal{U} \subset \mathcal{G}$) is observed during training, and it should be able to generalize over unseen subtasks without experiencing them. Section 4 describes how to construct the subtask architecture parameterized by a neural network and discusses how to generalize over unseen subtasks.

Meta Controller. The meta controller is a mapping $\mathcal{S} \times \mathcal{M} \times \mathcal{G} \times \mathcal{B} \rightarrow \mathcal{G}$, which decides a subtask from a state, a list of instructions, the subtask that is currently being executed, and whether that subtask is finished. Thus, the meta controller should understand the natural language instructions and pass proper subtask-arguments to the subtask controller.

Figure 3: Proposed neural network architectures: (a) subtask controller; (b) meta controller. See text for details.

It is important to note that natural language instructions are not directly subtasks; indeed, there is not a one-to-one correspondence between instructions and subtask-arguments. This is due to a number of important reasons.
First, instructions such as 'Pick up all X' are executed by repeatedly solving the subtask [Pick up, X]. Second, the meta controller sometimes needs to interrupt ongoing subtasks and replace them with other subtasks that are not relevant to the instruction, because of the background task based on stochastic events, as described in Figure 1.

Another challenge for the meta controller is that it should deal with the partial observability induced by the list of instructions. This is because the agent is not told by the environment which instruction to execute at each time-step; it is given just the full list of instructions. Thus, the meta controller should remember how many instructions it has executed and decide when to move on to the next instruction. Section 5.1 describes how to construct a memory-based neural network to deal with this challenge.

Finally, it is desirable for the meta controller to operate at a larger time-scale, due to the fact that a subtask does not change frequently once it is chosen. We describe a novel way to implicitly learn such a temporal scale of the meta controller through neural networks in Section 5.2.

4 SUBTASK CONTROLLER

Given an observation $s_t \in \mathcal{S}$ and subtask arguments $g = [g^{(1)}, \dots, g^{(n)}] \in \mathcal{G}$, the subtask controller is defined by the following functions:

Policy: $\pi(a_t \mid s_t, g)$    Termination: $\beta(b_t \mid s_t, g) = P(s_t \in \mathcal{T}_g)$

where $\pi$ is the policy optimized for the subtask, and $\beta$ is a termination function giving the probability that the state is terminal for the given subtask; $\mathcal{T}_g$ is the set of terminal states. The subtask controller is parameterized by a neural network, as illustrated in Figure 3a. The network learns a representation of the subtask, $\varphi(g)$, which is used to condition the entire network through multiplicative interactions, as suggested by Memisevic and Hinton (2010); Lei Ba et al. (2015); Bertinetto et al. (2016). Further details are described in Appendix F.

4.1 LEARNING TO GENERALIZE BY ANALOGY-MAKING

When learning a non-linear subtask embedding from arguments, $\varphi(g)$, it is desirable for the network to learn prior knowledge about the relationship between different subtask arguments in order to infer the goal of unseen configurations of arguments. To this end, we propose a novel analogy-making regularizer inspired by Reed et al. (2015); Hadsell et al. (2006); Reed et al. (2014). The main idea is to learn correspondences between subtasks. For example, if target objects and 'Visit/Pick up' tasks are independent, we can enforce [Visit, X] : [Visit, Y] :: [Pick up, X] : [Pick up, Y] for any X and Y in the embedding space, so that the agent learns to perform [Pick up, Y] as it performs [Pick up, X], and vice versa.

More specifically, we define several constraints as follows:

$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \approx 0$ if $g_A : g_B :: g_C : g_D$   (1)
$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \geq \tau_{dis}$ if $g_A : g_B \neq g_C : g_D$   (2)
$\|\varphi(g_A) - \varphi(g_B)\| \geq \tau_{diff}$ if $g_A \neq g_B$   (3)

where $g_k = [g_k^{(1)}, g_k^{(2)}, \dots, g_k^{(n)}] \in \mathcal{G}$ are subtask arguments. Eq. (1) represents the analogy-making relationship, while Eq. (2) and Eq. (3) prevent trivial solutions. To satisfy the above constraints, we propose the following objective functions based on the contrastive loss (Hadsell et al., 2006):

$\mathcal{L}_{sim} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{sim}}\left[\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|^2\right]$   (4)
$\mathcal{L}_{dis} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{dis}}\left[\max\left(0, \tau_{dis} - \|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|\right)^2\right]$   (5)
$\mathcal{L}_{diff} = \mathbb{E}_{(g_A, g_B) \sim \mathcal{G}_{diff}}\left[\max\left(0, \tau_{diff} - \|\varphi(g_A) - \varphi(g_B)\|\right)^2\right]$   (6)

where $\mathcal{G}_{sim}$, $\mathcal{G}_{dis}$, and $\mathcal{G}_{diff}$ consist of subtask arguments satisfying the conditions in Eq. (1), Eq. (2), and Eq. (3) respectively, and $\tau_{dis}$, $\tau_{diff}$ are threshold distances (hyperparameters).
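To make Eqs. (4)-(6) concrete, the following is a minimal PyTorch sketch of the three contrastive analogy objectives. The embedding network `phi`, the way quadruplet/pair batches are sampled, and the margin values are placeholders for illustration, not taken from the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def analogy_losses(phi, sim_batch, dis_batch, diff_batch,
                   tau_dis=1.0, tau_diff=1.0):
    """Contrastive analogy-making objectives (Eqs. 4-6).

    phi        : embedding network mapping subtask arguments to vectors
    sim_batch  : (gA, gB, gC, gD) tensors with gA:gB :: gC:gD
    dis_batch  : (gA, gB, gC, gD) tensors with gA:gB != gC:gD
    diff_batch : (gA, gB) tensors with gA != gB
    tau_*      : margin hyperparameters (values here are assumptions)
    """
    eA, eB, eC, eD = (phi(g) for g in sim_batch)
    l_sim = (eA - eB - eC + eD).norm(dim=-1).pow(2).mean()            # Eq. (4)

    eA, eB, eC, eD = (phi(g) for g in dis_batch)
    dist = (eA - eB - eC + eD).norm(dim=-1)
    l_dis = F.relu(tau_dis - dist).pow(2).mean()                      # Eq. (5)

    eA, eB = (phi(g) for g in diff_batch)
    l_diff = F.relu(tau_diff - (eA - eB).norm(dim=-1)).pow(2).mean()  # Eq. (6)

    # The final regularizer is a weighted sum of these three terms.
    return l_sim, l_dis, l_diff
```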
The final analogy-making regularizer is the weighted sum of the above three objectives.

Analogies Under Non-independence. Although we use the analogy-making regularizer throughout the main experiment under the assumption that all configurations of subtask arguments are valid and independent from each other, the regularizer can also be used to inject prior knowledge so that the agent generalizes to unseen subtasks in a specific way. For example, if some objects should be handled in a different way given the same subtask, we can apply the analogy-making regularizer so that Eq. (1) is satisfied only between objects of the same type. This is further discussed in Appendix B.

4.2 TRAINING

The subtask controller is trained on a subset of subtasks ($\mathcal{U} \subset \mathcal{G}$) by directly providing subtask arguments. The policy of the subtask controller is trained through the actor-critic method (Konda and Tsitsiklis, 1999) with generalized advantage estimation (GAE) (Schulman et al., 2015). We also found that pre-training the subtask controller through policy distillation (Rusu et al., 2015; Parisotto et al., 2015) gives slightly better results. The idea of policy distillation is to train separate policies for each subtask and use them to provide action labels for training the subtask controller. Throughout training, the subtask controller is also made to predict whether the current state is terminal through a binary classification objective, and the analogy-making regularizer is applied to the subtask embedding separately. The full details of the learning objectives are described in Appendix E.1.

5 META CONTROLLER

The role of the meta controller is to decide subtask arguments $g_t \in \mathcal{G}$ from an observation $s_t \in \mathcal{S}$, a list of instructions $M \in \mathcal{M}$, the previously selected subtask $g_{t-1}$, and its termination signal ($b_t$) from the subtask controller. Section 5.1 describes the overall architecture of the meta controller for dealing with the partial observability induced by the list of instructions, as discussed in Section 3. Section 5.2 describes a novel way to learn the time-scale of the meta controller so that it can implicitly operate at a large time-scale.

5.1 ARCHITECTURE

In order to keep track of its progress on instruction execution, the meta controller maintains an internal state by computing a context vector (described in Section 5.1.1) and by focusing on one instruction at a time from the list of instructions $M$ (described in Section 5.1.2). The entire architecture is illustrated in Figure 3b, and further details are described in Appendix F.

5.1.1 CONTEXT

Given the sentence embedding $r_{t-1}$ retrieved at the previous time-step from the instructions (described in Section 5.1.2), the previously selected subtask $g_{t-1}$, and the subtask termination $b_t \sim \beta(b_t \mid s_t, g_{t-1})$, the meta controller computes the context vector ($h_t$) through a neural network $f$:

$h_t = f(s_t, r_{t-1}, g_{t-1}, b_t)$

Intuitively, $g_{t-1}$ and $b_t$ provide information about which subtask was being solved by the subtask controller and whether it has finished. Note that the subtask does not necessarily match the retrieved instruction ($r_{t-1}$), e.g., when the agent is dealing with the background task.
By combining all this information, $h_t$ encodes the spatio-temporal context, which is used to determine which instruction to solve and the next subtask.

5.1.2 SUBTASK UPDATER

The meta controller has a subtask updater that constructs a memory structure from the list of instructions, retrieves an instruction by maintaining a pointer into the memory structure, and computes the subtask arguments.

Instruction Memory. Given instructions as a list of sentences $M = (m_1, m_2, \dots, m_K)$, where each sentence consists of a list of words, $m_i = (w_1, \dots, w_{|m_i|})$, the subtask updater constructs memory blocks $\mathbf{M} \in \mathbb{R}^{E \times K}$, where each column is an $E$-dimensional embedding of a sentence. The subtask updater maintains a memory pointer defined over memory locations, $p_t \in \mathbb{R}^K$, which is used for instruction retrieval. Memory construction and retrieval are formally described as:

Memory: $\mathbf{M} = [\varphi^{w}(m_1), \varphi^{w}(m_2), \dots, \varphi^{w}(m_K)]$    Retrieval: $r_t = \mathbf{M} p_t$

Here $\varphi^{w}(m_i) \in \mathbb{R}^E$ is the embedding of the $i$-th sentence (e.g., bag-of-words). The memory pointer $p_t$ is a non-negative vector that sums to 1. $r_t \in \mathbb{R}^E$ is the retrieved sentence embedding, which is used for computing the subtask arguments. Intuitively, if the memory pointer is a one-hot vector, $r_t$ indicates a single instruction from the whole list of instructions. The meta controller should learn to manage $p_t$ so that it can focus on the correct instruction at each time-step, as further described below.

Location-based Memory Addressing. Since instructions should be executed sequentially, we use a location-based memory addressing mechanism (Zaremba and Sutskever, 2015; Graves et al., 2014) to manage the memory pointer. Specifically, the subtask updater shifts the memory pointer by at most one position:

$p_t = l_t * p_{t-1}$, where $l_t \sim \text{Softmax}(\varphi^{shift}(h_t))$   (7)

where $*$ is a convolution operator and $\varphi^{shift}$ is a multi-layer perceptron (MLP). $l_t \in \mathbb{R}^3$ is an internal action that shifts the memory pointer ($p_t$) by either -1, 0, or +1. This mechanism is illustrated in Figure 9b.

Subtask Arguments. The subtask updater takes the context ($h_t$), updates the memory pointer ($p_t$), retrieves a sentence embedding ($r_t$), and finally computes the subtask arguments as follows:

$\pi(g_t \mid h_t, r_t) = \prod_i \pi(g_t^{(i)} \mid h_t, r_t)$, where $\pi(g_t^{(i)} \mid h_t, r_t) \propto \exp(\varphi^{goal_i}(h_t, r_t))$

where $\varphi^{goal_i}$ is an MLP for the $i$-th subtask argument.

5.2 DIFFERENTIABLE TEMPORAL ABSTRACTIONS

Although the subtask updater can update the memory pointer and compute correct subtask arguments in principle, making a decision at every time-step can be inefficient, because subtasks do not change very frequently. Instead, having temporally-extended actions can be useful for dealing with delayed reward by operating at a larger time-scale (Sutton et al., 1999). Although one could use the termination signal of the subtask controller to define the temporal scale of the meta controller, this approach would result in an open-loop policy that is unable to interrupt ongoing subtasks, which is necessary for dealing with stochastic events.

To address this challenge, we introduce an internal binary action $c_t$ which decides whether to invoke the subtask updater or not. This action is defined as $c_t \sim \varphi^{update}(h_t)$. If $c_t = 1$, the subtask updater updates the memory pointer, retrieves an instruction, and updates the subtask arguments. Otherwise, the meta controller continues communicating the current subtask arguments without involving the subtask updater.

Algorithm 1: Subtask update (hard)
  Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$.  Output: $p_t, r_t, g_t$.
  $c_t \sim \varphi^{update}(h_t)$
  if $c_t = 1$ then  (Update)
    $l_t \sim \text{Softmax}(\varphi^{shift}(h_t))$
    $p_t \leftarrow l_t * p_{t-1}$  (Shift)
    $r_t \leftarrow \mathbf{M} p_t$  (Retrieve)
    $g_t \sim \pi(g_t \mid h_t, r_t)$  (Subtask)
  else
    $p_t \leftarrow p_{t-1}$; $r_t \leftarrow r_{t-1}$; $g_t \leftarrow g_{t-1}$
  end if
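The sketch below illustrates the location-based addressing of Eq. (7) and the hard update of Algorithm 1 in PyTorch. The MLP modules (`update_mlp`, `shift_mlp`, `goal_mlps`), the ordering of the shift logits, and the circular boundary handling are assumptions made for illustration; such details are left to the paper's appendix.

```python
import torch
import torch.nn.functional as F

def shift_pointer(p_prev, shift_logits):
    """Eq. (7): convolve the pointer with a distribution over shifts,
    implemented as a weighted sum of shifted copies of the pointer.
    Assumes logits ordered as shifts (-1, 0, +1); torch.roll wraps
    around, whereas a real implementation would likely clamp at the
    first/last instruction."""
    l = F.softmax(shift_logits, dim=-1)                    # [B, 3]
    shifted = torch.stack([torch.roll(p_prev, s, dims=-1)
                           for s in (-1, 0, 1)], dim=-1)   # [B, K, 3]
    return (shifted * l.unsqueeze(1)).sum(dim=-1)          # [B, K]

def subtask_update_hard(h, p_prev, r_prev, g_prev, M,
                        update_mlp, shift_mlp, goal_mlps):
    """Algorithm 1 (hard update) for a single example (batch of 1).
    M is the instruction memory of shape [B, E, K]."""
    c = torch.bernoulli(torch.sigmoid(update_mlp(h)))      # update decision
    if c.item() == 1:
        p = shift_pointer(p_prev, shift_mlp(h))            # shift pointer
        r = torch.bmm(M, p.unsqueeze(-1)).squeeze(-1)      # retrieve r = M p
        g = [torch.distributions.Categorical(              # one head per
                 logits=mlp(torch.cat([h, r], dim=-1))     # subtask argument
             ).sample() for mlp in goal_mlps]
        return p, r, g
    return p_prev, r_prev, g_prev
```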
During training of the update decision, we use L1 regularization on the probability of update to penalize frequent updates, as in Vezhnevets et al. (2016). The entire scheme is described in Algorithm 1 above.

However, the update decision introduces a non-differentiable variable, which is known to be difficult to optimize in practice. Thus, we propose a differentiable relaxation of the update decision. The key idea is to take the weighted sum of both the 'update' and 'no update' scenarios, as described in Algorithm 2. We found that training the meta controller using Algorithm 2 followed by fine-tuning using Algorithm 1 is crucial for training the meta controller. Note that Algorithm 2 reduces to Algorithm 1 if we sample $c_t$ and $l_t$ instead of taking the weighted sum, which justifies our initialization trick.

Algorithm 2: Subtask update (soft)
  Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$.  Output: $p_t, r_t, g_t$.
  $c_t \leftarrow \varphi^{update}(h_t)$
  $l_t \leftarrow \text{Softmax}(\varphi^{shift}(h_t))$
  $\tilde{p}_t \leftarrow l_t * p_{t-1}$
  $\tilde{r}_t \leftarrow \mathbf{M} \tilde{p}_t$
  $p_t \leftarrow c_t \tilde{p}_t + (1 - c_t) p_{t-1}$
  $r_t \leftarrow c_t \tilde{r}_t + (1 - c_t) r_{t-1}$
  $\pi(g_t^{(i)}) \leftarrow c_t \, \pi(g_t^{(i)} \mid h_t, \tilde{r}_t) + (1 - c_t) \, \pi(g_{t-1}^{(i)})$ for all $i$

5.3 TRAINING

The meta controller is trained on a training set of lists of instructions. The actor-critic method is used to update the parameters of the meta controller, while a pre-trained subtask controller is given and kept fixed. Since the meta controller also learns a subtask embedding $\varphi(g_{t-1})$ and has to deal with unseen subtasks during evaluation, we applied analogy-making regularization to its embedding as well. More details of the objective functions are provided in Appendix E.

6 EXPERIMENTS AND RESULTS

Our experiments were designed to explore the following hypotheses: that our proposed hierarchical architecture generalizes better than a non-hierarchical controller, and that analogy-making regularization and learning temporal abstractions in the meta controller are each beneficial for task generalization. We are also interested in understanding the qualitative properties of our agent's behavior. The demo videos are available at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/task-generalization

6.1 EXPERIMENTAL SETTING

Environment. We developed a 2D grid world based on MazeBase (Sukhbaatar et al., 2015) where the agent can interact with many objects, as illustrated in Figure 1. Unlike the original MazeBase, an observation is represented as a binary 3D tensor: $x_t \in \mathbb{R}^{18 \times 10 \times 10}$, where 18 is the number of object types and $10 \times 10$ is the size of the grid world. Each channel is a binary mask indicating the presence of the corresponding object type (a small encoding sketch follows at the end of this subsection). There are an agent, blocks, water, and 15 types of objects with which the agent can interact (see Appendix D), and all of them are randomly placed for each episode.

The agent has 13 primitive actions: No-operation, Move (North/South/West/East, referred to as "NSWE"), Pick up (NSWE), and Transform (NSWE). Move actions move the agent by one cell in the specified direction. Pick up actions remove the adjacent object in the corresponding relative position, and, depending on the object type, Transform actions either remove it or transform it into another object.

The agent receives a time penalty (-0.1) for each time-step. Water cells act as obstacles, giving -0.3 when the agent visits them. The agent receives a +1 reward when it finishes all instructions in the correct order. Throughout the episode, an enemy randomly appears, moves, and disappears after 10 steps. Transforming an enemy gives a +0.9 reward. More details are described in Appendix D.
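As an illustration of the observation format described above, the sketch below builds the binary $18 \times 10 \times 10$ tensor from a symbolic grid. The object-type indexing and the `grid` data structure are made-up conventions, not taken from the paper.

```python
import numpy as np

N_OBJECT_TYPES = 18   # agent, block, water, and 15 interactable types
GRID_SIZE = 10

def encode_observation(grid):
    """grid[y][x] is a list of object-type ids (0..17) present in that
    cell. Returns a binary tensor of shape [18, 10, 10], one channel
    (binary mask) per object type."""
    x = np.zeros((N_OBJECT_TYPES, GRID_SIZE, GRID_SIZE), dtype=np.float32)
    for y in range(GRID_SIZE):
        for col in range(GRID_SIZE):
            for obj in grid[y][col]:
                x[obj, y, col] = 1.0
    return x
```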
Subtasks and Instructions. The subtask space is defined as the Cartesian product of two arguments: $\mathcal{G} = \{\text{Visit}, \text{Pick up}, \text{Transform}\} \times \{X_1, X_2, \dots, X_{15}\}$, where $X_i$ is an object type. The agent should be on the same cell as the target object to finish a 'Visit' task. For 'Pick up' and 'Transform' tasks, the agent should perform the corresponding primitive action on the target object. If there are multiple target objects in the world, the agent can perform the action on any of them.

The instructions are represented as a sequence of sentences, each of which is one of the following: Visit X, Pick up X, Transform X, Pick up all X, and Transform all X, where 'X' is the target object type. While the first three instructions require the agent to perform the corresponding subtask, the last two require the agent to repeat the same subtask until the target objects completely disappear from the world.

Task Split. Among the 45 subtasks in $\mathcal{G}$, only 30 are presented to the subtask controller during training. 3 subtasks from the training subtasks and 3 subtasks from the unseen subtasks were selected as the validation set to pick the best-performing subtask controller. For training the meta controller, we created four sets of sequences of instructions: training, validation, and two test sets. The training tasks consist of sequences of up to 4 instructions sampled from the set of training instructions. The validation set consists of sequences of 7 instructions with small overlaps with the training instructions and unseen instructions. The two test sets consist of 20 seen and unseen instructions, respectively. More details of the task split are described in Appendix D.

                Train                          Unseen
Agent           Reward   Success   Accuracy    Reward   Success   Accuracy
w/o Analogy     0.56     99.9%     100.0%      -1.88    60.8%     49.6%
w/ Analogy      0.56     99.9%     100.0%      0.55     99.8%     99.6%

Table 1: Performance of the subtask controller. 'Analogy' indicates analogy-making regularization. 'Accuracy' represents termination-prediction accuracy; a termination prediction is counted as correct only if the predictions are correct throughout the whole episode.

Flat Controller. To understand the advantage of the communicating hierarchical structure of our controllers, we trained a flat controller, which is almost identical to the meta controller architecture except that it directly chooses primitive actions without using the subtask controller. Details of the flat controller architecture are described in Appendix F. The flat controller is pre-trained on the training set of subtasks. To be specific, we removed the instruction memory and fed a single instruction as an additional input (i.e., $r_t$ is fixed throughout the episode). We found that the flat controller could not learn any reasonable policy without this pre-training step, which requires modifying the architecture based on domain knowledge. After pre-training, we fine-tuned the flat controller with the instruction memory on lists of instructions. Note that the flat controller is, in principle, also capable of executing instructions as well as dealing with random events.

6.2 TRAINING DETAILS

The subtask controller consists of 3 convolution layers and 2 fully-connected layers and takes the last 2 observations, concatenated along the channel dimension, as input. Each subtask argument ($g^{(i)}$) is linearly transformed, and the transformed arguments are multiplied with each other to compute the joint subtask embedding. This embedding is further linearly transformed into the weight of the first convolution layer and the weight of the first fully-connected layer; a sketch of this conditioning scheme follows below.
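The sketch below illustrates this kind of parameter prediction in PyTorch: the joint subtask embedding is formed by a multiplicative interaction of the linearly transformed arguments and then mapped to the weights of the first convolution layer. All layer sizes and module names are invented for illustration and do not come from the paper's Appendix F.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtaskConditionedConv(nn.Module):
    """First conv layer whose weights are predicted from the subtask.
    Handles one example at a time for clarity; in_ch=36 reflects the
    last 2 observations (2 x 18 channels) concatenated."""
    def __init__(self, n_action_types=3, n_objects=15, emb=64,
                 in_ch=36, out_ch=32, k=3):
        super().__init__()
        self.embed_action = nn.Embedding(n_action_types, emb)
        self.embed_object = nn.Embedding(n_objects, emb)
        self.to_weight = nn.Linear(emb, out_ch * in_ch * k * k)
        self.w_shape = (out_ch, in_ch, k, k)

    def forward(self, obs, g_action, g_object):
        # Joint embedding via multiplicative interaction of the arguments.
        phi_g = self.embed_action(g_action) * self.embed_object(g_object)
        w = self.to_weight(phi_g).view(self.w_shape)   # predicted conv weights
        return F.conv2d(obs.unsqueeze(0), w, padding=1)
```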
The meta controller takes the current observation as input and has 2 convolution layers and 2 fully-connected layers, where the parameters of the first convolution layer and the first fully-connected layer are predicted from the joint embedding of $r_{t-1}$, $\varphi(g_{t-1})$, and $b_t$.

We implemented synchronous actor-critic with 16 CPU threads based on MazeBase (Sukhbaatar et al., 2015), each of which samples a mini-batch of $K$ episodes in parallel. The parameters are updated after every $16 \times K$ episodes. The details of the architectures and hyperparameters are described in Appendix F.

Curriculum Learning via a Forgiving World. We conducted curriculum training by changing the size of the grid world, the density of objects, and the number of instructions according to the agent's success rate. In addition, we trained the soft-architectures on an easier, forgiving environment, which generates target objects whenever they do not exist. Crucially, this allows the agent to recover from past mistakes in which it removed needed target objects. The soft-architectures are fine-tuned on the original (and far more unforgiving) environment, which does not regenerate target objects in the middle of an episode. Training directly in the original environment without first training in the forgiving environment leads to too many failures at executing the task, and the agent does not learn successfully. Finally, the hard-architectures are initialized from the soft-architectures and further fine-tuned on the original environment.

6.3 EVALUATION OF SUBTASK CONTROLLER

To see how well the subtask controller performs separately from the meta controller, we evaluated it on the training set of subtasks and on unseen subtasks in Table 1. The results show that analogy-making regularization is crucial for generalization to unseen subtasks. This suggests that analogy-making regularization plays an important role in learning the relationship between different subtasks and enabling generalization to unseen subtasks.

In addition, we observed that the subtask controller learned a non-trivial policy by exploiting causal relationships. For example, when [Pick up, egg] is given as the subtask arguments but a duck is very close to the agent, it learned to transform the duck and pick up the resulting egg, because transforming a duck turns it into an egg in our environment. More analysis of the subtask controller and of the effect of analogy-making regularization is given in Appendices A and B.

                          Train           Test #1          Test #2          Test #3          Test #4
Set of instructions       Seen            Seen             Unseen           Seen w/o all     Unseen w/o all
Num of instructions       4               20               20               20               20
Forgiving
  Shortest Path           -1.56 (99.6%)   -11.94 (99.1%)                    -9.62 (99.1%)
  Near-Optimal            -0.96 (99.6%)   -9.99 (99.1%)                     -8.19 (99.1%)
  Flat                    -1.64 (85.8%)   -14.53 (65.9%)   -17.25 (23.7%)   -12.38 (60.4%)   -14.18 (16.7%)
  Hierarchical-TA-Analogy -1.05 (92.4%)   -11.06 (86.2%)   -13.69 (51.2%)   -8.54 (91.9%)    -9.91 (75.2%)
Original
  Shortest Path           -1.62 (99.7%)   -11.94 (99.4%)                    -8.72 (99.6%)
  Near-Optimal            -1.34 (99.5%)   -10.30 (99.3%)                    -7.62 (99.4%)
  Flat                    -2.38 (76.0%)   -18.83 (0.1%)    -18.92 (0.0%)    -15.09 (0.0%)    -15.17 (0.0%)
  Hierarchical            -2.04 (72.8%)   -16.85 (16.6%)   -17.66 (6.9%)    -10.99 (49.4%)   -11.40 (47.4%)
  Hierarchical-Analogy    -1.74 (81.0%)   -15.89 (28.0%)   -17.23 (11.3%)   -10.11 (61.8%)   -10.66 (57.7%)
  Hierarchical-TA         -1.38 (92.6%)   -12.96 (62.9%)   -17.19 (13.0%)   -9.11 (74.4%)    -10.37 (61.2%)
  Hierarchical-TA-Analogy -1.26 (95.5%)   -11.30 (81.3%)   -14.75 (40.3%)   -8.24 (85.5%)    -9.51 (73.9%)

Table 2: Performance of the meta controller. Each column corresponds to a different evaluation set of instructions, while each row corresponds to a different configuration of our architecture or the flat controller. Test #3 and Test #4 do not include 'Transform/Pick up all X' instructions. 'TA' indicates the meta controller with temporal abstraction. Each entry shows reward, with success rate in parentheses, averaged over the 10 best of 20 independent runs. The hand-designed baselines report three values each. 'Shortest Path' is a hand-designed policy that executes instructions optimally based on the shortest path but ignores enemies. 'Near-Optimal' is a near-optimal policy that executes instructions based on the shortest path and transforms enemies when they are close to the agent. 'Forgiving' rows show results from the forgiving environment used for curriculum learning, where target objects are regenerated whenever they do not exist in the world.

Figure 4: Performance per number of instructions. From left to right, the plots show reward, success rate, the number of steps, and the average number of instructions completed, respectively. Solid and dashed curves show the performance on seen and unseen instructions, respectively.

6.4 EVALUATION OF META CONTROLLER

We evaluated the meta controller separately from the subtask controller by providing the best-performing subtask controller during training and evaluation. The results are summarized in Table 2 and Figure 4. Note that there is a discrepancy between reward and success rate, because the success rate is measured only on instruction execution, while the reward also takes into account the background task (i.e., handling the randomly appearing enemy).

Overall performance. Table 2 shows that our hierarchical agent with temporal abstraction and analogy-making regularization, denoted Hierarchical-TA-Analogy in the table, can handle 20 seen instructions (Test #1) and 20 unseen instructions (Test #2) correctly with reasonably high success rates. In addition, that agent learned to deal with enemies whenever they appear, and thus it outperforms the 'Shortest Path' policy, which is near-optimal at executing instructions while ignoring enemies. We further investigated how the number of instructions affects performance in Figure 4. Although performance degrades as the number of instructions increases, our architecture finishes 18 of 20 seen instructions and 12 of 20 unseen instructions on average. These results show that our agent is able to generalize to longer compositions of instructions as well as to unseen instructions just by learning to solve short sequences of a subset of instructions.

Flat vs. Hierarchy. All our hierarchical controllers outperform the flat controller both on the training tasks and on longer/unseen instructions (see Table 2). We observed that the flat controller learned a sub-optimal policy that treats 'Transform/Pick up X' instructions as identical to 'Transform/Pick up all X' instructions. In other words, it always transforms or picks up all existing targets.
Although this simple strategy is a reasonable sub-optimal policy, since such wrong actions are not explicitly penalized in our environment other than through the accumulating penalty per time-step, it often unnecessarily removes objects that can potentially be target objects in future instructions. This is why the flat controller performs reasonably well on the short sequences of instructions (training), where such cases are rare, and in the forgiving environment, where target objects are restored whenever needed. However, it completely fails on longer instructions in the original environment, because the entire task becomes unsolvable once target objects are removed in error. This implies that the flat controller struggles to detect precisely when a subtask is finished, whereas our hierarchical controllers can easily detect when a subtask is done, because the subtask controller in our communicating architecture provides a termination signal to the meta controller.

Figure 5: Analysis of the learned policy. 'Update' shows our agent's internal update decision. 'Shift' shows our agent's memory-shift decision, which is either -1, 0, or +1 from top to bottom. The bottom text shows the instruction indicated by the memory pointer, while the top text shows the subtask chosen by the meta controller. (A) The agent transforms the pig given the 'Transform pig' instruction and decides to update the subtask (Update is true) and move to the next instruction. (B) An enemy (red) appears while the agent is executing the 'Pick up all meat' instruction (green boxes indicate meat). The agent changes the subtask to [Transform, enemy]. (C) The agent successfully transforms the enemy and sets the subtask to [Pick up, meat] to resume executing the instruction. (D) The agent picks up the last meat in the world, moves the memory pointer to the next instruction, and sets a new subtask according to that instruction.

In addition, the flat controller tends to ignore enemies, while the hierarchical controllers try to deal with enemies whenever they exist by changing the subtask-arguments communicated by the meta controller to the subtask controller, which is a better strategy for maximizing reward. The flat controller instead has to use primitive actions to deal with both instructions and enemies. This implies that our communicating hierarchical controllers are better suited to context switching between different sources of tasks (i.e., executing instructions and dealing with enemies).

Finally, we observed that the flat controller often makes mistakes on unseen instructions (e.g., transforming X when given 'Visit X' as the instruction). In contrast, the hierarchical controllers do not make such mistakes, as the subtask controller generalizes well to unseen instructions, as discussed in Section 6.3.

Effect of Analogy-making. Table 2 shows that analogy-making significantly improves generalization performance, especially on Test #2 (Hierarchical-Analogy outperforms Hierarchical, and Hierarchical-TA-Analogy outperforms Hierarchical-TA). This implies that, given an unseen target object for the 'Transform/Pick up all' instruction, the meta controller without analogy-making tends to fail to check whether the target object still exists. On the other hand, there is almost no improvement from analogy-making on Test #3 and Test #4, where there are no 'all' instructions.
This is because, for non-'all' instructions, the meta controller can simply rely on the subtask termination ($b_t$) given by the subtask controller to check whether the current instruction is finished, and the subtask controller (trained with analogy-making) successfully generalizes to unseen subtasks and provides accurate termination signals to the meta controller. The observation that analogy-making consistently improves generalization performance for both controller variants suggests that analogy-making is crucial for generalization to unseen tasks.

Effect of Temporal Abstraction. To see the effect of temporal abstractions, we trained a baseline that updates the memory pointer and the subtask at every time-step, shown as 'Hierarchical' and 'Hierarchical-Analogy' in Table 2. It turns out that the agent without temporal abstractions performs much worse on both the training and testing tasks. We hypothesize that temporal credit assignment becomes easier with temporal abstractions because the subtask updater (described in Section 5.1.2) can operate at a larger time-scale by decoupling the update decision from the subtask selection.

Figure 6: Learned policy in the 3D environment. The agent observes first-person-view images, while the top-down view is not available to the agent. The text on the right of the figure shows the list of instructions. (A) The agent cannot see the target block (blue) at this point due to the partially observable nature of the environment and the randomness of the topology; the agent learned to explore the map to find the target block. (B) Although the current instruction is 'Transform purple', the agent decides to transform the green block, because transforming a green block gives a large positive reward (stochastic event). (C) After dealing with the stochastic event, the agent resumes executing the instruction (Transform purple). (D) The agent finishes the whole list of instructions.

                      Train           Test #1          Test #2
Set of instructions   Seen            Seen             Unseen
Num of instructions   4               20               20
Flat                  -1.87 (92.2%)   -22.35 (68.7%)   -39.24 (0.0%)
Ours                  -1.41 (95.0%)   -15.60 (92.2%)   -17.80 (84.3%)

Table 3: Performance on the 3D environment.

In particular, given 'all' instructions, the agent should repeat the same subtask while not changing the memory pointer for a long time, and the reward is even more delayed. This can confuse a subtask updater without temporal abstractions, because it must make the same decision at every time-step of such instructions. In contrast, the subtask updater with temporal abstractions gets direct feedback from the long-term future, since one decision made by the subtask updater results in multiple primitive actions. We conjecture that this is why the agents learn more stably with temporal abstractions under delayed reward.

Analysis of the Learned Policy. We visualized our agent's behavior on a task with a long list of instructions in Figure 5. We observed that our meta controller learned to communicate the correct subtask-arguments to the subtask controller and to move precisely to the next instruction by shifting the memory pointer whenever an instruction is finished.
More interestingly, whenever an enemy appears, our meta controller immediately changes the subtask to [Transform, enemy] regardless of the instruction, and resumes executing the instruction after dealing with the enemy. Throughout the background task and the 'all' instructions, the meta controller keeps the memory pointer unchanged, as illustrated in (B-D) of the figure. In addition, the agent learned to update the memory pointer and the subtask-arguments almost only when needed, which provides the subtask updater with temporally-extended actions. This is not only computationally efficient but also useful for learning a better policy, as discussed above.

6.5 EVALUATION IN 3D VISUAL ENVIRONMENT

We developed a similar set of tasks in a Minecraft environment based on Oh et al. (2016), as shown in Figure 6. In this environment, the agent can observe only first-person-view images, which naturally involves partial observability. Even executing a simple instruction (e.g., Visit X) requires the agent to explore the topology to find the target.

An observation is represented as a 64x64 RGB image ($x_t \in \mathbb{R}^{3 \times 64 \times 64}$). There are 7 different types of colored blocks: red, blue, green, yellow, brown, purple, and black, which correspond to the different types of objects in the grid-world experiment. As in the 2D grid-world environment, the topology of walls and the colored blocks is randomly generated for every episode. A wall not only acts as an obstacle but also occludes the objects behind it, as shown in Figure 6, which makes the task more challenging.

The agent has 9 actions: Look (left/right/up/down), Move (forward/backward), Pick up, Transform, and No operation. Look left/right actions change the yaw of the agent by 90 degrees, while Look up/down actions change the pitch by 45 degrees. Move forward/backward actions move the agent by one block according to the agent's looking direction. Pick up removes the block in front of the agent, and Transform changes the block in front of the agent into a black block.

We used the same reward function as in the 2D grid-world experiment. In addition, a green block randomly appears, and transforming a green block gives a +0.9 reward regardless of the instructions, which acts as a stochastic event. Each instruction is one of the following: Visit X, Pick up X, and Transform X, where 'X' is the target color. We excluded 'all' instructions in this environment because we found that solving 'all' instructions within a limited amount of time is extremely challenging even for humans, due to the partial observability.

We used almost the same architectures as in the 2D grid-world experiment, except that a long short-term memory (Hochreiter and Schmidhuber, 1997) is added on top of the final convolution layer in both the subtask controller and the meta controller, as this is one of the simplest ways to deal with partial observability (Hausknecht and Stone, 2015; Mnih et al., 2016; Oh et al., 2016). We followed the same training scheme as in the 2D grid-world experiment.

Table 3 shows that our proposed architecture significantly outperforms the flat controller baseline, especially on the test sets of instructions. We observed that the flat controller tends to struggle with detecting when an instruction is finished and completely fails on unseen instructions, while our architecture performs well on unseen and longer instructions.
As shown in Figure 6, our architecture learned to find the target blocks, detect when an instruction is finished, and deal with the stochastic event. This result demonstrates that the proposed approach can also be applied to a more complex visual environment.

7 CONCLUSION

In this paper, we explored zero-shot task generalization in RL with a new problem where the agent is required to execute a sequence of instructions and to generalize over longer sequences of (unseen) instructions without additional learning. To solve the problem, we presented a hierarchical deep RL architecture in which a meta controller learns a closed-loop policy of subtask-argument communications to a subtask controller, which executes the given subtask and communicates its accomplishment back to the meta controller. Our architecture not only generalizes to unseen tasks after training but also deals with random events relevant to a background task. In addition, we proposed several techniques that led to improvements in both training and generalization performance. First, analogy-making regularization turned out to be crucial for generalization to unseen subtasks. Second, learning temporal abstractions improved the performance by making the subtask updater operate at a larger time-scale. One interesting line of future work would be to define and solve richer task instructions such as conditional statements (i.e., IF-THEN-ELSE) and loop instructions (e.g., collect 3 target objects). Moreover, end-to-end training of the whole hierarchy and discovering the subtask decomposition would be important future work.
Sy1CLUdrx
Novel architectural ideas; algorithmically complex
7: Good paper, accept
This paper presents an architecture and corresponding algorithms for learning to act across multiple tasks described in natural language. The proposed system is hierarchical and is closely related to the options framework. However, rather than learning a discrete set of options, it learns a mapping from natural-language instructions to an embedding which implicitly (dynamically) defines an option. This is a novel and interesting new perspective on options, which had only slightly been explored in the linear setting (see comments below). I find the use of policy distillation particularly relevant for this setting. This, on its own, could be a takeaway for many RL readers who might not necessarily be interested in NLP applications.

In general, the paper does not describe a single, simple, end-to-end recipe for learning with this architecture. It rather relies on many recent advances skillfully combined: generalized advantage estimation, analogy-making regularizers, L1 regularization, memory addressing, matrix factorization, policy distillation. I would have liked to see some analysis but understand that it would certainly have been no easy task. For example, when you say "while the parameters of the subtask controller are frozen", this sounds to me like you're doing some kind of two-timescale stochastic gradient descent. I'm also unsure how you deal with the SMDP structure in your gradient updates when you move to the "temporal abstractions" setting.

I am inclined to believe that this approach has the potential to scale up to very large domains, but the paper currently does not demonstrate this empirically. Like any typical reviewer, I would be tempted to say that you should perform larger experiments. However, I'm also glad that you have shown that your system performs well in a "toy" domain. The characterization in Figure 3 is insightful and makes a good case for the analogy regularizer and the need for hierarchy. Overall, I think that the proposed architecture would inspire other researchers and would be worth presenting at ICLR. It also contains novel elements (subtask embeddings) which could be useful beyond the deep and NLP communities, in the more "traditional" RL communities.

# Parameterized Options

Sutton et al. (1999) did not explore the concept of *parameterized* options originally. It only came later, perhaps first with ["Optimal Policy Switching Algorithms for Reinforcement Learning", Comanici & Precup, 2010] or ["Unified Inter and Intra Options Learning Using Policy Gradient Methods", Levy & Shimkin, 2011]. Konidaris also has a line of work on "parameterized skills": ["Learning Parameterized Skills", da Silva, Konidaris, Barto, 2012] or ["Reinforcement Learning with Parameterized Actions", Masson, Ranchod, Konidaris, 2015]. Also, I feel that there is a very important distinction to be made with the expression "parameterized options". In your work, "parameterized" comes in two flavors. In the spirit of policy gradient methods, we can have options whose policies and termination functions are represented by function approximators (in the same way that we have function approximation for value functions). Those options have parameters, and we might call them "parameterized" because of that. This is the setting of Comanici & Precup (2010), Levy & Shimkin (2011), Bacon & Precup (2015), and Mankowitz, Mann, and Mannor (2016), for example. Now, there is a second case where options/policies/skills take parameters *as inputs* and act accordingly. This is what Konidaris et al.
means by "parameterized", whose meaning differs from the "function approximation" case above. In your work, the embedding of subtasks arguments is the "input" to your options and therefore behave as "parameters" in the sense of Konidaris. # Related Work I CTRL-F through the PDF but couldn't find references to any of S.R.K. Branavan's work. Branavan's PhD thesis had to do with using control techniques from RL in order to interpret natural instructions so as to achieve a goal. For example, in "Reinforcement Learning for Mapping Instructions to Actions", an RL agent learns from "Windows troubleshooting articles" to interact with UI elements (environment) through a Softmax policy (over linear features) learned by policy gradient methods. As you mention under "Instruction execution" the focus of your work in on generalization, which is not treated explicitely (afaik) in Branavan's work. Still, it shares some important algorithmic and architectural similarities which should be discussed explicitly or perhaps even compared to in your experiments (as a baseline). ## Zero-shot and UVFA It might also want to consider "Learning Shared Representations for Value Functions in Multi-task Reinforcement Learning", Borsa, Graepel, Shawe-Taylor] under the section "zero-shot tasks generalization". # Minor Issues I first read the abstract without knowing what the paper would be about and got confused in the second sentence. You talk about "longer sequences of previously seen instructions", but I didn't know what clearly meant by "instructions" until the second to last sentence where you specify "instructions described by *natural language*." You could perhaps re-order the sentences to make it clear in the second sentence that you are interested in NLP problems. Zero-generalization: I was familiar with the term "one-shot" but not "zero-shot". The way that the second sentence "[...] to have *similar* zero-shot [...]" follows from the first sentence might as well hold for the "one-shot" setting. You could perhaps add a citation to "zero-shot", or define it more explicitly from the beginning and compare it to the one-shot setting. It could also be useful if you explain how zero-shot relates to just the notion of learning with "priors". Under section 3, you say "cooperate with each other" which sounds to me very much like a multi-agent setting, which your work does not explore in this way. You might want to choose a different terminology or explain more precisely if there is any connection with the multi-agent setting. The second sentence of section 6 is way to long and difficult to parse. You could probably split it in two or three sentences.
SJttqw5ge
ICLR.cc/2017/conference
2017
Communicating Hierarchical Neural Controllers for Learning Zero-shot Task Generalization
["Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli"]
The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.
["Reinforcement Learning", "Deep learning"]
ABSTRACT

The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.

1 INTRODUCTION

Humans can often generalize to novel tasks even without any additional learning by leveraging past learning experience. We would like our artificial agents to have similar "zero-shot" generalization capabilities. For example, after learning to solve tasks with instructions such as 'Go to X (or Y)' and 'Pick up Y (or Z)', our agent should be able to infer the underlying goal of new tasks with instructions like 'Go to Z', which requires disentangling the verbs ('Go to/Pick up') and the nouns/objects ('X, Y, Z'). Furthermore, we would like our agents to learn to compose policies to solve novel tasks composed of sequences of seen and unseen instructions. Developing the ability to achieve such generalizations is a key challenge in artificial intelligence and the subfield of reinforcement learning (RL).

[Figure 1: Example of the grid world and instructions. The agent is tasked to execute longer sequences of instructions after being trained on short sequences of instructions; in addition, previously unseen instructions can be given during evaluation (blue text). The agent can get more reward if it deals with randomly appearing enemies (red outlined box) regardless of the current instructions.]

In this paper, we study the problem of zero-shot task generalization in RL by introducing the "instruction execution" problem, where the agent is required to learn through interaction with its environment how to achieve an overall task specified by a list of high-level instructions (see Figure 1). As motivation for this problem, consider a human owner training its new household robot to execute complex tasks specified by natural language text that decomposes the task into a sequence of instructions. Given that it is infeasible to explicitly train the robot on all possible instruction sequences, this problem involves two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where some of the instructions themselves were previously not seen. Of course, the usual RL problem of learning policies through interaction to accomplish the goals of an instruction remains part of the problem as well. We assume that the agent does not
receive any signal on completing or failing to complete individual instructions from the environment/owner, and so the informative reward signal is delayed until the end. Furthermore, there can be random events in the environment that require the agent to interrupt whatever it is doing and deviate from the instructions to maintain some background task, as described in Figure 1. Altogether this makes for a challenging zero-shot task generalization RL problem.

Brief Background: RL tasks composed of sequences of subtasks have been studied before, and a number of hierarchical RL approaches have been designed for them. Typically these have the form of a meta controller and a set of lower-level controllers for subtasks (Sutton et al., 1999; Dietterich, 2000; Parr and Russell, 1997). The meta controller is limited to selecting one from a set of lower-level controllers to employ at any time. This makes it impossible for the low-level controller to generalize to new subtasks without training a new low-level controller separately. Much of the previous work also assumes that the overall task is fixed (e.g., the Taxi domain (Dietterich, 2000; Ghavamzadeh and Mahadevan, 2003)). Transfer learning across multiple compositional tasks has typically been studied in RL formulations in which new tasks are only presented via a new reward function from the environment (Singh, 1991; 1992), and so there is no opportunity for fast model-free generalization. To the best of our knowledge, zero-shot model-free generalization to new or longer tasks as well as unseen tasks has not been well-studied in the RL setting.

Our Architecture: This paper presents a hierarchical deep RL architecture (see Figure 2) that consists of two interacting neural controllers: a meta controller that repeatedly chooses an instruction and, conditioned on the current state of the environment, translates it into subtask-arguments (details on this in later sections) and communicates those to the subtask controller, which in turn chooses primitive actions given the subtask. This makes the subtask controller a parameterized option (Sutton et al., 1999) module in which the parameters are the subtask-arguments mentioned above. On top of the subtask controller, the meta controller is trained to select proper subtask-arguments depending on observations from the environment, feedback from the subtask controller about termination, and the task instructions. In order to generalize over unseen instructions, we propose analogy-making regularization (discussed in Section 4.1), which encourages learning subtask embeddings that capture correspondences between similar subtasks. In addition, we propose a new differentiable neural architecture in the meta controller that implicitly learns temporal abstractions so that it can operate at a larger time-scale and update the subtask-arguments to the subtask controller only when needed.

Our Results: We developed a 2D grid world environment where the agent can interact with many objects, as illustrated in Figure 1, based on MazeBase (Sukhbaatar et al., 2015) (see Section 6.1 for details). The empirical results show that the meta controller's ability to learn temporal abstractions and a form of analogy-making regularization were both key in allowing our hierarchical architecture to generalize in a zero-shot fashion to unseen tasks.
We also demonstrated that the same architecture can generalize to unseen and longer instructions in a 3D visual environment.

2 RELATED WORK

Hierarchical Reinforcement Learning. In addition to the hierarchical RL described in Section 1, there is a line of work on portable options for solving sequential tasks (Konidaris et al., 2012; Konidaris and Barto, 2007). They proposed agent-space options that can be re-used to deal with new problems. However, the optimal sequence of options (e.g., picking up a key followed by opening a door) is fixed throughout training and evaluation in their problem. On the other hand, in our work the agent is required to perform new sequences of tasks depending on the given instructions. Our work is also closely related to Programmable HAMs (PHAMs) (Andre and Russell, 2000; 2002) in that a PHAM is designed to execute a given program. However, the program explicitly specifies the policy in PHAM, which effectively reduces the state-action space. In contrast, a list of instructions is a partial description of the task in our work, which means that the policy is not forced to follow the instructions but uses them as a guide to maximize its reward. For example, interrupt conditions need to be manually specified by the program in PHAM, while they are not specified in our instructions and should instead be learned by the agent.

Hierarchical RL has recently been combined with deep learning. Kulkarni et al. (2016) proposed hierarchical Deep Q-Learning and demonstrated improved exploration in a challenging Atari game. Tessler et al. (2016) proposed a similar architecture that allows the high-level controller to choose primitive actions directly. Bacon and Precup (2015) proposed the option-critic architecture, which learns options without any domain knowledge, and demonstrated that it can learn distinct options in Atari games. Vezhnevets et al. (2016) proposed a deep architecture that automatically learns macro-actions. Unlike these recent works that aim to solve a single task, the goal of our work is to build a multi-task policy that can generalize over many different sequences of tasks.

Zero-shot Task Generalization and Parameterized Options. There have been only a few studies that aim to generalize over new tasks in a zero-shot fashion (i.e., without additional learning). da Silva et al. (2012) proposed the concept of a parameterized skill, which maps a set of task descriptions to policies. Similarly, Isele et al. (2016) proposed a method for zero-shot task generalization that uses task descriptors to predict the parameters of the policy, with coupled dictionary learning under sparsity constraints to enable zero-shot learning. Schaul et al. (2015) proposed universal value function approximators (UVFAs), which learn a value function given a state and goal pair, and showed that their framework can generalize over unseen goals. Borsa et al. (2016) proposed to learn a representation of state and action shared across different tasks. However, the proposed approach lacks the ability to solve new tasks in a zero-shot way. Our subtask controller implements the idea of a parameterized skill or universal option. Unlike the previous works, however, we propose to build a high-level controller (meta controller) on top of the subtask controller to deal with sequential tasks.

Instruction Execution. There has been a line of work on building agents that can execute natural language instructions: Tellex et al. (2011; 2014) for robotics and MacMahon et al.
(2006); Chen and Mooney (2011); Mei et al. (2015) for simulated environments. However, these approaches focus on natural language understanding to map instructions to a sequence of actions or groundings in a supervised setting. In contrast, we focus on generalization to different sequences of instructions without any supervision for language understanding or for actions. Branavan et al. (2009) also tackle a similar problem of mapping from natural language instructions to a sequence of actions through RL. However, their agent is given a single sentence at a time from the environment, while our agent has to deal with a full list of instructions. In addition, they do not discuss how to deal with unseen instructions, which is the main focus of our paper.

3 OVERVIEW

[Figure 2: Overview of our architecture.]

Goal. We aim to learn a multi-task policy, which is a mapping $\pi: \mathcal{S} \times \mathcal{M} \rightarrow \mathcal{A}$, where $\mathcal{S}$ is a set of states (or observations), $\mathcal{M}$ is a set of lists of instructions, and $\mathcal{A}$ is a set of primitive actions. More importantly, since $\mathcal{M}$ can be arbitrarily large, our goal is to find an optimal policy $\pi$ on a very small set of lists of instructions $\mathcal{M}' \subset \mathcal{M}$ such that $\pi$ is also optimal on the entire set of lists of instructions $\mathcal{M}$.

Hierarchical Structure and Communication Protocol. As illustrated in Figure 2, the proposed architecture consists of a meta controller, which selects a subtask, and a subtask controller, which executes the given subtask. A subtask is further decomposed into several arguments. More specifically, a space of subtasks $\mathcal{G}$ is defined using the Cartesian product of their arguments, $\mathcal{G} = \mathcal{G}^{(1)} \times \dots \times \mathcal{G}^{(n)}$, where $\mathcal{G}^{(i)}$ is the set of $i$-th arguments (e.g., $\mathcal{G} = \{\text{Visit}, \text{Pick up}\} \times \{A, B\}$). In addition, the subtask controller provides useful information to the meta controller by giving a termination signal for the given subtask. This communication protocol allows each controller not only to focus on its own independent role but also to communicate with the other to learn a complex closed-loop policy.

Subtask Controller. The subtask controller is a mapping $\mathcal{S} \times \mathcal{G} \rightarrow \mathcal{A} \times \mathcal{B}$, which maps a state and a subtask to an action and a termination signal ($\mathcal{B} = \{0, 1\}$) indicating whether the subtask is finished or not. The subtask controller is trained prior to training the meta controller. The main challenge for the subtask controller is that only a subset of subtasks ($\mathcal{U} \subset \mathcal{G}$) is observed during training, and it should be able to generalize over unseen subtasks without experiencing them. Section 4 describes how to construct the subtask architecture parameterized by a neural network and discusses how to generalize over unseen subtasks.

Meta Controller. The meta controller is a mapping $\mathcal{S} \times \mathcal{M} \times \mathcal{G} \times \mathcal{B} \rightarrow \mathcal{G}$, which decides a subtask from a state, a list of instructions, the subtask currently being executed, and whether that subtask is finished. Thus, the meta controller should understand natural language instructions and pass proper subtask-arguments to the subtask controller.

[Figure 3: Proposed neural network architectures: (a) subtask controller; (b) meta controller. See text for details.]

It is important to note that natural language instructions are not directly subtasks; indeed, there is not a one-to-one correspondence between instructions and subtask-arguments. This is due to a number of important reasons.
First, instructions such as 'Pick up all X' are executed by repeatedly solving the subtask [Pick up, X]. Second, the meta controller sometimes needs to interrupt ongoing subtasks and replace them with other subtasks that are not relevant to the instruction, because of the background task triggered by stochastic events as described in Figure 1.

Another challenge for the meta controller is that it should deal with the partial observability induced by the list of instructions. This is because the agent is not told by the environment which instruction to execute at each time-step; it is given just the full list of instructions. Thus, the meta controller should remember how many instructions it has executed and decide when to move to the next instruction. Section 5.1 describes how to construct a memory-based neural network to deal with this challenge.

Finally, it is desirable for the meta controller to operate at a larger time-scale, since a subtask does not change frequently once it is chosen. We describe a novel way to implicitly learn such a temporal scale of the meta controller through neural networks in Section 5.2.

4 SUBTASK CONTROLLER

Given an observation $s_t \in \mathcal{S}$ and subtask arguments $g = (g^{(1)}, \dots, g^{(n)}) \in \mathcal{G}$, the subtask controller is defined by the following functions:

Policy: $\pi(a_t | s_t, g)$    Termination: $\beta(b_t | s_t, g) = P(s_t \in \mathcal{T}_g)$

where $\pi$ is the policy optimized for the subtask, $\beta$ is a termination function giving the probability that the state is terminal for the given subtask, and $\mathcal{T}_g$ is the set of terminal states. The subtask controller is represented by a neural network, as illustrated in Figure 3a. The network learns a representation of the subtask, $\varphi(g)$, which is used to condition the entire network through multiplicative interactions, as suggested by Memisevic and Hinton (2010); Lei Ba et al. (2015); Bertinetto et al. (2016). Further details are described in Appendix F.

4.1 LEARNING TO GENERALIZE BY ANALOGY-MAKING

When learning a non-linear subtask embedding from arguments, $\varphi(g)$, it is desirable for the network to learn prior knowledge about the relationship between different subtask arguments in order to infer the goal of unseen configurations of arguments. To this end, we propose a novel analogy-making regularizer inspired by Reed et al. (2015); Hadsell et al. (2006); Reed et al. (2014). The main idea is to learn correspondences between subtasks. For example, if target objects and 'Visit/Pick up' tasks are independent, we can enforce [Visit, X] : [Visit, Y] :: [Pick up, X] : [Pick up, Y] for any X and Y in the embedding space, so that the agent learns to perform [Pick up, Y] as it performs [Pick up, X] and vice versa.

More specifically, we define several constraints as follows:

$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \approx 0$ if $g_A : g_B :: g_C : g_D$   (1)
$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \geq \tau_{dis}$ if $g_A : g_B \neq g_C : g_D$   (2)
$\|\varphi(g_A) - \varphi(g_B)\| \geq \tau_{diff}$ if $g_A \neq g_B$   (3)

where $g_k = [g_k^{(1)}, g_k^{(2)}, \dots, g_k^{(n)}] \in \mathcal{G}$ are subtask arguments. Eq. (1) expresses the analogy-making relationship, while Eq. (2) and Eq. (3) prevent trivial solutions. To satisfy the above constraints, we propose the following objective functions based on the contrastive loss (Hadsell et al., 2006):

$L_{sim} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{sim}}\left[\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|^2\right]$   (4)
$L_{dis} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{dis}}\left[\max(0, \tau_{dis} - \|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|)^2\right]$   (5)
$L_{diff} = \mathbb{E}_{(g_A, g_B) \sim \mathcal{G}_{diff}}\left[\max(0, \tau_{diff} - \|\varphi(g_A) - \varphi(g_B)\|)^2\right]$   (6)

where $\mathcal{G}_{sim}, \mathcal{G}_{dis}, \mathcal{G}_{diff}$ consist of subtask arguments satisfying the conditions in Eq. (1), Eq. (2), and Eq. (3), respectively, and $\tau_{dis}, \tau_{diff}$ are threshold distances (hyperparameters).
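To make these objectives concrete, here is a minimal PyTorch-style sketch (ours, not the authors' code) of Eqs. (4)-(6). The embedding network producing the inputs and the margin values are assumptions, and the quadruples/pairs are assumed to be pre-sampled from $\mathcal{G}_{sim}$, $\mathcal{G}_{dis}$, and $\mathcal{G}_{diff}$.

```python
# Illustrative sketch of the analogy-making objectives (Eqs. 4-6).
# Inputs are batches of subtask embeddings phi(g) already sampled
# from G_sim / G_dis / G_diff; shapes are (batch, embedding_dim).
import torch

def l_sim(eA, eB, eC, eD):
    # Eq. (4): analogous quadruples should satisfy eA - eB ~ eC - eD.
    return ((eA - eB - eC + eD) ** 2).sum(dim=-1).mean()

def l_dis(eA, eB, eC, eD, tau_dis=1.0):
    # Eq. (5): non-analogous quadruples should differ by at least tau_dis.
    d = (eA - eB - eC + eD).norm(dim=-1)
    return (torch.clamp(tau_dis - d, min=0.0) ** 2).mean()

def l_diff(eA, eB, tau_diff=1.0):
    # Eq. (6): distinct subtasks should be at least tau_diff apart.
    d = (eA - eB).norm(dim=-1)
    return (torch.clamp(tau_diff - d, min=0.0) ** 2).mean()

# The regularizer described next is a weighted sum of the three terms,
# e.g. reg = w1 * l_sim(...) + w2 * l_dis(...) + w3 * l_diff(...).
```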
The final analogy-making regularizer is the weighted sum of the three objectives in Eqs. (4)-(6).

Analogies Under Non-independence. Although we use the analogy-making regularizer so that all configurations of subtask arguments are valid and independent from each other throughout the main experiment, the regularizer can also be used to inject prior knowledge so that the agent generalizes to unseen subtasks in a specific way. For example, if some objects should be handled differently given the same subtask, we can apply the analogy-making regularizer so that Eq. (1) is satisfied only between objects of the same type. This is further discussed in Appendix B.

4.2 TRAINING

The subtask controller is trained on a subset of subtasks ($\mathcal{U} \subset \mathcal{G}$) by directly providing subtask arguments. The policy of the subtask controller is trained through the actor-critic method (Konda and Tsitsiklis, 1999) with generalized advantage estimation (GAE) (Schulman et al., 2015). We also found that pre-training the subtask controller through policy distillation (Rusu et al., 2015; Parisotto et al., 2015) gives slightly better results. The idea of policy distillation is to train separate policies for each subtask and use them to provide action labels for training the subtask controller. Throughout training, the subtask controller is also made to predict whether the current state is terminal or not through a binary classification objective, and the analogy-making regularizer is applied to the subtask embedding separately. The full details of the learning objectives are described in Appendix E.1.

5 META CONTROLLER

The role of the meta controller is to decide subtask arguments $g_t \in \mathcal{G}$ from an observation $s_t \in \mathcal{S}$, a list of instructions $M \in \mathcal{M}$, the previously selected subtask $g_{t-1}$, and its termination signal ($b$) from the subtask controller. Section 5.1 describes the overall architecture of the meta controller for dealing with the partial observability induced by the list of instructions, as discussed in Section 3. Section 5.2 describes a novel way to learn the time-scale of the meta controller so that it can implicitly operate at a large time-scale.

5.1 ARCHITECTURE

In order to keep track of its progress on instruction execution, the meta controller maintains its internal state by computing a context vector (described in Section 5.1.1) and by focusing on one instruction at a time from the list of instructions $M$ (described in Section 5.1.2). The entire architecture is illustrated in Figure 3b, and further details are described in Appendix F.

5.1.1 CONTEXT

Given the sentence embedding $r_{t-1}$ retrieved at the previous time-step from the instructions (described in Section 5.1.2), the previously selected subtask $g_{t-1}$, and the subtask termination $b_t \sim \beta(b_t | s_t, g_{t-1})$, the meta controller computes the context vector ($h_t$) through a neural network:

$h_t = f(s_t, r_{t-1}, g_{t-1}, b_t)$

where $f$ is a neural network. Intuitively, $g_{t-1}$ and $b_t$ provide information about which subtask was being solved by the subtask controller and whether it has been finished. Note that the subtask does not necessarily match the retrieved instruction ($r_{t-1}$), e.g., when the agent is dealing with the background task.
By combining all this information, $h_t$ encodes the spatio-temporal context, which is used to determine which instruction to solve and the next subtask.

5.1.2 SUBTASK UPDATER

The meta controller has a subtask updater that constructs a memory structure from the list of instructions, retrieves an instruction by maintaining a pointer into the memory structure, and computes the subtask arguments.

Instruction Memory. Given instructions as a list of sentences $M = (m_1, m_2, \dots, m_K)$, where each sentence consists of a list of words, $m_i = (w_1, \dots, w_{|m_i|})$, the subtask updater constructs memory blocks $\mathbf{M} \in \mathbb{R}^{E \times K}$, where each column is an $E$-dimensional embedding of a sentence. The subtask updater maintains a memory pointer over memory locations, $p_t \in \mathbb{R}^K$, which is used for instruction retrieval. Memory construction and retrieval are formally described as:

Memory: $\mathbf{M} = [\varphi^w(m_1), \varphi^w(m_2), \dots, \varphi^w(m_K)]$    Retrieval: $r_t = \mathbf{M} p_t$

Here $\varphi^w(m_i) \in \mathbb{R}^E$ is the embedding of the $i$-th sentence (e.g., bag-of-words). The memory pointer $p_t$ is a non-negative vector that sums to 1, and $r_t \in \mathbb{R}^E$ is the retrieved sentence embedding, which is used for computing the subtask-arguments. Intuitively, if the memory pointer is a one-hot vector, $r_t$ indicates a single instruction from the whole list of instructions. The meta controller should learn to manage $p_t$ so that it can focus on the correct instruction at each time-step, as further described below.

Location-based Memory Addressing. Since instructions should be executed sequentially, we use a location-based memory addressing mechanism (Zaremba and Sutskever, 2015; Graves et al., 2014) to manage the memory pointer. Specifically, the subtask updater shifts the memory pointer by at most one slot:

$p_t = l_t * p_{t-1}$, where $l_t \sim \text{Softmax}(\varphi^{shift}(h_t))$   (7)

where $*$ is a convolution operator and $\varphi^{shift}$ is a multi-layer perceptron (MLP). $l_t \in \mathbb{R}^3$ is an internal action that shifts the memory pointer ($p_t$) by either -1, 0, or +1. This mechanism is illustrated in Figure 9b.

Subtask Arguments. The subtask updater takes the context ($h_t$), updates the memory pointer ($p_t$), retrieves a sentence embedding ($r_t$), and finally computes the subtask-arguments as:

$\pi(g_t | h_t, r_t) = \prod_i \pi(g_t^{(i)} | h_t, r_t)$, where $\pi(g_t^{(i)} | h_t, r_t) \propto \exp(\varphi^{goal_i}(h_t, r_t))$

where $\varphi^{goal_i}$ is an MLP for the $i$-th subtask argument.

5.2 DIFFERENTIABLE TEMPORAL ABSTRACTIONS

Although the subtask updater can update the memory pointer and compute correct subtask-arguments in principle, making a decision at every time-step can be inefficient, because subtasks do not change very frequently. Instead, having temporally-extended actions can be useful for dealing with delayed reward by operating at a larger time-scale (Sutton et al., 1999). Although one could use the termination signal of the subtask controller to define the temporal scale of the meta controller, this approach would result in an open-loop policy that is not able to interrupt ongoing subtasks, which is necessary to deal with stochastic events.

To address this challenge, we introduce an internal binary action $c_t$ that decides whether to run the subtask updater or not, defined as $c_t \sim \varphi^{update}(h_t)$. If $c_t = 1$, the subtask updater updates the memory pointer, retrieves an instruction, and updates the subtask arguments. Otherwise, the meta controller keeps communicating the current subtask arguments without involving the subtask updater. The entire scheme is described in Algorithm 1.

Algorithm 1 Subtask update (Hard)
  Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$    Output: $p_t, r_t, g_t$
  $c_t \sim \varphi^{update}(h_t)$
  if $c_t = 1$ then                                   (Update)
      $l_t \sim \text{Softmax}(\varphi^{shift}(h_t))$
      $p_t \leftarrow l_t * p_{t-1}$                  (Shift)
      $r_t \leftarrow \mathbf{M} p_t$                 (Retrieve)
      $g_t \sim \pi(g_t | h_t, r_t)$                  (Subtask)
  else
      $p_t \leftarrow p_{t-1}$;  $r_t \leftarrow r_{t-1}$;  $g_t \leftarrow g_{t-1}$
  end if
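As an illustration, the following sketch (our assumptions, not the authors' code) implements the pointer shift of Eq. (7) and the hard update of Algorithm 1 with plain PyTorch ops. The boundary handling of the shift (circular, via torch.roll) and the names of the shift/goal networks are assumptions.

```python
# Sketch of Eq. (7) and Algorithm 1. `M` is the (E, K) instruction memory;
# shift_net and goal_nets are assumed MLP heads producing logits.
import torch
import torch.nn.functional as F

def shift_pointer(p_prev, shift_logits):
    """p_prev: (K,) distribution over instructions; shift_logits: (3,)."""
    l = F.softmax(shift_logits, dim=0)  # weights for shifts of -1, 0, +1
    # 1D convolution of the pointer with the length-3 kernel l; torch.roll
    # is used for brevity, so the boundary wraps around (an assumption).
    return (l[0] * torch.roll(p_prev, -1) +
            l[1] * p_prev +
            l[2] * torch.roll(p_prev, +1))

def hard_update(c, h, M, p_prev, r_prev, g_prev, shift_net, goal_nets):
    """c: sampled binary update decision (Algorithm 1)."""
    if c == 1:
        p = shift_pointer(p_prev, shift_net(h))
        r = M @ p                                    # retrieve instruction
        # one categorical sample per subtask argument (Sec. 5.1.2)
        g = [torch.multinomial(F.softmax(net(torch.cat([h, r])), 0), 1)
             for net in goal_nets]
    else:
        p, r, g = p_prev, r_prev, g_prev             # keep current subtask
    return p, r, g

# The differentiable relaxation described next (Algorithm 2) replaces this
# branch by a convex combination weighted by the update probability c_t.
```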
During training of the update decision, we use L1 regularization on the probability of update to penalize frequent updates, as in Vezhnevets et al. (2016). However, the update decision introduces a non-differentiable variable, which is known to be difficult to optimize in practice. Thus, we propose a differentiable relaxation of the update decision. The key idea is to take the weighted sum of both the 'update' and 'no update' scenarios, as described in Algorithm 2. We found that training the meta controller using Algorithm 2, followed by fine-tuning using Algorithm 1, is crucial. Note that Algorithm 2 reduces to Algorithm 1 if we sample $c_t$ and $l_t$ instead of taking the weighted sum, which justifies this initialization trick.

Algorithm 2 Subtask update (Soft)
  Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$    Output: $p_t, r_t, g_t$
  $c_t \leftarrow \varphi^{update}(h_t)$
  $l_t \leftarrow \text{Softmax}(\varphi^{shift}(h_t))$
  $\tilde{p}_t \leftarrow l_t * p_{t-1}$
  $\tilde{r}_t \leftarrow \mathbf{M} \tilde{p}_t$
  $p_t \leftarrow c_t \tilde{p}_t + (1 - c_t) p_{t-1}$
  $r_t \leftarrow c_t \tilde{r}_t + (1 - c_t) r_{t-1}$
  $\pi(g_t^{(i)}) \leftarrow c_t \, \pi(g_t^{(i)} | h_t, \tilde{r}_t) + (1 - c_t) \, \pi(g_{t-1}^{(i)})$  for all $i$

5.3 TRAINING

The meta controller is trained on a training set of lists of instructions. The actor-critic method is used to update the parameters of the meta controller, while a pre-trained subtask controller is given and fixed. Since the meta controller also learns a subtask embedding $\varphi(g_{t-1})$ and has to deal with unseen subtasks during evaluation, we applied analogy-making regularization to its embedding as well. More details of the objective functions are provided in Appendix E.

6 EXPERIMENTS AND RESULTS

Our experiments were designed to explore the following hypotheses: that our proposed hierarchical architecture will generalize better than a non-hierarchical controller, and that analogy-making regularization and learning temporal abstractions in the meta controller will each be beneficial for task generalization. We are also interested in understanding the qualitative properties of our agent's behavior. The demo videos are available at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/task-generalization.

6.1 EXPERIMENTAL SETTING

Environment. We developed a 2D grid world based on MazeBase (Sukhbaatar et al., 2015) where the agent can interact with many objects, as illustrated in Figure 1. Unlike the original MazeBase, an observation is represented as a binary 3D tensor: $x_t \in \mathbb{R}^{18 \times 10 \times 10}$, where 18 is the number of object types and $10 \times 10$ is the size of the grid world. Each channel is a binary mask indicating the presence of each object type. There are the agent, blocks, water, and 15 types of objects with which the agent can interact (see Appendix D), and all of them are randomly placed for each episode.

The agent has 13 primitive actions: No-operation, Move (North/South/West/East, referred to as "NSWE"), Pick up (NSWE), and Transform (NSWE). Move actions move the agent by one cell in the specified direction. Pick up actions remove the adjacent object in the corresponding relative position, and, depending on the object type, Transform actions either remove it or transform it into another object.

The agent receives a time penalty (-0.1) for each time-step. Water cells act as obstacles which give -0.3 when the agent visits them. The agent receives a +1 reward when it finishes all instructions in the correct order. Throughout the episode, an enemy randomly appears, moves, and disappears after 10 steps. Transforming an enemy gives a +0.9 reward. More details are described in Appendix D.
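For concreteness, here is a toy sketch (ours; the grid format and helper names are illustrative assumptions) of how such a binary observation tensor can be built:

```python
# Toy encoding of the 18 x 10 x 10 binary observation described above.
# Object-type indices and the `grid` input format are assumptions.
import numpy as np

N_TYPES, H, W = 18, 10, 10

def encode_observation(grid):
    """grid: H x W array of object-type ids, with -1 for empty cells."""
    x = np.zeros((N_TYPES, H, W), dtype=np.float32)
    for i in range(H):
        for j in range(W):
            t = grid[i][j]
            if t >= 0:
                x[t, i, j] = 1.0  # binary mask for object type t
    return x
```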
Subtasks and Instructions. The subtask space is defined as the Cartesian product of two arguments: $\mathcal{G} = \{\text{Visit}, \text{Pick up}, \text{Transform}\} \times \{X_1, X_2, \dots, X_{15}\}$, where $X_i$ is an object type. The agent should be on the same cell as the target object to finish a 'Visit' task. For 'Pick up' and 'Transform' tasks, the agent should perform the corresponding primitive action on the target object. If there are multiple target objects in the world, the agent can perform the action on any of them.

The instructions are represented as a sequence of sentences, each of which is one of the following: Visit X, Pick up X, Transform X, Pick up all X, and Transform all X, where 'X' is the target object type. While the first three instructions require the agent to perform the corresponding subtask once, the last two instructions require the agent to repeat the same subtask until the target objects completely disappear from the world.

Task Split. Among the 45 subtasks in $\mathcal{G}$, only 30 subtasks are presented to the subtask controller during training. 3 subtasks from the training subtasks and 3 subtasks from the unseen subtasks were selected as the validation set to pick the best-performing subtask controller. For training the meta controller, we created four sets of sequences of instructions: training, validation, and two test sets. The training tasks consist of sequences of up to 4 instructions sampled from the set of training instructions. The validation set consists of sequences of 7 instructions with small overlaps with the training instructions and unseen instructions. The two test sets consist of 20 seen and unseen instructions, respectively. More details of the task split are described in Appendix D.

Table 1: Performance of the subtask controller. 'Analogy' indicates analogy-making regularization. 'Accuracy' is termination-prediction accuracy; a termination prediction is counted as correct only if the predictions are correct throughout the whole episode.

                Train                              Unseen
Agent           Reward   Success   Accuracy        Reward   Success   Accuracy
w/o Analogy     0.56     99.9%     100.0%          -1.88    60.8%     49.6%
w/ Analogy      0.56     99.9%     100.0%          0.55     99.8%     99.6%

Flat Controller. To understand the advantage of the communicating hierarchical structure of our controllers, we trained a flat controller, which is almost identical to the meta controller architecture except that it directly chooses primitive actions without using the subtask controller. Details of the flat controller architecture are described in Appendix F. The flat controller is pre-trained on the training set of subtasks. Specifically, we removed the instruction memory and fed a single instruction as an additional input (i.e., $r_t$ is fixed throughout the episode). We found that the flat controller could not learn any reasonable policy without this pre-training step, which requires modifying the architecture based on domain knowledge. After pre-training, we fine-tuned the flat controller with the instruction memory on lists of instructions. Note that the flat controller is, in principle, also capable of executing instructions as well as dealing with random events.

6.2 TRAINING DETAILS

The subtask controller consists of 3 convolution layers and 2 fully-connected layers and takes the last 2 observations, concatenated along the channel dimension, as input. Each subtask argument ($g^{(i)}$) is linearly transformed, and the transformed arguments are multiplied element-wise to compute the joint subtask embedding. This embedding is further linearly transformed into the weights of the first convolution layer and the weights of the first fully-connected layer.
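A minimal sketch (ours; all layer sizes below are invented for illustration) of this kind of parameter prediction, where a joint subtask embedding generates the first convolution layer's weights:

```python
# Sketch of predicting a conv layer's weights from a joint subtask
# embedding via multiplicative interaction (sizes are illustrative).
import torch
import torch.nn.functional as F

emb_dim, c_in, c_out, k = 64, 18, 32, 3

class PredictedConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # linear maps from each one-hot argument to a shared embedding space
        self.arg1 = torch.nn.Linear(3, emb_dim)    # e.g. {Visit, Pick up, Transform}
        self.arg2 = torch.nn.Linear(15, emb_dim)   # e.g. 15 object types
        # linear map from the joint embedding to the conv kernel
        self.to_w = torch.nn.Linear(emb_dim, c_out * c_in * k * k)

    def forward(self, x, g1_onehot, g2_onehot):
        """x: (1, c_in, H, W); a batch would need grouped convolutions."""
        e = self.arg1(g1_onehot) * self.arg2(g2_onehot)   # multiplicative join
        w = self.to_w(e).view(c_out, c_in, k, k)          # predicted weights
        return F.conv2d(x, w, padding=1)
```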
The meta controller takes the current observation as input and has 2 convolution layers and 2 fully-connected layers, where the parameters of the first convolution layer and the first fully-connected layer are predicted from the joint embedding of $r_{t-1}$, $\varphi(g_{t-1})$, and $b_t$.

We implemented synchronous actor-critic with 16 CPU threads based on MazeBase (Sukhbaatar et al., 2015), each of which samples a mini-batch of $K$ episodes in parallel. The parameters are updated after every $16 \times K$ episodes. The details of the architectures and hyperparameters are described in Appendix F.

Curriculum Learning via a Forgiving World. We conducted curriculum training by changing the size of the grid world, the density of objects, and the number of instructions according to the agent's success rate. In addition, we trained the soft-architectures in an easier, forgiving environment which regenerates target objects whenever they do not exist. Crucially, this allows the agent to recover from past mistakes in which it removed needed target objects. The soft-architectures are fine-tuned on the original (and far more unforgiving) environment, which does not regenerate target objects in the middle of an episode. Training directly in the original environment without first training in the forgiving environment leads to too many failures at executing the task, and the agent does not learn successfully. Finally, the hard-architectures are initialized with the soft-architectures and further fine-tuned on the original environment.

6.3 EVALUATION OF SUBTASK CONTROLLER

To see how well the subtask controller performs separately from the meta controller, we evaluated it on the training set of subtasks and on unseen subtasks (Table 1). Analogy-making regularization turns out to be crucial for generalization to unseen subtasks. This result suggests that analogy-making regularization plays an important role in learning the relationship between different subtasks and enabling generalization to unseen subtasks.

In addition, we observed that the subtask controller learned a non-trivial policy by exploiting causal relationships. For example, when [Pick up, egg] is given as the subtask arguments but a duck is very close to the agent, it learned to transform the duck and pick up the resulting egg, because transforming a duck turns it into an egg in our environment. More analysis of the subtask controller and the effect of analogy-making regularization is given in Appendices A and B.

Table 2: Performance of the meta controller. Each column corresponds to a different evaluation set of instructions; each row corresponds to a different configuration of our architecture or the flat controller. Test #3 and Test #4 do not include 'Transform/Pick up all X' instructions. 'TA' indicates the meta controller with temporal abstraction. Each entry shows reward, with success rate in parentheses, averaged over the 10 best of 20 independent runs. 'Shortest Path' is a hand-designed policy which executes instructions optimally based on the shortest path but ignores enemies. 'Near-Optimal' is a near-optimal policy that executes instructions based on the shortest path and transforms enemies when they are close to the agent. (These two hand-designed policies do not distinguish seen from unseen instructions; their three reported values appear to correspond to Train, the 20-instruction tests, and the 20-instruction 'w/o all' tests.) 'Forgiving' rows show results on the forgiving environment used for curriculum learning, where target objects are regenerated whenever they do not exist in the world.

                               Train           Test #1          Test #2          Test #3          Test #4
Set of instructions            Seen            Seen             Unseen           Seen w/o all     Unseen w/o all
Num of instructions            4               20               20               20               20
Forgiving
  Shortest Path                -1.56 (99.6%)   -11.94 (99.1%)                    -9.62 (99.1%)
  Near-Optimal                 -0.96 (99.6%)   -9.99 (99.1%)                     -8.19 (99.1%)
  Flat                         -1.64 (85.8%)   -14.53 (65.9%)   -17.25 (23.7%)   -12.38 (60.4%)   -14.18 (16.7%)
  Hierarchical-TA-Analogy      -1.05 (92.4%)   -11.06 (86.2%)   -13.69 (51.2%)   -8.54 (91.9%)    -9.91 (75.2%)
Original
  Shortest Path                -1.62 (99.7%)   -11.94 (99.4%)                    -8.72 (99.6%)
  Near-Optimal                 -1.34 (99.5%)   -10.30 (99.3%)                    -7.62 (99.4%)
  Flat                         -2.38 (76.0%)   -18.83 (0.1%)    -18.92 (0.0%)    -15.09 (0.0%)    -15.17 (0.0%)
  Hierarchical                 -2.04 (72.8%)   -16.85 (16.6%)   -17.66 (6.9%)    -10.99 (49.4%)   -11.40 (47.4%)
  Hierarchical-Analogy         -1.74 (81.0%)   -15.89 (28.0%)   -17.23 (11.3%)   -10.11 (61.8%)   -10.66 (57.7%)
  Hierarchical-TA              -1.38 (92.6%)   -12.96 (62.9%)   -17.19 (13.0%)   -9.11 (74.4%)    -10.37 (61.2%)
  Hierarchical-TA-Analogy      -1.26 (95.5%)   -11.30 (81.3%)   -14.75 (40.3%)   -8.24 (85.5%)    -9.51 (73.9%)

[Figure 4: Performance per number of instructions. From left to right, the plots show reward, success rate, the number of steps, and the average number of instructions completed. Solid and dashed curves show the performance on seen and unseen instructions, respectively.]

6.4 EVALUATION OF META CONTROLLER

We evaluated the meta controller separately from the subtask controller by providing the best-performing subtask controller during both training and evaluation. The results are summarized in Table 2 and Figure 4. Note that there is a discrepancy between reward and success rate, because success rate is measured only on instruction execution, while reward also takes into account the background task (i.e., handling the randomly appearing enemy).

Overall performance. Table 2 shows that our hierarchical agent with temporal abstraction and analogy-making regularization, denoted Hierarchical-TA-Analogy in the table, can handle 20 seen instructions (Test #1) and 20 unseen instructions (Test #2) correctly with reasonably high success rates. In addition, that agent learned to deal with enemies whenever they appear, and thus it outperforms the 'Shortest Path' policy, which is near-optimal at executing instructions while ignoring enemies. We further investigated how the number of instructions affects performance in Figure 4. Although performance degrades as the number of instructions increases, our architecture finishes 18 of 20 seen instructions and 12 of 20 unseen instructions on average. These results show that our agent is able to generalize to longer compositions of instructions as well as to unseen instructions by just learning to solve short sequences of a subset of instructions.

Flat vs. Hierarchy. All our hierarchical controllers outperform the flat controller both on the training tasks and on longer/unseen instructions (see Table 2). We observed that the flat controller learned a sub-optimal policy that treats 'Transform/Pick up X' instructions as if they were 'Transform/Pick up all X' instructions; in other words, it always transforms or picks up all existing targets.
Although this simple strategy is a reasonable sub-optimal policy, because such wrong actions are not explicitly penalized in our environment other than through the accumulating per-time-step penalty, it often unnecessarily removes objects that can potentially be target objects of future instructions. This is why the flat controller performs reasonably well on short sequences of instructions (training), where such cases are rare, and in the forgiving environment, where target objects are restored whenever needed, but completely fails on longer instructions in the original environment, where the entire task becomes unsolvable once target objects are removed in error. This implies that the flat controller struggles with precisely detecting when a subtask is finished, whereas our hierarchical controllers can easily detect when a subtask is done, because the subtask controller in our communicating architecture provides a termination signal to the meta controller.

[Figure 5: Analysis of the learned policy. 'Update' shows the agent's internal update decision. 'Shift' shows the agent's memory-shift decision, which is either -1, 0, or +1 from top to bottom. The bottom text shows the instruction indicated by the memory pointer, while the top text shows the subtask chosen by the meta controller. (A) The agent transforms the pig given the 'Transform Pig' instruction, decides to update the subtask (Update is true), and moves to the next instruction. (B) An enemy (red) appears while the agent is executing the 'Pick up all meat' instruction (green boxes for meat); the agent changes the subtask to [Transform, enemy]. (C) The agent successfully transforms the enemy and sets the subtask to [Pick up, meat] to resume executing the instruction. (D) The agent picks up the last meat in the world, moves the memory pointer to the next instruction, and sets a new subtask according to the next instruction.]

In addition, the flat controller tends to ignore enemies, while the hierarchical controllers try to deal with enemies whenever they exist by changing the subtask-arguments communicated by the meta controller to the subtask controller, which is a better strategy for maximizing reward. The flat controller instead has to use primitive actions to deal with both instructions and enemies. This implies that our communicating hierarchical controllers are better suited to context switching between different sources of tasks (i.e., executing instructions and dealing with enemies).

Finally, we observed that the flat controller often makes mistakes on unseen instructions (e.g., transforming X given 'Visit X' as the instruction). In contrast, the hierarchical controllers do not make such mistakes, as the subtask controller generalizes well to unseen instructions, as discussed in Section 6.3.

Effect of Analogy-making. Table 2 shows that analogy-making significantly improves generalization performance, especially on Test #2 (Hierarchical-Analogy outperforms Hierarchical, and Hierarchical-TA-Analogy outperforms Hierarchical-TA). This implies that, given an unseen target object for a 'Transform/Pick up all' instruction, the meta controller without analogy-making tends to fail to check whether the target object still exists. On the other hand, there is almost no improvement from analogy-making on Test #3 and Test #4, where there are no 'all' instructions.
This is because the meta controller can simply rely on the subtask termination signal ($b_t$) given by the subtask controller to check whether the current instruction is finished for non-'all' instructions, and the subtask controller (trained with analogy-making) successfully generalizes to unseen subtasks and provides accurate termination signals to the meta controller. The empirical result that analogy-making consistently improves generalization performance over both non-analogy-making controllers suggests that analogy-making is crucial for generalization to unseen tasks.

Effect of Temporal Abstraction. To see the effect of temporal abstractions, we trained a baseline that updates the memory pointer and the subtask at every time-step, shown as 'Hierarchical' and 'Hierarchical-Analogy' in Table 2. It turns out that the agent without temporal abstractions performs much worse both on the training tasks and on the testing tasks. We hypothesize that temporal credit assignment becomes easier with temporal abstractions because the subtask updater (described in Section 5.1.2) can operate at a larger time-scale by decoupling the update decision from the subtask selection. In particular, given 'all' instructions, the agent should repeat the same subtask while not changing the memory pointer for a long time, and the reward is even more delayed. This can confuse the subtask updater without temporal abstractions, because it has to make the same decision at every time-step of such instructions. In contrast, the subtask updater with temporal abstractions gets direct feedback from the long-term future, since one decision made by the subtask updater results in multiple primitive actions. We conjecture that this is why the agents learn more stably with temporal abstractions under delayed reward.

[Figure 6: Learned policy in the 3D environment. The agent observes 'First-person-view' images; the top-down view is not available to the agent. The text on the right shows the list of instructions. (A) The agent cannot see the target block (blue) at this point due to the partially observable nature of the environment and the randomness of the topology; the agent learned to explore the map to find the target block. (B) Although the current instruction is 'Transform purple', the agent decides to transform the green block because transforming a green block gives a large positive reward (stochastic event). (C) After dealing with the stochastic event, the agent resumes executing the instruction (Transform purple). (D) The agent finishes the whole list of instructions.]

Table 3: Performance on the 3D environment.
                        Train           Test #1          Test #2
Set of instructions     Seen            Seen             Unseen
Num of instructions     4               20               20
Flat                    -1.87 (92.2%)   -22.35 (68.7%)   -39.24 (0.0%)
Ours                    -1.41 (95.0%)   -15.60 (92.2%)   -17.80 (84.3%)

Analysis of the Learned Policy. We visualized our agent's behavior on a task with a long list of instructions in Figure 5. We observed that our meta controller learned to communicate the correct subtask-arguments to the subtask controller and learned to move precisely to the next instruction by shifting the memory pointer whenever an instruction is finished.
More interestingly, whenever an enemy appears, our meta controller immediately changes the subtask to [Transform, enemy] regardless of the instruction, and resumes executing the instruction after dealing with the enemy. Throughout the background task and the 'all' instructions, the meta controller keeps the memory pointer unchanged, as illustrated in (B-D) in the figure. In addition, the agent learned to update the memory pointer and the subtask-argument almost only when needed, which provides the subtask updater with temporally-extended actions. This is not only computationally efficient but also useful for learning a better policy, as discussed above.

6.5 EVALUATION IN 3D VISUAL ENVIRONMENT

We developed a similar set of tasks in a Minecraft environment based on Oh et al. (2016), as shown in Figure 6. In this environment the agent can observe only first-person-view images, which naturally involves partial observability; even executing a simple instruction (e.g., Visit X) requires the agent to explore the topology to find the target.

An observation is represented as a 64x64 RGB image ($x_t \in \mathbb{R}^{3 \times 64 \times 64}$). There are 7 different types of colored blocks: red, blue, green, yellow, brown, purple, and black, which correspond to the different types of objects in the grid world experiment. As in the 2D grid world environment, the topology of walls and the colored blocks is randomly generated for every episode. A wall not only acts as an obstacle but also occludes the objects behind it, as shown in Figure 6, which makes the task more challenging.

The agent has 9 actions: Look (Left/Right/Up/Down), Move (Forward/Backward), Pick up, Transform, and No operation. Look left/right actions change the yaw of the agent by 90 degrees, while Look up/down actions change the pitch of the agent by 45 degrees. Move forward/backward actions move the agent by one block according to the agent's looking direction. Pick up removes the block in front of the agent, and Transform changes the block in front of the agent to the black-colored block.

We used the same reward function as in the 2D grid world experiment. In addition, a green block randomly appears, and transforming a green block gives a +0.9 reward regardless of instructions, which acts as a stochastic event. Each instruction is one of the following: Visit X, Pick up X, and Transform X, where 'X' is the target color. We excluded 'all' instructions in this environment because we found that solving 'all' instructions within a limited amount of time is extremely challenging even for humans due to the partial observability.

We used almost the same architectures as in the 2D grid world experiment, except that a long short-term memory (Hochreiter and Schmidhuber, 1997) is added on top of the final convolution layer in both the subtask controller and the meta controller, as it is one of the simplest ways to deal with partial observability (Hausknecht and Stone, 2015; Mnih et al., 2016; Oh et al., 2016). We followed the same training scheme used in the 2D grid world experiment.

Table 3 shows that our proposed architecture significantly outperforms the flat controller baseline, especially on the test sets of instructions. We observed that the flat controller tends to struggle with detecting when an instruction is finished and completely fails on unseen instructions, while our architecture performs well on unseen and longer instructions.
As shown in Figure 6, our architecture learned to find the target blocks, detect when an instruction is finished, and deal with the stochastic event. This result demonstrates that the proposed approach can also be applied to a more complex visual environment.

7 CONCLUSION

In this paper, we explored zero-shot task generalization in RL with a new problem where the agent is required to execute a sequence of instructions and to generalize over longer sequences of (unseen) instructions without additional learning. To solve the problem, we presented a hierarchical deep RL architecture in which a meta controller learns a closed-loop policy of subtask-argument communications to a subtask controller, which executes the given subtask and communicates its accomplishment back to the meta controller. Our architecture not only generalizes to unseen tasks after training but also deals with random events relevant to a background task. In addition, we proposed several techniques that led to improvements in both training and generalization performance. First, analogy-making regularization turned out to be crucial for generalization to unseen subtasks. Second, learning temporal abstractions improved the performance by making the subtask updater operate at a larger time-scale. One interesting line of future work would be to define and solve richer task instructions such as conditional statements (e.g., IF-THEN-ELSE) and loop instructions (e.g., collect 3 target objects). Moreover, end-to-end training of the whole hierarchy and discovering the subtask decomposition would be important future work.
rJD_Y3GNg
RL by learning to take advice
5: Marginally below acceptance threshold
This paper can be seen as instantiating a famous paper by the founder of AI, John McCarthy, on learning to take advice (an idea studied in depth by later researchers, such as Jack Mostow in the card game Hearts). The idea is that the agent is given high-level instructions on how to solve a problem, and must distill from them a low-level policy. This is quite related to how humans learn complex tasks in many domains (e.g., driving, where a driving instructor may provide advice such as "keep a certain distance from the car in front"). A fairly complex deep neural controller architecture is used, although the sheer number of details presented makes the description of this system somewhat confusing. A simpler approach might have been easier to follow, at least initially. The experiments unfortunately are on a rather simplistic 2D maze, and it would have been worthwhile to see how the approach scales to more complex tasks of the sort usually seen in deep RL papers these days (e.g., Atari, physics simulators, etc.). Nice overall idea, somewhat confusing description of the solution, and an inadequate set of experiments on a less than satisfactory domain of 2D grid worlds.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
SJttqw5ge
ICLR.cc/2017/conference
2017
Communicating Hierarchical Neural Controllers for Learning Zero-shot Task Generalization
["Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli"]
The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.
["Reinforcement Learning", "Deep learning"]
ABSTRACT

The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages learning subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.

1 INTRODUCTION

Humans can often generalize to novel tasks even without any additional learning by leveraging past learning experience. We would like our artificial agents to have similar “zero-shot” generalization capabilities. For example, after learning to solve tasks with instructions such as ‘Go to X (or Y)’ and ‘Pick up Y (or Z)’, our agent should be able to infer the underlying goal of new tasks with instructions like ‘Go to Z’, which requires disentangling the verbs (‘Go to/Pick up’) and the nouns/objects (‘X, Y, Z’). Furthermore, we would like our agents to learn to compose policies to solve novel tasks composed of sequences of seen and unseen instructions. Developing the ability to achieve such generalizations is a key challenge in artificial intelligence and the subfield of reinforcement learning (RL).

Figure 1: Example of grid-world and instructions. The agent is tasked to execute longer sequences of instructions after being trained on short sequences of instructions; in addition, previously unseen instructions can be given during evaluation (blue text). The agent can get more rewards if it deals with randomly appearing enemies (red outlined box) regardless of current instructions.

In this paper, we study the problem of zero-shot task generalization in RL by introducing the “instruction execution” problem, where the agent is required to learn through interaction with its environment how to achieve an overall task specified by a list of high-level instructions (see Figure 1). As motivation for this problem, consider a human owner training its new household robot to execute complex tasks specified by natural language text that decompose the task into a sequence of instructions. Given that it is infeasible to explicitly train the robot on all possible instruction-sequences, this problem involves two types of generalizations: to unseen and longer sequences of previously seen instructions, and to sequences where some of the instructions themselves were previously not seen. Of course, the usual RL problem of learning policies through interaction to accomplish the goals of an instruction remains part of the problem as well.
We assume that the agent does not receive any signal on completing or failing to complete individual instructions from the environment/owner, and so the informative reward signal is delayed until the end. Furthermore, there can be random events in the environment that require the agent to interrupt whatever it is doing and deviate from the instructions to maintain some background task, as described in Figure 1. Altogether this makes for a challenging zero-shot task generalization RL problem.

Brief Background: RL tasks composed of sequences of subtasks have been studied before, and a number of hierarchical RL approaches have been designed for them. Typically these have the form of a meta controller and a set of lower-level controllers for subtasks (Sutton et al., 1999; Dietterich, 2000; Parr and Russell, 1997). The meta controller is limited to selecting one from a set of lower-level controllers to employ at any time. This makes it impossible for the low-level controller to generalize to new subtasks without training a new low-level controller separately. Much of the previous work also assumes that the overall task is fixed (e.g., the Taxi domain (Dietterich, 2000; Ghavamzadeh and Mahadevan, 2003)). Transfer learning across multiple compositional tasks has typically been studied in RL formulations in which new tasks are only presented via a new reward function from the environment (Singh, 1991; 1992), and so there is no opportunity for fast model-free generalization. To the best of our knowledge, zero-shot model-free generalization to new or longer tasks as well as unseen tasks has not been well-studied in the RL setting.

Our Architecture: This paper presents a hierarchical deep RL architecture (see Figure 2) that consists of two interacting neural controllers: a meta controller that repeatedly chooses an instruction and, conditioned on the current state of the environment, translates it into subtask-arguments (details on this in later sections) and communicates those to the subtask controller that in turn chooses primitive actions given the subtask. This makes the subtask controller a parameterized option (Sutton et al., 1999) module in which the parameters are the subtask-arguments mentioned above. On top of the subtask controller, the meta controller is trained to select proper subtask-arguments depending on observations from the environment, feedback from the subtask controller about termination, and the task instructions. In order to generalize over unseen instructions, we propose analogy-making regularization (discussed in Section 4.1), which encourages learning subtask embeddings that capture correspondences between similar subtasks. In addition, we propose a new differentiable neural architecture in the meta controller that implicitly learns temporal abstractions so that it can operate at a larger time-scale and update the subtask-arguments to the subtask controller only when needed.

Our Results: We developed a 2D grid world environment where the agent can interact with many objects, as illustrated in Figure 1, based on MazeBase (Sukhbaatar et al., 2015) (see Section 6.1 for details). The empirical results show that the meta controller’s ability to learn temporal abstractions and a form of analogy-making regularization were both key in allowing our hierarchical architecture to generalize in a zero-shot fashion to unseen tasks.
We also demonstrated that the same architecture can generalize to unseen and longer instructions in a 3D visual environment.

2 RELATED WORK

Hierarchical Reinforcement Learning. In addition to the hierarchical RL described in Section 1, there is a line of work on portable options for solving sequential tasks (Konidaris et al., 2012; Konidaris and Barto, 2007). They proposed agent-space options that can be re-used to deal with new problems. However, the optimal sequence of options (e.g., picking up a key followed by opening a door) is fixed throughout training and evaluation in their problem. On the other hand, the agent is required to perform new sequences of tasks depending on given instructions in our work. Our work is also closely related to Programmable HAM (PHAM) (Andre and Russell, 2000; 2002) in that PHAM is designed to execute a given program. However, the program explicitly specifies the policy in PHAM, which effectively reduces the state-action space. In contrast, a list of instructions is a partial description of the task in our work, which means that the policy is not forced to follow the instructions but to use them as a guide to maximize its reward. For example, interrupt conditions need to be manually specified by the program in PHAM, while they are not specified in the instructions but should be learned by the agent in our framework.

Hierarchical RL has recently been combined with deep learning. Kulkarni et al. (2016) proposed hierarchical Deep Q-Learning and demonstrated improved exploration in a challenging Atari game. Tessler et al. (2016) proposed a similar architecture that allows the high-level controller to choose primitive actions directly. Bacon and Precup (2015) proposed the option-critic architecture, which learns options without any domain knowledge, and demonstrated that it can learn distinct options in Atari games. Vezhnevets et al. (2016) proposed a deep architecture that automatically learns macro-actions. Unlike these recent works that aim to solve a single task, the goal of our work is to build a multi-task policy that can generalize over many different sequences of tasks.

Zero-shot Task Generalization and Parameterized Option. There have been only a few studies that aim to generalize over new tasks in a zero-shot fashion (i.e., without additional learning). da Silva et al. (2012) proposed the concept of parameterized skill, which maps a set of task descriptions to policies. Similarly, Isele et al. (2016) proposed a method for zero-shot task generalization which uses task descriptors to predict the parameters of the policy, together with coupled dictionary learning with sparsity constraints to enable zero-shot learning. Schaul et al. (2015) proposed universal value function approximators (UVFA) that learn a value function given a state and goal pair and showed that their framework can generalize over unseen goals. Borsa et al. (2016) proposed to learn a representation of state and action shared across different tasks. However, the proposed approach lacks the ability to solve new tasks in a zero-shot way. Our subtask controller implements the idea of a parameterized skill or universal option. Unlike the previous works, however, we propose to build a high-level controller (meta controller) on top of the subtask controller to deal with sequential tasks.

Instruction Execution. There has been a line of work on building agents that can execute natural language instructions: Tellex et al. (2011; 2014) for robotics and MacMahon et al.
(2006); Chen and Mooney (2011); Mei et al. (2015) for a simulated environment. However, these approaches focus on natural language understanding to map instructions to a sequence of actions or groundings in a supervised setting. In contrast, we focus on generalization to different sequences of instructions without any supervision for language understanding or for actions. Branavan et al. (2009) also tackle a similar problem of mapping from natural language instructions to a sequence of actions through RL. However, the agent is given a single sentence at a time from the environment, while the agent has to deal with a full list of instructions in our problem. In addition, they do not discuss how to deal with unseen instructions, which is the main focus of our paper.

3 OVERVIEW

Figure 2: Overview of our architecture.

Goal. We aim to learn a multi-task policy $\pi$, which is a mapping $\pi : S \times \mathcal{M} \rightarrow A$ where $S$ is a set of states (or observations), $\mathcal{M}$ is a set of lists of instructions, and $A$ is a set of primitive actions. More importantly, since $\mathcal{M}$ can be arbitrarily large, our goal is to find an optimal policy $\pi$ on a very small set of lists of instructions $\mathcal{M}' \subset \mathcal{M}$ such that $\pi$ is also optimal on the entire set of lists of instructions $\mathcal{M}$.

Hierarchical Structure and Communication Protocol. As illustrated in Figure 2, the proposed architecture consists of a meta controller, which selects a subtask, and a subtask controller, which executes the given subtask. The subtask is further decomposed into several arguments. More specifically, a space of subtasks $G$ is defined using the Cartesian product of their arguments $G^{(1)} \times \cdots \times G^{(n)}$, where $G^{(i)}$ is a set of the $i$-th arguments (e.g., $G = \{\text{Visit}, \text{Pick up}\} \times \{A, B\}$). In addition, the subtask controller provides useful information to the meta controller by giving a terminal signal for the given subtask. This communication protocol allows each controller not only to focus on its own independent role but also to communicate with the other to learn a complex closed-loop policy.

Subtask Controller. The subtask controller is a mapping $S \times G \rightarrow A \times B$, which maps a state and a subtask to an action and a termination signal ($B = \{0, 1\}$) indicating whether the subtask is finished or not. The subtask controller is trained prior to training the meta controller. The main challenge for the subtask controller is that only a subset of subtasks ($U \subset G$) is observed during training, so it should be able to generalize over unseen subtasks without experiencing them. Section 4 describes how to construct the subtask architecture parameterized by a neural network and discusses how to generalize over unseen subtasks.

Meta Controller. The meta controller is a mapping $S \times \mathcal{M} \times G \times B \rightarrow G$, which decides a subtask from a state, a list of instructions, the subtask that is currently being executed, and whether that subtask is finished. Thus, the meta controller should understand natural language instructions and pass proper subtask arguments to the subtask controller.

Figure 3: Proposed neural network architectures: (a) subtask controller; (b) meta controller. See text for details.

It is important to note that natural language instructions are not directly subtasks; indeed, there is not a one-to-one correspondence between instructions and subtask-arguments. This is due to a number of important reasons.
First, instructions such as ‘Pick up all X’ are executed by repeatedly solving a subtask [Pick up, X]. Second, the meta controller sometimes needs to interrupt ongoing subtasks and replace them with other subtasks that are not relevant to the instruction, because of the background task based on the stochastic events described in Figure 1.

Another challenge for the meta controller is that it should deal with the partial observability induced by the list of instructions. This is because the agent is not told by the environment which instruction to execute at each time-step but is given just a full list of instructions. Thus, the meta controller should remember how many instructions it has executed and decide when to move to the next instruction. Section 5.1 describes how to construct a memory-based neural network to deal with this challenge.

Finally, it is desirable for the meta controller to operate at a larger time-scale due to the fact that a subtask does not change frequently once it is chosen. We describe a novel way to implicitly learn such a temporal scale of the meta controller through neural networks in Section 5.2.

4 SUBTASK CONTROLLER

Given an observation $s_t \in S$ and subtask arguments $g = [g^{(1)}, \ldots, g^{(n)}] \in G$, the subtask controller is defined by the following functions:

Policy: $\pi_\phi(a_t \mid s_t, g)$    Termination: $\beta_\phi(b_t \mid s_t, g) = P(s_t \in \mathcal{T}_g)$

where $\pi_\phi$ is the policy optimized for the subtask, $\beta_\phi$ is a termination function giving the probability that the state is terminal for the given subtask, and $\mathcal{T}_g$ is the set of terminal states. The subtask controller is parameterized by $\phi$, which is represented by a neural network as illustrated in Figure 3a. The network learns a representation of the subtask, $\varphi(g)$, which is used to condition the entire network through multiplicative interactions as suggested by Memisevic and Hinton (2010); Lei Ba et al. (2015); Bertinetto et al. (2016). Further details are described in Appendix F.

4.1 LEARNING TO GENERALIZE BY ANALOGY-MAKING

When learning a non-linear subtask embedding from arguments, $\varphi(g)$, it is desirable for the network to learn prior knowledge about the relationship between different subtask arguments in order to infer the goal of unseen configurations of arguments. To this end, we propose a novel analogy-making regularizer inspired by Reed et al. (2015); Hadsell et al. (2006); Reed et al. (2014). The main idea is to learn correspondences between subtasks. For example, if target objects and ‘Visit/Pick up’ tasks are independent, we can enforce [Visit, X] : [Visit, Y] :: [Pick up, X] : [Pick up, Y] for any X and Y in the embedding space, so that the agent learns to perform [Pick up, Y] as it performs [Pick up, X] and vice versa.

More specifically, we define several constraints as follows:

$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \approx 0$  if $g_A : g_B :: g_C : g_D$  (1)
$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \geq \tau_{dis}$  if $g_A : g_B \neq g_C : g_D$  (2)
$\|\varphi(g_A) - \varphi(g_B)\| \geq \tau_{diff}$  if $g_A \neq g_B$  (3)

where $g_k = [g_k^{(1)}, g_k^{(2)}, \ldots, g_k^{(n)}] \in G$ are subtask arguments. Eq. (1) represents the analogy-making relationship, while Eq. (2) and Eq. (3) prevent trivial solutions. To satisfy the above constraints, we propose the following objective functions based on the contrastive loss (Hadsell et al., 2006):

$\mathcal{L}_{sim} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{sim}} \left[ \|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|^2 \right]$  (4)
$\mathcal{L}_{dis} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{dis}} \left[ \max(0, \tau_{dis} - \|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|)^2 \right]$  (5)
$\mathcal{L}_{diff} = \mathbb{E}_{(g_A, g_B) \sim \mathcal{G}_{diff}} \left[ \max(0, \tau_{diff} - \|\varphi(g_A) - \varphi(g_B)\|)^2 \right]$  (6)

where $\mathcal{G}_{sim}, \mathcal{G}_{dis}, \mathcal{G}_{diff}$ consist of subtask arguments satisfying the conditions in Eq. (1), Eq. (2) and Eq. (3) respectively, and $\tau_{dis}, \tau_{diff}$ are threshold distances (hyperparameters).
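To make the three objectives concrete, the following is a minimal PyTorch sketch of Eq. (4)-(6). The embedding network phi, the batch format, and the margin values are illustrative assumptions, not the authors' implementation.

import torch

def analogy_losses(phi, quads_sim, quads_dis, pairs_diff,
                   tau_dis=1.0, tau_diff=1.0):
    # phi maps a batch of subtask arguments to embeddings of shape (B, E).
    # quads_* are 4-tuples (gA, gB, gC, gD); pairs_diff is a pair (gA, gB).
    # tau_dis / tau_diff are the margin hyperparameters of Eq. (2)-(3).
    def parallelogram(gA, gB, gC, gD):
        return phi(gA) - phi(gB) - phi(gC) + phi(gD)

    # Eq. (4): analogous quadruples should close the parallelogram.
    l_sim = parallelogram(*quads_sim).pow(2).sum(dim=1).mean()

    # Eq. (5): non-analogous quadruples are pushed at least tau_dis apart.
    d = parallelogram(*quads_dis).norm(dim=1)
    l_dis = torch.clamp(tau_dis - d, min=0).pow(2).mean()

    # Eq. (6): embeddings of distinct subtasks keep a minimum distance.
    gA, gB = pairs_diff
    l_diff = torch.clamp(tau_diff - (phi(gA) - phi(gB)).norm(dim=1),
                         min=0).pow(2).mean()
    return l_sim, l_dis, l_diff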
The final analogy-making regularizer is the weighted sum of the above three objectives.

Analogies Under Non-independence. Although we use the analogy-making regularizer under the assumption that all configurations of subtask arguments are valid and independent from each other throughout the main experiment, our analogy-making regularizer can also be used to inject prior knowledge so that the agent generalizes to unseen subtasks in a specific way. For example, if some objects should be handled in a different way given the same subtask, we can apply the analogy-making regularizer so that Eq. (1) is satisfied only between objects of the same type. This is further discussed in Appendix B.

4.2 TRAINING

The subtask controller is trained on a subset of subtasks ($U \subset G$) by directly providing subtask arguments. The policy of the subtask controller is trained through the actor-critic method (Konda and Tsitsiklis, 1999) with generalized advantage estimation (GAE) (Schulman et al., 2015). We also found that pre-training the subtask controller through policy distillation (Rusu et al., 2015; Parisotto et al., 2015) gives slightly better results. The idea of policy distillation is to train separate policies for each subtask and use them to provide action labels to train the subtask controller. Throughout training, the subtask controller is also made to predict whether the current state is terminal or not through a binary classification objective, and the analogy-making regularizer is applied to the subtask embedding separately. The full details of the learning objectives are described in Appendix E.

5 META CONTROLLER

The role of the meta controller is to decide subtask arguments $g_t \in G$ from an observation $s_t \in S$, a list of instructions $M \in \mathcal{M}$, the previously selected subtask $g_{t-1}$, and its termination signal ($b_t$) from the subtask controller. Section 5.1 describes the overall architecture of the meta controller for dealing with the partial observability induced by the list of instructions, as discussed in Section 3. We describe a novel way to learn the time-scale of the meta controller so that it can implicitly operate at a large time-scale in Section 5.2.

5.1 ARCHITECTURE

In order to keep track of its progress on instruction execution, the meta controller maintains its internal state by computing a context vector (described in Section 5.1.1) and by focusing on one instruction at a time from the list of instructions $M$ (described in Section 5.1.2). The entire architecture is illustrated in Figure 3b, and further details are described in Appendix F.

5.1.1 CONTEXT

Given the sentence embedding $r_{t-1}$ retrieved at the previous time-step from the instructions (described in Section 5.1.2), the previously selected subtask $g_{t-1}$, and the subtask termination $b_t \sim \beta(b_t \mid s_t, g_{t-1})$, the meta controller computes the context vector ($h_t$) through a neural network:

$h_t = f_\theta(s_t, r_{t-1}, g_{t-1}, b_t)$

where $f_\theta$ is a neural network parameterized by $\theta$. Intuitively, $g_{t-1}$ and $b_t$ provide information about which subtask was being solved by the subtask controller and whether it has been finished or not. Note that the subtask does not necessarily match the retrieved instruction ($r_{t-1}$), e.g., when the agent is dealing with the background task.
By combining all the information, $h_t$ encodes the spatio-temporal context, which is used to determine which instruction to solve and the next subtask.

5.1.2 SUBTASK UPDATER

The meta controller has a subtask updater that constructs a memory structure from the list of instructions, retrieves an instruction by maintaining a pointer into the memory structure, and computes the subtask arguments.

Instruction Memory. Given instructions as a list of sentences $M = (m_1, m_2, \ldots, m_K)$, where each sentence consists of a list of words, $m_i = (w_1, \ldots, w_{|m_i|})$, the subtask updater constructs memory blocks $\mathbf{M} \in \mathbb{R}^{E \times K}$, where each column is an $E$-dimensional embedding of a sentence. The subtask module maintains a memory pointer defined over memory locations, $p_t \in \mathbb{R}^K$, which is used for instruction retrieval. Memory construction and retrieval are formally described as:

Memory: $\mathbf{M} = [\varphi^w(m_1), \varphi^w(m_2), \ldots, \varphi^w(m_K)]$    Retrieval: $r_t = \mathbf{M} p_t$.

Here $\varphi^w(m_i) \in \mathbb{R}^E$ is the embedding of the $i$-th sentence (e.g., bag-of-words). The memory pointer $p_t$ is a non-negative vector which sums up to 1, and $r_t \in \mathbb{R}^E$ is the retrieved sentence embedding, which is used for computing the subtask-arguments. Intuitively, if the memory pointer is a one-hot vector, $r_t$ indicates a single instruction from the whole list of instructions. The meta controller should learn to manage $p_t$ so that it can focus on the correct instruction at each time-step, which is further described below.

Location-based Memory Addressing. Since instructions should be executed sequentially, we use a location-based memory addressing mechanism (Zaremba and Sutskever, 2015; Graves et al., 2014) to manage the memory pointer. Specifically, the subtask updater shifts the memory pointer within $[-1, 1]$ as:

$p_t = l_t * p_{t-1}$  where  $l_t \sim \mathrm{Softmax}(\varphi^{shift}(h_t))$  (7)

where $*$ is a convolution operator and $\varphi^{shift}$ is a multi-layer perceptron (MLP). $l_t \in \mathbb{R}^3$ is an internal action that shifts the memory pointer ($p_t$) by either -1, 0, or +1. This mechanism is illustrated in Figure 9b.

Subtask Arguments. The subtask updater takes the context ($h_t$), updates the memory pointer ($p_t$), retrieves a sentence embedding ($r_t$), and finally computes the subtask-arguments as follows:

$\pi_\theta(g_t \mid h_t, r_t) = \prod_i \pi_\theta(g_t^{(i)} \mid h_t, r_t)$  where  $\pi_\theta(g_t^{(i)} \mid h_t, r_t) \propto \exp(\varphi^{goal_i}(h_t, r_t))$

where $\varphi^{goal_i}$ is an MLP for the $i$-th subtask argument.

5.2 DIFFERENTIABLE TEMPORAL ABSTRACTIONS

Although the subtask updater can update the memory pointer and compute correct subtask-arguments in principle, making a decision at every time-step can be inefficient because subtasks do not change very frequently. Instead, having temporally-extended actions can be useful for dealing with delayed reward by operating at a larger time-scale (Sutton et al., 1999). Although one could use the termination signal of the subtask controller to define the temporal scale of the meta controller, this approach would result in an open-loop policy that is not able to interrupt ongoing subtasks, which is necessary to deal with stochastic events.

To address this challenge, we introduce an internal binary action $c_t$ which decides whether to invoke the subtask updater or not. This action is defined as $c_t \sim \varphi^{update}(h_t)$. If $c_t = 1$, the subtask updater updates the memory pointer, retrieves an instruction, and updates the subtask arguments. Otherwise, the meta controller continues communicating the current subtask arguments without involving the subtask updater.

Algorithm 1 Subtask update (Hard)
Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$
Output: $p_t, r_t, g_t$
  $c_t \sim \varphi^{update}(h_t)$
  if $c_t = 1$ then  (Update)
    $l_t \sim \mathrm{Softmax}(\varphi^{shift}(h_t))$
    $p_t \leftarrow l_t * p_{t-1}$  (Shift)
    $r_t \leftarrow \mathbf{M} p_t$  (Retrieve)
    $g_t \sim \pi_\theta(g_t \mid h_t, r_t)$  (Subtask)
  else
    $p_t \leftarrow p_{t-1}$; $r_t \leftarrow r_{t-1}$; $g_t \leftarrow g_{t-1}$
  end if
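As an illustration, here is a minimal PyTorch sketch of the pointer shift in Eq. (7) and the hard update in Algorithm 1. The module names (phi_update, phi_shift, pi_goal) and tensor shapes are placeholders standing in for the MLPs described above, not the authors' code.

import torch
import torch.nn.functional as F

def shift_pointer(p_prev, l):
    # Eq. (7): convolve the K-dim pointer with the 3-way shift
    # distribution l over [-1, 0, +1]. The kernel is flipped because
    # F.conv1d computes a cross-correlation rather than a convolution.
    p = F.conv1d(p_prev.view(1, 1, -1), l.flip(0).view(1, 1, 3),
                 padding=1)
    return p.view(-1)

def hard_update(h, p_prev, r_prev, g_prev, M,
                phi_update, phi_shift, pi_goal):
    # Algorithm 1: only invoke the subtask updater when c_t = 1.
    # phi_update is assumed to end in a sigmoid, so its output is a
    # probability that torch.bernoulli can sample from.
    c = torch.bernoulli(phi_update(h))
    if c.item() == 1:
        l = F.softmax(phi_shift(h), dim=0)   # shift action over 3 moves
        p = shift_pointer(p_prev, l)         # move the pointer
        r = M @ p                            # retrieve instruction, (E,)
        g = pi_goal(h, r)                    # new subtask arguments
        return p, r, g
    return p_prev, r_prev, g_prev            # keep everything unchanged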
During training of the update decision, we use L1 regularization on the probability of update to penalize frequent updates, as in Vezhnevets et al. (2016). The entire scheme is described in Algorithm 1.

However, the update decision introduces a non-differentiable variable, which is known to be difficult to optimize in practice. Thus, we propose a differentiable relaxation of the update decision. The key idea is to take the weighted sum of both the ‘update’ and ‘no update’ scenarios. This idea is described in Algorithm 2. We found that training the meta controller using Algorithm 2 followed by fine-tuning using Algorithm 1 is crucial for training the meta controller. Note that Algorithm 2 reduces to Algorithm 1 if we sample $c_t$ and $l_t$ instead of taking the weighted sum, which justifies our initialization trick.

Algorithm 2 Subtask update (Soft)
Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$
Output: $p_t, r_t, g_t$
  $c_t \leftarrow \varphi^{update}(h_t)$
  $l_t \leftarrow \mathrm{Softmax}(\varphi^{shift}(h_t))$
  $\tilde{p}_t \leftarrow l_t * p_{t-1}$
  $\tilde{r}_t \leftarrow \mathbf{M} \tilde{p}_t$
  $p_t \leftarrow c_t \tilde{p}_t + (1 - c_t) p_{t-1}$
  $r_t \leftarrow c_t \tilde{r}_t + (1 - c_t) r_{t-1}$
  $\pi(g_t^{(i)}) \leftarrow c_t\, \pi_\theta(g_t^{(i)} \mid h_t, \tilde{r}_t) + (1 - c_t)\, \pi(g_{t-1}^{(i)}) \quad \forall i$

5.3 TRAINING

The meta controller is trained on a training set of lists of instructions. The actor-critic method is used to update the parameters of the meta controller, while a pre-trained subtask controller is given and fixed. Since the meta controller also learns a subtask embedding $\varphi(g_{t-1})$ and has to deal with unseen subtasks during evaluation, we applied analogy-making regularization to its embedding. More details of the objective functions are provided in Appendix E.

6 EXPERIMENTS AND RESULTS

Our experiments were designed to explore the following hypotheses: that our proposed hierarchical architecture will generalize better than a non-hierarchical controller, and that analogy-making regularization and learning temporal abstractions in the meta controller will each be beneficial for task generalization. We are also interested in understanding the qualitative properties of our agent’s behavior. The demo videos are available at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/task-generalization .

6.1 EXPERIMENTAL SETTING

Environment. We developed a 2D grid world based on MazeBase (Sukhbaatar et al., 2015) where the agent can interact with many objects, as illustrated in Figure 1. Unlike the original MazeBase, an observation is represented as a binary 3D tensor: $x_t \in \mathbb{R}^{18 \times 10 \times 10}$, where 18 is the number of object types and $10 \times 10$ is the size of the grid world. Each channel is a binary mask indicating the presence of each object type. There are an agent, blocks, water, and 15 types of objects with which the agent can interact (see Appendix D), and all of them are randomly placed for each episode.

The agent has 13 primitive actions: No-operation, Move (North/South/West/East, referred to as “NSWE”), Pick up (NSWE), and Transform (NSWE). Move actions move the agent by one cell in the specified direction. Pick up actions remove the adjacent object in the corresponding relative position, and depending on the object type, Transform actions either remove it or transform it into another object.

The agent receives a time penalty (-0.1) for each time-step. Water cells act as obstacles which give -0.3 when the agent visits them. The agent receives a +1 reward when it finishes all instructions in the correct order. Throughout the episode, an enemy randomly appears, moves, and disappears after 10 steps. Transforming an enemy gives a +0.9 reward. More details are described in Appendix D.
Subtasks and Instructions. The subtask space is defined as the Cartesian product of two arguments: $G = \{\text{Visit}, \text{Pick up}, \text{Transform}\} \times \{X_1, X_2, \ldots, X_{15}\}$, where $X_i$ is an object type. The agent should be on the same cell as the target object to finish the ‘Visit’ task. For ‘Pick up’ and ‘Transform’ tasks, the agent should perform the corresponding primitive action on the target object. If there are multiple target objects in the world, the agent can perform the action on any of the target objects.

The instructions are represented as a sequence of sentences, each of which is one of the following: Visit X, Pick up X, Transform X, Pick up all X, and Transform all X, where ‘X’ is the target object type. While the first three instructions require the agent to perform the corresponding subtask, the last two instructions require the agent to repeat the same subtask until the target objects completely disappear from the world.

Task Split. Among the 45 subtasks in $G$, only 30 subtasks are presented to the subtask controller during training. 3 subtasks from the training subtasks and 3 subtasks from the unseen subtasks were selected as the validation set to pick the best-performing subtask controller. For training the meta controller, we created four sets of sequences of instructions: training, validation, and two test sets. The training tasks consist of sequences of up to 4 instructions sampled from the set of training instructions. The validation set consists of sequences of 7 instructions with small overlaps with the training instructions and unseen instructions. The two test sets consist of 20 seen and unseen instructions respectively. More details of the task split are described in Appendix D.

Table 1: Performance of subtask controller. ‘Analogy’ indicates analogy-making regularization. ‘Accuracy’ represents termination prediction accuracy. We assume a termination prediction is correct only if predictions are correct throughout the whole episode.
                          Train                            Unseen
Agent          Reward   Success   Accuracy     Reward   Success   Accuracy
w/o Analogy    0.56     99.9%     100.0%       -1.88    60.8%     49.6%
w/ Analogy     0.56     99.9%     100.0%       0.55     99.8%     99.6%

Flat Controller. To understand the advantage of using the communicating hierarchical structure of our controllers, we trained a flat controller which is almost identical to the meta controller architecture, except that it directly chooses primitive actions without using the subtask controller. Details of the flat controller architecture are described in Appendix F. The flat controller is pre-trained on the training set of subtasks. To be specific, we removed the instruction memory and fed a single instruction as an additional input (i.e., $r_t$ is fixed throughout the episode). We found that the flat controller could not learn any reasonable policy without this pre-training step, which requires modification of the architecture based on domain knowledge. After pre-training, we fine-tuned the flat controller with the instruction memory on lists of instructions. Note that the flat controller is, in principle, also capable of executing instructions as well as dealing with random events.

6.2 TRAINING DETAILS

The subtask controller consists of 3 convolution layers and 2 fully-connected layers and takes the last 2 observations concatenated through channels as input. Each subtask argument ($g^{(i)}$) is linearly transformed, and the resulting vectors are multiplied with each other to compute the joint subtask embedding. This is further linearly transformed into the weight of the first convolution layer and the weight of the first fully-connected layer.
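The following PyTorch sketch illustrates this kind of parameter prediction, i.e., conditioning a layer's weights on the joint subtask embedding through multiplicative interactions. The module names and the layer sizes are illustrative assumptions; the actual sizes are given in Appendix F of the paper.

import torch
import torch.nn.functional as F

class SubtaskConditionedConv(torch.nn.Module):
    def __init__(self, emb_dim=64, in_ch=18, out_ch=16, k=3):
        super().__init__()
        # One embedding table per subtask argument: 3 action types
        # and 15 object types, as in the 2D grid world setup.
        self.action_emb = torch.nn.Embedding(3, emb_dim)
        self.object_emb = torch.nn.Embedding(15, emb_dim)
        self.to_weight = torch.nn.Linear(emb_dim, out_ch * in_ch * k * k)
        self.w_shape = (out_ch, in_ch, k, k)

    def forward(self, x, action_id, object_id):
        # action_id, object_id: scalar LongTensors indexing the arguments.
        # Multiplicative interaction of the two argument embeddings
        # gives the joint subtask embedding.
        joint = self.action_emb(action_id) * self.object_emb(object_id)
        # The embedding is linearly mapped to the convolution weights,
        # so the same layer computes subtask-specific features.
        w = self.to_weight(joint).view(self.w_shape)
        return F.conv2d(x, w, padding=1)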
The meta controller takes the current observation as input and has 2 convolution layers and 2 fully-connected layers, where the parameters of the first convolution layer and the first fully-connected layer are predicted by the joint embedding of $r_{t-1}$, $\varphi(g_{t-1})$, and $b_t$.

We implemented synchronous actor-critic with 16 CPU threads based on MazeBase (Sukhbaatar et al., 2015), each of which samples a mini-batch of episodes ($K$) in parallel. The parameters are updated after every $16 \times K$ episodes. The details of architectures and hyperparameters are described in Appendix F.

Curriculum Learning via a Forgiving World. We conducted curriculum training by changing the size of the grid world, the density of objects, and the number of instructions according to the agent’s success rate. In addition, we trained the soft-architectures in an easier, forgiving environment which generates target objects whenever they do not exist. Crucially, this allows the agent to recover from past mistakes in which it removed needed target objects. The soft-architectures are fine-tuned on the original (and far more unforgiving) environment, which does not regenerate target objects in the middle of the episode. Training directly in the original environment without first training in the forgiving environment leads to too much failure at executing the task, and the agent does not learn successfully. Finally, the hard-architectures are initialized by the soft-architectures and further fine-tuned on the original environment.

6.3 EVALUATION OF SUBTASK CONTROLLER

To see how well the subtask controller performs separately from the meta controller, we evaluated it on the training set of subtasks and unseen subtasks in Table 1. It is shown that analogy-making regularization is crucial for generalization to unseen subtasks. This result suggests that analogy-making regularization plays an important role in learning the relationship between different subtasks and enabling generalization to unseen subtasks.

In addition, we observed that the subtask controller learned a non-trivial policy by exploiting causal relationships. For example, when [Pick up, egg] is given as the subtask arguments but a duck is very close to the agent, it learned to transform the duck and pick up the resulting egg, because transforming the duck turns it into an egg in our environment. More analysis of the subtask controller and the effect of analogy-making regularization is discussed in Appendices A and B.

Table 2: Performance of meta controller. Each column corresponds to a different evaluation set of instructions, while each row corresponds to a different configuration of our architecture or the flat controller. Test #3 and Test #4 do not include ‘Transform/Pick up all X’ instructions. ‘TA’ indicates the meta controller with temporal abstraction. Each entry in the table represents reward, with success rate in parentheses, averaged over the 10 best runs among 20 independent runs. ‘Shortest Path’ is a hand-designed policy which executes instructions optimally based on the shortest path but ignores enemies. ‘Near-Optimal’ is a near-optimal policy that executes instructions based on the shortest path and transforms enemies when they are close to the agent. ‘Forgiving’ rows show the result from the forgiving environment used for curriculum learning, where target objects are regenerated whenever they do not exist in the world.

                              Train           Test #1         Test #2         Test #3          Test #4
Set of instructions           Seen            Seen            Unseen          Seen w/o all     Unseen w/o all
Num of instructions           4               20              20              20               20
Forgiving
  Shortest Path               -1.56 (99.6%)   -11.94 (99.1%)  -9.62 (99.1%)
  Near-Optimal                -0.96 (99.6%)   -9.99 (99.1%)   -8.19 (99.1%)
  Flat                        -1.64 (85.8%)   -14.53 (65.9%)  -17.25 (23.7%)  -12.38 (60.4%)   -14.18 (16.7%)
  Hierarchical-TA-Analogy     -1.05 (92.4%)   -11.06 (86.2%)  -13.69 (51.2%)  -8.54 (91.9%)    -9.91 (75.2%)
Original
  Shortest Path               -1.62 (99.7%)   -11.94 (99.4%)  -8.72 (99.6%)
  Near-Optimal                -1.34 (99.5%)   -10.30 (99.3%)  -7.62 (99.4%)
  Flat                        -2.38 (76.0%)   -18.83 (0.1%)   -18.92 (0.0%)   -15.09 (0.0%)    -15.17 (0.0%)
  Hierarchical                -2.04 (72.8%)   -16.85 (16.6%)  -17.66 (6.9%)   -10.99 (49.4%)   -11.40 (47.4%)
  Hierarchical-Analogy        -1.74 (81.0%)   -15.89 (28.0%)  -17.23 (11.3%)  -10.11 (61.8%)   -10.66 (57.7%)
  Hierarchical-TA             -1.38 (92.6%)   -12.96 (62.9%)  -17.19 (13.0%)  -9.11 (74.4%)    -10.37 (61.2%)
  Hierarchical-TA-Analogy     -1.26 (95.5%)   -11.30 (81.3%)  -14.75 (40.3%)  -8.24 (85.5%)    -9.51 (73.9%)

Figure 4: Performance per number of instructions. From left to right, the plots show reward, success rate, the number of steps, and the average number of instructions completed, respectively. Solid and dashed curves show the performances on seen instructions and unseen instructions respectively.

6.4 EVALUATION OF META CONTROLLER

We evaluated the meta controller separately from the subtask controller by providing the best-performing subtask controller during training and evaluation. The results are summarized in Table 2 and Figure 4. Note that there is a discrepancy between reward and success rate, because success rate is measured only based on the instruction execution, while reward takes into account the background task (i.e., handling the randomly appearing enemy) as well as the instruction execution.

Overall performance. Table 2 shows that our hierarchical agent with temporal abstraction and analogy-making regularization, denoted Hierarchical-TA-Analogy in the table, can handle 20 seen instructions (Test #1) and 20 unseen instructions (Test #2) correctly with reasonably high success rates. In addition, that agent learned to deal with enemies whenever they appear, and thus it outperforms the ‘Shortest Path’ policy, which is near-optimal in executing instructions while ignoring enemies. We further investigated how the number of instructions affects the performance in Figure 4. Although the performance degrades as the number of instructions increases, our architecture finishes 18 out of 20 seen instructions and 12 out of 20 unseen instructions on average. These results show that our agent is able to generalize to longer compositions of instructions as well as unseen instructions by just learning to solve short sequences of a subset of instructions.

Flat vs. Hierarchy. All our hierarchical controllers outperform the flat controller both on the training tasks and on longer/unseen instructions (see Table 2). We observed that the flat controller learned a sub-optimal policy which assumes that ‘Transform/Pick up X’ instructions are identical to ‘Transform/Pick up all X’ instructions. In other words, it always transforms or picks up all existing targets.
Although this simple strategy is a reasonable sub-optimal policy, because such wrong actions are not explicitly penalized in our environment other than through the accumulating per-time-step penalty, it often unnecessarily removes objects that can potentially be target objects in future instructions. This is why the flat controller performs reasonably well on the short sequences of instructions (training), where such cases are rare, and in the forgiving environment, where target objects are restored whenever needed. But it completely fails on longer instructions in the original environment, because the entire task becomes unsolvable when target objects are removed in error. This implies that the flat controller struggles with detecting precisely when a subtask is finished, whereas our hierarchical controllers can easily detect when a subtask is done, because the subtask controller in our communicating architecture provides a termination signal to the meta controller.

Figure 5: Analysis of the learned policy. ‘Update’ shows our agent’s internal update decision. ‘Shift’ shows our agent’s memory-shift decision, which is either -1, 0, or +1 from top to bottom. The bottom text shows the instruction indicated by the memory pointer, while the top text shows the subtask chosen by the meta controller. (A) The agent transforms the pig given the ‘Transform Pig’ instruction and decides to update the subtask (Update is true) and move to the next instruction. (B) An enemy (red) appears while the agent is executing the ‘Pick up all meat’ instruction (green boxes for meat). The agent changes the subtask to [Transform, enemy]. (C) The agent successfully transforms the enemy and sets the subtask to [Pick up, meat] to resume executing the instruction. (D) The agent picks up the last meat in the world, moves the memory pointer to the next instruction, and sets a new subtask according to the next instruction.

In addition, the flat controller tends to ignore enemies, while the hierarchical controllers try to deal with enemies whenever they exist by changing the subtask-arguments communicated by the meta controller to the subtask controller, which is a better strategy to maximize the reward. The flat controller instead has to use primitive actions to deal with both instructions and enemies. This implies that our communicating hierarchical controllers have more of an advantage for context switching between different sources of tasks (i.e., executing instructions and dealing with enemies).

Finally, we observed that the flat controller often makes many mistakes on unseen instructions (e.g., transforming X given ‘Visit X’ as the instruction). In contrast, the hierarchical controllers do not make such mistakes, as the subtask controller generalizes well to unseen instructions, as discussed in Section 6.3.

Effect of Analogy-making. Table 2 shows that analogy-making significantly improves generalization performance, especially on Test #2 (Hierarchical-Analogy outperforms Hierarchical, and Hierarchical-TA-Analogy outperforms Hierarchical-TA). This implies that, given an unseen target object for a ‘Transform/Pick up all’ instruction, the meta controller without analogy-making tends to fail to check whether the target object still exists. On the other hand, there is almost no improvement from using analogy-making on Test #3 and Test #4, where there are no ‘all’ instructions.
This is because the meta controller can simply rely on the subtask termination ($b_t$) given by the subtask controller to check if the current instruction is finished for non-‘all’ instructions, and the subtask controller (trained with analogy-making) successfully generalizes to unseen subtasks and provides accurate termination signals to the meta controller. The empirical result that analogy-making consistently improves generalization performance over both non-analogy-making controllers suggests that analogy-making is crucial for generalization to unseen tasks.

Effect of Temporal Abstraction. To see the effect of temporal abstractions, we trained a baseline that updates the memory pointer and the subtask at every time-step, which is shown as ‘Hierarchical’ and ‘Hierarchical-Analogy’ in Table 2. It turns out that the agent without temporal abstractions performs much worse on both the training tasks and the testing tasks. We hypothesize that temporal credit assignment becomes easier with temporal abstractions because the subtask updater (described in Section 5.1.2) can operate at a larger time-scale by decoupling the update decision from the subtask selection. In particular, given ‘all’ instructions, the agent should repeat the same subtask while not changing the memory pointer for a long time, and the reward is even more delayed. This can possibly confuse the subtask updater without temporal abstractions because it should make the same decision for the entire time-steps of such instructions. In contrast, the subtask updater with temporal abstractions can get direct feedback from the long-term future, since one decision made by the subtask updater results in multiple primitive actions. We conjecture that this is why the agents learn more stably with temporal abstractions under delayed reward.

Figure 6: Learned policy in 3D environment. The agent observes ‘First-person-view’ images, while ‘Top-down-view’ is not available to the agent. The right text shows the list of instructions. (A) The agent cannot see the target block (blue) at this point due to the partially observable nature of the environment and the randomness of the topology. The agent learned to explore the map to find the target block. (B) Although the current instruction is ‘Transform purple’, the agent decides to transform the green block because transforming a green block gives a large positive reward (stochastic event). (C) After dealing with the stochastic event, the agent resumes executing the instruction (Transform purple). (D) The agent finishes the whole list of instructions.

Table 3: Performance on 3D environment.
Set of instructions    Train: Seen      Test #1: Seen     Test #2: Unseen
Num of instructions    4                20                20
Flat                   -1.87 (92.2%)    -22.35 (68.7%)    -39.24 (0.0%)
Ours                   -1.41 (95.0%)    -15.60 (92.2%)    -17.80 (84.3%)

Analysis of the Learned Policy. We visualized our agent’s behavior on a task with a long list of instructions in Figure 5. We observed that our meta controller learned to communicate the correct subtask-arguments to the subtask controller and learned to move precisely to the next instruction by shifting the memory pointer whenever the instruction is finished.
More interestingly, whenever an enemy appears, our meta controller immediately changes the subtask to [Transform, enemy] regardless of the instruction and resumes executing the instruction after dealing with the enemy. Throughout the background task and the ‘all’ instructions, the meta controller keeps the memory pointer unchanged, as illustrated in (B-D) in the figure. In addition, the agent learned to update the memory pointer and the subtask-argument almost only when it is needed, which provides the subtask updater with temporally-extended actions. This is not only computationally efficient but also useful for learning a better policy as discussed above.

6.5 EVALUATION IN 3D VISUAL ENVIRONMENT

We developed a similar set of tasks in a Minecraft environment based on Oh et al. (2016), as shown in Figure 6. In this environment, the agent can observe only first-person-view images, which naturally involves partial observability. Moreover, even executing a simple instruction (e.g., Visit X) requires the agent to explore the topology to find the target.

An observation is represented as a $64 \times 64$ RGB image ($x_t \in \mathbb{R}^{3 \times 64 \times 64}$). There are 7 different types of colored blocks: red, blue, green, yellow, brown, purple, and black, which correspond to the different types of objects in the grid world experiment. As in the 2D grid world environment, the topology of walls and the colored blocks are randomly generated for every episode. A wall not only acts as an obstacle but also occludes the objects behind it, as shown in Figure 6, which makes the task more challenging.

The agent has 9 actions: Look (Left/Right/Up/Down), Move (Forward/Backward), Pick up, Transform, and No operation. Look left/right actions change the yaw of the agent by 90 degrees, while Look up/down actions change the pitch of the agent by 45 degrees. Move forward/backward actions move the agent by one block according to the agent’s looking direction. Pick up removes the block in front of the agent, and Transform changes the block in front of the agent to the black-colored block.

We used the same reward function as in the 2D grid world experiment. In addition, a green block randomly appears, and transforming a green block gives +0.9 reward regardless of instructions, which acts as a stochastic event. Each instruction is one of the following: Visit X, Pick up X, and Transform X, where ‘X’ is the target color. We excluded ‘all’ instructions in this environment because we found that solving ‘all’ instructions given a limited amount of time is extremely challenging even for humans due to the partial observability.

We used almost the same architectures as in the 2D grid world experiment, except that a long short-term memory (Hochreiter and Schmidhuber, 1997) is added on top of the final convolution layer both in the subtask controller and the meta controller, as it is one of the simplest ways to deal with partial observability (Hausknecht and Stone, 2015; Mnih et al., 2016; Oh et al., 2016). We followed the same training scheme used in the 2D grid world experiment.

Table 3 shows that our proposed architecture significantly outperforms the flat controller baseline, especially on the test sets of instructions. We observed that the flat controller tends to struggle with detecting when an instruction is finished and completely fails on unseen instructions, while our architecture performs well on unseen and longer instructions.
As shown in Figure 6, our architecture learned to find the target blocks, detect when an instruction is finished, and deal with the stochastic event. This result demonstrates that the proposed approach can also be applied to a more complex visual environment.

7 CONCLUSION

In this paper, we explored zero-shot task generalization in RL with a new problem where the agent is required to execute a sequence of instructions and to generalize over longer sequences of (unseen) instructions without additional learning. To solve the problem, we presented a hierarchical deep RL architecture in which a meta controller learns a closed-loop policy of subtask-argument communications to a subtask controller, which executes the given subtask and communicates its accomplishment back to the meta controller. Our architecture not only generalizes to unseen tasks after training but also deals with random events relevant to a background task. In addition, we proposed several techniques that led to improvements in both training and generalization performance. First, analogy-making regularization turned out to be crucial for generalization to unseen subtasks. Second, learning temporal abstractions improved the performance by making the subtask updater operate at a larger time-scale. One interesting line of future work would be to define and solve richer task instructions such as conditional statements (e.g., IF-THEN-ELSE) and loop instructions (e.g., collect 3 target objects). Moreover, end-to-end training of the whole hierarchy and discovering the subtask decomposition would be important future work.
SJDFGUZEe
Review.
3: Clear rejection
The paper presents a hierarchical DRL algorithm that solves sequences of navigate-and-act tasks in a 2D maze domain. During training and evaluation, a list of sub-goals represented by text is given to the agent, and its goal is to learn to use pre-learned skills in order to solve the list of sub-goals. The authors demonstrate that their method generalizes well to sequences of varying length as well as to new combinations of sub-goals (i.e., if the agent knows how to pick up a diamond and how to visit an apple, it can also visit the diamond). Overall, the paper is of high technical quality and presents an interesting and non-trivial combination of state-of-the-art advancements in Deep Learning (DL) and Deep Reinforcement Learning (DRL). In particular, the authors present a DRL agent that is hierarchical in the sense that it can learn skills and plan using them. The skills are learned using a differentiable, temporally extended memory network with an attention mechanism. The authors also make novel use of analogy making and parameter prediction. However, I find it difficult to understand from the paper why the presented problem is interesting and why it hadn't been solved before. Since the domain being evaluated is a simple 2D maze, using deep networks is not well motivated. Similar problems have been solved using simpler models. In particular, there is a rich literature about planning with skills that has been ignored completely by the authors. Since all of the skills are trained prior to the evaluation of the hierarchical agent, the problem that is being solved is much more similar to supervised learning than reinforcement learning (since when using the pre-trained skills the reward is not particularly delayed). The generalization that is demonstrated seems to be limited to breaking a sentence (describing the subtask) into words (item, location, action). The paper is difficult to read; it is constantly switching between describing the algorithm and giving technical details. In particular, I find it to be overloaded with details that interfere with the general understanding of the paper. I suggest moving many of the implementation details into the appendix. The paper should be self-contained; please do not assume that the reader is familiar with all the methods that you use, and introduce all the relevant notations. I believe that the paper will benefit from addressing the problems I described above and will make a better contribution to the community at a future conference.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJNDWNOlg
ICLR.cc/2017/conference
2017
What Is the Best Practice for CNNs Applied to Visual Instance Retrieval?
["Jiedong Hao", "Jing Dong", "Wei Wang", "Tieniu Tan"]
Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as the feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and have achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of these features have still not been explored thoroughly, and there has been much less discussion about how best to combine them. The main contribution of our paper is a thorough evaluation of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for the different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.
["Computer vision", "Deep learning"]
ABSTRACT

Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as the feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and have achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of these features have still not been explored thoroughly, and there has been much less discussion about how best to combine them.

The main contribution of our paper is a thorough evaluation of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for the different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.

1 INTRODUCTION

Image retrieval is an important problem both for academic research and for industrial applications. Although it has been studied for many years (Sivic & Zisserman, 2003; Philbin et al., 2007; Tolias et al., 2015), it is still a challenging task. Generally, image retrieval is divided into two groups. The first one is category-level image retrieval (Sharma & Schiele, 2015), in which an image in the dataset is deemed to be similar to the query image if they share the same class or are similar in shape and local structures. The other group is instance-level image retrieval (Tolias et al., 2015), in which an image is considered to match the query if they contain the same object or the same scene. Instance-level image retrieval is harder in that the retrieval method needs to encode local and detailed information in order to tell two images apart; e.g., the algorithm should be able to detect the differences between the Eiffel Tower and other steel towers although they have similar shapes. In this paper, we focus on instance-level image retrieval.

Traditionally, visual instance retrieval is mainly addressed by BoF (bag of features) based methods using local feature descriptors such as SIFT (Lowe, 2004). In order to boost retrieval performances, post-processing techniques such as query expansion (Chum et al., 2007) and spatial verification (Philbin et al., 2007) are also employed.

With the decisive victory (Krizhevsky et al., 2012) over traditional models in the ImageNet (Russakovsky et al., 2015) image classification challenge, convolutional neural networks (LeCun et al., 1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al., 2015; Shaoqing Ren, 2015), semantic segmentation (Dai et al., 2016) and even image style transfer (Gatys et al., 2016). Networks trained on the ImageNet classification task can generalize quite well to other tasks, being either used off-the-shelf (Razavian et al., 2014a) or fine-tuned on task-specific datasets (Azizpour et al., 2014; Long et al., 2015). Inspired by all these, researchers in the field of image retrieval have also shifted their interest to CNNs.
Their experiments have shown promising and surprising results (Babenko et al., 2014; Razavian et al., 2014c; Tolias et al., 2015), which are on par with or surpass the performances of conventional methods like BoF and VLAD (vector of locally aggregated descriptors) (Jégou et al., 2010; Arandjelović & Zisserman, 2013).

Despite all these previous advances (Babenko et al., 2014; Babenko & Lempitsky, 2015; Tolias et al., 2015) on using CNNs for image feature representation, the underlying factors that contribute to the success of off-the-shelf CNNs on image retrieval tasks are still largely unclear and unexplored. For example, which layer is the best choice for instance retrieval, the convolutional layer or the fully-connected layer? What is the best way to represent the multi-scale information of an image? Clarifying these questions will help us advance a further step towards building a more robust and accurate retrieval system. Also, in situations where a large number of training samples is not available, instance retrieval using an unsupervised method is still preferable and may be the only option.

In this paper, we aim to answer these questions and make three novel contributions. First, unlike previous papers, we explicitly choose five factors to study the image representations based on CNNs and conduct extensive experiments to evaluate their impact on retrieval performances. We also give a detailed analysis of these factors and give our recommendations for combining them. During the experiments, we borrow wisdom from the literature and evaluate its usefulness, but find that it is not as effective as some of the simpler design choices. Second, by combining the insights obtained during the individual experiments, we are able to propose a new multi-scale image representation, which is compact yet effective. Finally, we evaluate our method on four challenging datasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our method is generally applicable and outperforms all previous methods on compact image representations by a large margin.

2 RELATED WORK

Multi-scale image representation. Lazebnik et al. (2006) propose the spatial pyramid matching approach to encode spatial information using BoF based methods. They represent an image using a pyramid of several levels or scales. Features from different scales are combined to form the image representation in such a way that coarser levels get less weight while finer levels get more weight. Their argument is that matches found in coarser levels may involve increasingly dissimilar image features. In our paper, we also explore the multi-scale paradigm in the same spirit, using the convolutional feature maps as the local descriptors. We find that the deep features from the convolutional feature maps are distinct from the traditional descriptors: the weighted sum of different levels of features shows no superior performance over a simple summation of them. Kaiming et al. (2014) devise an approach called SPP (spatial pyramid pooling). In SPP, the feature maps of the last convolutional layer are divided into a 3 or 4 scale pyramid. First the regional features in each scale are concatenated, then the scale-level features are concatenated into a fixed-length vector to be forwarded to the next fully-connected layers.
We find that this strategy is ineffective for unsupervised instance retrieval, leading to inferior performances compared to other simple combination methods (see the part about multi-scale representation in section 5.2 for more details).

Image representation using off-the-shelf CNNs. Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image, in which VLAD is used to encode the level 2 and level 3 features. Then features from different scales are PCA-compressed and concatenated to form the image features. This method is rather complicated and time-consuming. At the same time, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the ImageNet 1000-class classification task and retrain the network on a task-related dataset. The retraining procedure gives a boost to the retrieval performances. Instead of using the output of the fully-connected layers as the image feature representation, Babenko & Lempitsky (2015) use the output feature maps of the last convolutional layer to compute the image features. Recently, instead of sum-pooling the convolutional features, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scale method, called R-MAC (regional maximum activation of convolutions), further improves the previous results on four common instance retrieval datasets. Our work differs from these papers in that we explicitly explore the various factors that underpin the success of unsupervised instance retrieval, which have not been fully explored and analysed. By carefully choosing the setting for each factor and combining them in a complementary way, we show that a large improvement can be achieved without additional cost.

3 IMPACTING FACTORS

When we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural question is: what kind of design choices should we make in order to make full use of the representational power of existing models? In this section, we summarize the five factors that may greatly impact the performance of the final image retrieval system. In section 5.2, we will show our experimental results on each key factor. Before we delve into the impacting factors, we first give a brief introduction to how an image is represented using the activation feature maps of a certain layer.

3.1 CNN FEATURES FOR INSTANCE RETRIEVAL

In this paper, we are mainly interested in extracting compact and discriminative image features using off-the-shelf CNNs in an efficient way. For a given image I, we simply subtract the mean value of the RGB channels from the original image and do not do any other sophisticated preprocessing. Then the image is fed into the convolutional network and goes through a series of convolutions, non-linear activations and pooling operations. The feature activation maps of a certain layer can be interpreted as the raw image features, based on which we build the final image features. These feature maps form a tensor of size K × H × W, where K is the number of feature channels, and H and W are the height and width of a feature map. Each feature map represents a specific pattern which encodes a small part of the information about the original image.
If we represent the set of feature maps as $F = \{F_i\},\ i = 1, 2, \ldots, K$, where $F_i$ is the $i$-th activation feature map, then the simplest image feature is formulated as

$$f = [f_1, f_2, \ldots, f_i, \ldots, f_K]^\top. \quad (1)$$

In equation (1), $f_i$ is obtained by applying a feature aggregation method (see section 3.2) to the $i$-th feature map $F_i$. Throughout this paper, we use feature maps after the non-linear activations (ReLU) so that the elements in each feature map are all non-negative. We also experiment with feature maps prior to ReLU, but find that they lead to inferior performances. After the image feature representation is obtained, post-processing techniques such as PCA and whitening can be further applied.

3.2 IMPACTING FACTORS ON PERFORMANCE

Feature aggregation and normalization. After the feature maps of a certain layer are obtained, it is still challenging to aggregate the 3-dimensional feature maps into compact vector representations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by l2-normalization. Sum-pooling over a particular feature map $F_i$ is expressed as

$$f_i = \sum_{m=1}^{H} \sum_{n=1}^{W} F_i(m, n), \quad i \in \{1, 2, \ldots, K\}, \quad (2)$$

while max-pooling is given by

$$f_i = \max_{m, n} F_i(m, n), \quad (3)$$

where $(m, n)$ ranges over all spatial coordinates of the $H \times W$ map. In this paper, for the first time, different combinations of aggregation and normalization methods (l2, and l1 in the manner of RootSIFT (Arandjelović & Zisserman, 2012)) are evaluated and their results are reported.
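To make these choices concrete, here is a minimal numpy sketch of the aggregation and normalization variants in Eqs. (1)-(3); it is our illustration, not the authors' code, and the RootSIFT-style l1 variant (l1-normalize, then take the element-wise square root) is our reading of Arandjelović & Zisserman (2012):

```python
# Minimal sketch of Eqs. (1)-(3): turn a (K, H, W) post-ReLU activation
# tensor into a K-dimensional image descriptor. Not the authors' code.
import numpy as np

def aggregate(fmap, pooling="max", norm="l2"):
    if pooling == "sum":                      # Eq. (2): sum over spatial positions
        f = fmap.sum(axis=(1, 2))
    elif pooling == "max":                    # Eq. (3): max over spatial positions
        f = fmap.max(axis=(1, 2))
    else:
        raise ValueError(pooling)
    if norm == "l1":                          # RootSIFT-style l1 (our assumption):
        f = f / (np.abs(f).sum() + 1e-12)     # l1-normalize, then element-wise sqrt
        f = np.sign(f) * np.sqrt(np.abs(f))
    else:                                     # plain l2 normalization
        f = f / (np.linalg.norm(f) + 1e-12)
    return f

fmap = np.random.rand(512, 30, 40)            # stand-in for a conv5_4 activation map
desc = aggregate(fmap, "max", "l2")           # the combination the paper settles on
```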
Output layer selection. Zeiler & Fergus (2014) have shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performances. Based on the findings of this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see section 5.3).

Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require that the input images have a fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to that fixed size. We postulate that the resizing operation may distort important information about the objects in natural images. Ultimately, this kind of operation may hurt the discriminative power of the image features extracted from the network, thus degrading retrieval performances. For the task of image retrieval, we think it is best to keep the images at their original sizes and feed them directly to the network whenever possible. In this paper, three image resizing strategies are explored:

• Both the height and width of the dataset images are set to the same fixed value (denoted as two-fixed).
• The minimum side of each dataset image is set to a fixed value, keeping the aspect ratio of the original image (denoted as one-fixed).
• The images are kept at their original sizes (denoted as free).

Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004), the feature vector extracted from a deep convolutional network for an image is a global descriptor which encodes the holistic information. When used for image retrieval, this kind of feature still lacks the detailed and local information desired to accurately match two images. Inspired by spatial pyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibility of applying this powerful method to obtain discriminative image features. An image is represented by an L-level pyramid, and at each level, the image is divided evenly into several overlapping or non-overlapping regions. The vector representations of these small regions are computed, then the regional vectors are combined to form the image feature vector. The single-scale representation of an image is just a special case of the multi-scale method in which the number of levels L equals 1.

[Figure 1: An illustration of the multi-scale representation of an image. The whole image is divided into 3 levels from the coarsest (level 1) to the finest (level 3); at each level, the image is divided into a different number of equal-sized regions.]

Figure 1 shows an example of the 3-level representation of an image. The time cost of re-feeding those small regions into the network to compute the regional vectors would be huge, and thus unacceptable for instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), we assume a linear projection between the original image regions and the regions in the feature maps of a certain layer. The regional feature vectors can then be efficiently computed without re-feeding the corresponding image regions. In section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored, and their retrieval performances are reported and analysed.
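As a concrete illustration of this pipeline, the sketch below pools regional vectors directly from a (K, H, W) feature map under the linear-projection assumption; the non-overlapping n × n grids correspond to the "v1"-style layouts evaluated in section 5.2, and the overlap variants are omitted for brevity:

```python
# Minimal sketch of the multi-scale representation: max-pool each region of
# an n x n grid on the (K, H, W) feature map (linear-projection assumption,
# so image crops are never re-fed through the network), sum the regional
# vectors per level and l2-normalize, then sum and l2-normalize across levels.
import numpy as np

def l2n(v):
    return v / (np.linalg.norm(v) + 1e-12)

def multiscale_descriptor(fmap, levels=(1, 2, 3, 4)):
    K, H, W = fmap.shape
    scale_vecs = []
    for n in levels:                                  # level with an n x n region grid
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        regional = np.zeros(K)
        for i in range(n):
            for j in range(n):
                region = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                regional += region.max(axis=(1, 2))   # max-pool one region
        scale_vecs.append(l2n(regional))              # scale-level vector
    return l2n(np.sum(scale_vecs, axis=0))            # final image descriptor
```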
PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method for reducing the dimensionality of feature vectors and decorrelating the feature elements. Previous work (Babenko et al., 2014; Jégou et al., 2010) has shown evidence that PCA and whitened features can actually boost the performances of image retrieval. In this paper, we further investigate the usefulness of PCA and whitening within our pipeline and give some recommendations.

4 IMPLEMENTATION

We use the open source deep learning framework Caffe (Jia et al., 2014) for all our experiments. The aim of this research is to investigate the most effective ways to exploit the feature activations of existing deep convolutional models. Based on past practices for networks to go deeper (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration of moderate computational cost, and also the results from Tolias et al. (2015) that deeper networks work better than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman, 2014) trained on ImageNet as our model.

Network transformation. The original VGG-19 network only accepts an image of fixed size (224 × 224), which is not the optimal choice when extracting image features for retrieval tasks. In order for the network to be able to process an image of arbitrary size (of course, the image size cannot exceed the GPU's memory limit) and for us to experiment with different input image resizing strategies, we adapt the original VGG-19 network and change the fully-connected layers to convolutional (Long et al., 2015) layers. For more details about the network transformations, see appendix A.

5 EXPERIMENTS

In this section, we first introduce the datasets used and the evaluation metrics. Then we report our experimental results for the different impacting factors and give a detailed analysis. In the last part, we show the performance of our method considering all these impacting factors and compare our method with the state-of-the-art methods on four datasets.

5.1 DATASETS AND EVALUATION METRICS

The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries, each having 5 queries with their ground-truth relevant image lists, are provided. For each query, a bounding box annotation is also provided to denote the query region. During the experiments, we report results using the full query images (denoted as full-query) and the image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.

The Paris6k dataset (Philbin et al., 2008) includes 6412 images from Flickr which contain 11 landmark buildings and general scenes from Paris (following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images). Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground-truth bounding boxes for each query are provided. The performance is reported as mAP over the 55 queries.

The Oxford105k dataset contains the original Oxford5k dataset and an additional 100,000 images (Philbin et al., 2007) from Flickr (the image named "portrait 000801.jpg" was corrupted and manually removed from this dataset). The 100,000 images are disjoint from the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.

The UKB dataset (Nistér & Stewénius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variations in orientation, scale, lighting and shooting angle. During the experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.

5.2 RESULTS AND DISCUSSION

In this section, we report the results of the experiments on the impact of the different factors and analyse their particular impact. The experiments in this section are conducted on the Oxford5k dataset.

Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods (l2 and l1) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1.

Table 1: Comparison between different combinations of feature aggregation and normalization methods.

    Method   full-query   cropped-query
    max-l1   52.4         48.0
    sum-l2   58.0         52.6
    sum-l1   60.3         56.3
    max-l2   60.1         53.5

Sum-pooling followed by l1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after a preliminary experiment with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of the images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.
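The mAP numbers throughout this section follow the protocol of section 5.1. For reference, here is a minimal sketch of such a ranking-based evaluation; it is not the official evaluation code, and the official Oxford/Paris protocol additionally handles "junk" images, which we omit:

```python
# Minimal sketch of mAP over a set of queries, assuming l2-normalized
# descriptors so that ranking by dot product equals ranking by cosine
# similarity. The official Oxford/Paris protocol also treats "junk"
# images specially, which this sketch omits.
import numpy as np

def average_precision(ranking, relevant):
    hits, ap = 0, 0.0
    for rank, idx in enumerate(ranking, start=1):
        if idx in relevant:
            hits += 1
            ap += hits / rank
    return ap / max(len(relevant), 1)

def mean_average_precision(Q, X, positives):
    # Q: (num_queries, D) query descriptors, X: (num_images, D) database descriptors
    # positives[q]: set of database indices relevant to query q
    scores = Q @ X.T
    return float(np.mean([
        average_precision(np.argsort(-scores[q]), positives[q])
        for q in range(Q.shape[0])
    ]))
```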
Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (excluding the pooling layers, i.e. pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.

Figure 2 shows the retrieval performances of the image features corresponding to different layers. The retrieval performances for both the full and cropped queries increase as we move from the lower layer conv3_3 to higher layers, plateau at layers conv5_4 and fc6-conv, and then begin to decrease towards fc7-conv. The results show that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meaning of the object in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images. The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meaning of the image. Based on these observations and the requirement of keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality 512, compared to 4096 for layer fc6-conv).

[Figure 2: Performance comparison between different layers (mAP for full-query and cropped-query, from conv3_3 up to fc7-conv). This experiment is conducted using the free input image size.]

Image resizing. We experiment with the 3 image resizing strategies detailed in section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As shown in Table 2, the free input strategy outperforms or is close to the other two strategies; it performs especially well in the cropped-query case.

Table 2: Comparison between different image resizing strategies. The numbers in parentheses denote the sizes at which the maximum mAPs are achieved.

    Method      full-query    cropped-query
    two-fixed   55.5 (864)    38.7 (896)
    one-fixed   59.0 (800)    39.3 (737)
    free        58.0          52.6

This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and undistorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images at their original sizes for instance retrieval tasks.
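The paper extracts these activations with a fully-convolutional VGG-19 in Caffe. Purely as an illustrative modern stand-in, the post-ReLU conv5_4 feature maps for a free-sized input can be obtained as below; the torchvision model, the weights enum and the slice index 36 are our assumptions about that library, not anything used in the paper:

```python
# Illustrative stand-in only: the paper uses Caffe, not PyTorch. This grabs
# the post-ReLU conv5_4 activations of torchvision's VGG-19 for an image of
# (almost) arbitrary size. Slicing at index 36 excludes the final max-pool;
# this indexing is our assumption about torchvision's layer ordering.
import torch
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
conv5_4 = torch.nn.Sequential(*list(vgg.features.children())[:36])

with torch.no_grad():
    img = torch.randn(1, 3, 480, 640)   # a mean-normalized image at its original size
    fmap = conv5_4(img)[0]              # (512, 480/16, 640/16) activation tensor
```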
The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and l2-normalized to form the scale-level feature vectors. Then features from different scales are combined and l2-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014): the region-level as well as the scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods both lead to inferior results. The performance drop for the first one in the case of cropped-query can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) also leads to longer running times. Considering all this, we do not use concatenation of features in the following experiments.

Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in each level (see Figure 1) have some overlapping areas; "s2" and "s3" mean that overlap occurs in level 2 or 3. "weighting" denotes whether the features from each level are added with the same weight (×) or different weights (✓). "version" denotes the choice of the number of regions in each scale.

          scale   overlap   weighting   version   full-query   cropped-query
    (a1)  2       ×         ×           -         63.5         59.0
    (a2)  2       ×         ✓           -         63.9         61.0
    (b1)  3       ×         ×           -         64.2         60.9
    (b2)  3       ×         ✓           -         62.6         61.0
    (b3)  3       s2        ×           -         64.8         60.8
    (c1)  4       s3        ×           v1        65.1         61.4
    (c2)  4       s3        ✓           v1        64.8         60.7
    (c3)  4       s2,s3     ×           v1        65.5         60.8
    (c4)  4       s2,s3     ×           v2        65.9         61.5
    (c5)  4       s2,s3     ✓           v2        65.4         61.2
    (c6)  4       ×         ×           v3        64.5         61.3
    (c7)  4       s3        ×           v3        65.8         62.2
    (c8)  4       s2,s3     ×           v3        66.3         62.6

We conduct extensive experiments to decide the best configuration for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on the retrieval performances. For the 2- and 3-scale representations, the region numbers for each level are {1×1, 2×2} and {1×1, 2×2, 3×3}. For the 4-scale representation, 3 versions are used which differ in the number of regions in each scale: for "v1", "v2" and "v3", the numbers of regions are {1×1, 2×2, 3×3, 4×4}, {1×1, 2×2, 3×3, 5×5} and {1×1, 2×2, 3×3, 6×6}. Table 3 (a1)(b1)(c6) show the performances of using 2, 3 and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.

We also conduct experiments to find out whether weighting the different scales leads to improved performance. The weighting method for features from different scales is similar to that of spatial pyramid matching (Lazebnik et al., 2006): features from the coarser levels are given less weight while features from the finer levels are given more weight. Suppose the features of the different scales for an L-scale representation are $f_1, f_2, \ldots, f_L$; then the image representation $f$ is expressed as

$$f = \frac{1}{2^{L-1}} f_1 + \sum_{i=2}^{L} \frac{1}{2^{L-i+1}} f_i. \quad (4)$$

More details can be found in Lazebnik et al. (2006). Comparing the results of rows (a1) and (a2), it seems that weighting the different scales leads to better performance. But after more experiments, we find that the weighting method generally leads to inferior results as the number of scales increases; e.g., compare the result pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from traditional local feature descriptors such as SIFT, and that we should exercise caution when applying the traditional wisdom developed for SIFT to deep convolutional descriptors, as also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighting method is used in computing our final image feature representations.
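For clarity, Eq. (4) corresponds to the following sketch (shown for comparison only, since, as just discussed, the paper ultimately drops this weighting in favour of a plain sum):

```python
# Minimal sketch of the spatial-pyramid-style weighting of Eq. (4):
# f = f_1 / 2^(L-1) + sum_{i=2..L} f_i / 2^(L-i+1). Shown for comparison
# only; the paper finds it increasingly harmful as the number of scales
# grows and uses an unweighted sum instead.
import numpy as np

def weighted_scale_combination(scale_vecs):
    L = len(scale_vecs)
    f = scale_vecs[0] / 2 ** (L - 1)                   # coarsest level, smallest weight
    for i in range(2, L + 1):
        f = f + scale_vecs[i - 1] / 2 ** (L - i + 1)   # finer levels, larger weights
    return f / (np.linalg.norm(f) + 1e-12)
```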
Next, we look into the issue of overlap between different scales and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighbouring regions in either one or two scales of the pyramid (for the exact configurations of overlap in all cases in Table 3, see appendix B for the complete descriptions). From the row pairs (b1)(b3) and (c1)(c3), we can see that overlap increases the performance for full-query but slightly decreases the performance for cropped-query. But for 4-scale v3 (note the pair (c7)(c8)), we see a consistent improvement for both the full and cropped queries. We therefore decided to use overlap in levels 2 and 3 in computing our final features.

PCA and whitening. We perform PCA and whitening for the features extracted from the Oxford5k dataset using the PCA and whitening matrix learned on the Oxford5k or the Paris6k dataset, and l2-normalize these features to get the final image representations. The retrieval results for 3 groups of features (from Table 3 (b3)(c1)(c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performances.

Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in parentheses indicate the dimensionality of the features used for obtaining the corresponding results.

    Feature                                    full-query    cropped-query
    3-scale overlap, original                  64.8          60.8
    3-scale overlap, PCA on self               65.4 (80)     60.9 (112)
    3-scale overlap, PCA on Paris              70.6 (464)    67.3 (480)
    4-scale v3 overlap(s3), original           65.1          61.4
    4-scale v3 overlap(s3), PCA on self        66.9 (80)     61.9 (96)
    4-scale v3 overlap(s3), PCA on Paris       72.3 (464)    70.8 (496)
    4-scale v3 overlap(s2,s3), original        66.3          62.8
    4-scale v3 overlap(s2,s3), PCA on self     69.0 (80)     63.9 (144)
    4-scale v3 overlap(s2,s3), PCA on Paris    73.2 (496)    71.2 (448)

For all 3 groups of features, PCA and whitening on the same dataset lead to insignificant improvements for both the full and cropped queries. But after doing PCA and whitening on the Paris6k dataset, the results for both the full and cropped queries improve greatly; the improvement in the case of cropped-query is even more striking. For example, for the third feature group, the improvements are 10.4% and 13.4% for the full and cropped queries. It should also be noted that, as the number of principal components retained increases, the performances of "PCA on self" and "PCA on Paris" differ greatly. As shown in Figure 3, the performance of the former peaks at a relatively low dimensionality (around 100) and then begins to decrease, while for the latter, the performance increases as the number of principal components gets larger and then plateaus.

[Figure 3: The number of principal components retained vs. mAP, for full and cropped queries using the PCA and whitening matrices learned from Oxford5k itself and from Paris6k, denoted as "full-self", "full-paris" and "crop-self", "crop-paris".]

Do the above results mean that we should always compute the PCA and whitening matrix on a dataset other than the query dataset itself? The short answer is no. We find that for UKB, learning the PCA and whitening matrix on the Oxford5k dataset gives inferior results compared to learning it on UKB itself (about a 2% drop in accuracy). This may be due to the large differences between the images of the two datasets: the Oxford5k dataset mainly contains images of buildings, while the images in UKB are mainly of small indoor objects. We therefore recommend learning the PCA and whitening matrix on a similar dataset to achieve good performances.
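A minimal sketch of this post-processing step, with the PCA and whitening parameters fitted on an auxiliary dataset (e.g. Paris6k) and applied to the query dataset's descriptors (e.g. Oxford5k), might look as follows (our illustration, not the authors' code):

```python
# Minimal sketch of PCA + whitening learned on an auxiliary dataset and
# applied to another, followed by re-l2-normalization, as recommended above.
import numpy as np

def fit_pca_whitening(X, dim):
    # X: (n, D) descriptors from the auxiliary dataset (e.g. Paris6k)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:dim]             # keep the top-`dim` components
    P = eigvec[:, order] / np.sqrt(eigval[order] + 1e-12)
    return mean, P

def apply_pca_whitening(X, mean, P):
    Y = (X - mean) @ P                                 # project and whiten
    return Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
```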
5.3 COMPARISON WITH OTHER METHODS

Based on the previous experimental results and our analysis of the different impacting factors on the retrieval performances, we propose a new multi-scale image feature representation. For a given image in the dataset, the whole process of image feature representation is divided into two steps. First, the input image is fed into the network without the resizing operation (the free way) and a 4-scale feature representation is built on top of the feature maps of layer conv5_4. During the multi-scale representation step, max-pooling of feature maps is used, and regional vectors from the same scale are added together and l2-normalized. After that, features from different scales are summed and l2-normalized again. The second step involves applying the PCA and whitening operations to the features from the first step. The PCA and whitening matrix used is learned either from a different dataset or from the same one: specifically, for Oxford5k and Oxford105k it is learned on Paris6k, while for Paris6k and UKB it is learned on Oxford5k and UKB respectively. The final PCA-and-whitened image features are used for reporting our method's performances.

Layer ensemble. Inspired by previous work on model ensembles to boost classification performances (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performances. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from different layers (these weights sum to 1 so that the overall similarity score between two images remains in the range [0, 1]). We have evaluated various combinations of layers and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation, as the output feature maps are already very small. The fc6-conv features are compressed to low-dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries, respectively, showing a large improvement over previous methods. This suggests that the features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.
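The score fusion itself reduces to a weighted sum of per-layer cosine similarities; a minimal sketch follows (the 0.5/0.5 weighting is our placeholder, since the paper states the weights sum to 1 but does not report the tuned values):

```python
# Minimal sketch of the layer ensemble: fuse cosine-similarity scores from
# the conv5_4 and fc6-conv descriptors with weights summing to 1. The 0.5
# default is our placeholder; the paper does not report its tuned weights.
import numpy as np

def ensemble_scores(q_conv, X_conv, q_fc, X_fc, w_conv=0.5):
    # all descriptors are assumed l2-normalized, so dot product = cosine similarity
    return w_conv * (X_conv @ q_conv) + (1.0 - w_conv) * (X_fc @ q_fc)

def retrieve(q_conv, X_conv, q_fc, X_fc):
    scores = ensemble_scores(q_conv, X_conv, q_fc, X_fc)
    return np.argsort(-scores)            # database indices, best match first
```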
Table 5: Comparison with state-of-the-art methods. "single" means multi-scale features from a single layer (conv5_4) are used. "single, compression" uses the same features but compresses them to get the best performances. "layer ensemble" combines the similarity scores from layers conv5_4 and fc6-conv; the dimensionality of the combined feature is set to 1024 for compactness considerations. All our methods use PCA and whitening. Entries are full / cropped mAP (%); the UKB column is the average number of same-object images in the top-4 results.

    method                             D      Oxford5k        Paris6k        Oxford105k     UKB
    Jégou & Zisserman (2014)           128    -    / 43.3     -    / -       -    / 35.3    3.40
    Arandjelović & Zisserman (2012)    128    -    / 44.8     -    / -       -    / 37.4    -
    Jégou & Zisserman (2014)           1024   -    / 56.0     -    / -       -    / 50.2    3.51
    Razavian et al. (2014b)            256    53.3 / -        67.0 / -       48.9 / -       3.38
    Babenko et al. (2014)              512    55.7 / -        -    / -       52.2 / -       3.56
    Babenko & Lempitsky (2015)         256    58.9 / 53.1     -    / -       57.8 / 50.1    3.65
    Arandjelović et al. (2016)         256    62.5 / 63.5     72.0 / 73.5    -    / -       -
    Tolias et al. (2015)               512    -    / 66.8     -    / 83.0    -    / 61.6    -
    ours (single)                      512    73.0 / 70.6     82.0 / 83.3    68.9 / 65.3    3.75
    ours (single, compression)         -      73.2 / 71.2     83.0 / 84.0    68.9 / 65.8    3.76
    ours (layer ensemble)              1024   75.6 / 73.7     85.7 / 85.9    71.6 / 69.2    3.81

Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelović & Zisserman, 2012). The results are shown in Table 5. On all the datasets and in the different scenarios (full or cropped), our method achieves the best performance at comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%.

6 CONCLUSION

In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performance of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performance over previous methods on four datasets. When combined with the "layer ensemble" technique, our method achieves further improvements. Overall, we have provided a viable and efficient solution for applying CNNs in an unsupervised way to datasets with a relatively small number of images.
By5gfegEg
An outdated method with misleading claims.
3: Clear rejection
This paper explores different strategies for instance-level image retrieval with deep CNNs. The approach consists of extracting features from a network pre-trained for image classification (e.g. VGG) and post-processing them for image retrieval. In other words, the network is off-the-shelf and solely acts as a feature extractor. The post-processing strategies are borrowed from traditional retrieval pipelines relying on hand-crafted features (e.g. SIFT + Fisher Vectors), denoted by the authors as "traditional wisdom". Specifically, the authors examine where to extract features in the network (i.e., features are neuron activations of a convolutional layer), which type of feature aggregation and normalization performs best, whether resizing images helps, whether combining multiple scales helps, and so on.

While this type of experimental study is reasonable and well motivated, it suffers from a huge problem: it "ignores" 2 major recent works that are in direct contradiction with many claims of the paper ([a] "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. and [b] "CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples" by Radenović et al., both ECCV'16 papers). These works have shown that training for retrieval can be achieved with a siamese architecture and have demonstrated outstanding performance. As a result, many claims and findings of the paper are either outdated, questionable or just wrong. Here are some of the misleading claims:

- "Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years." Until [a] (not cited), the state of the art was still largely dominated by methods based on sparse invariant features (see the last table in [a]).

- "the proposed method [...] outperforms the state-of-the-art methods on four typical datasets". That is not true, for the same reasons as above, and also because the state of the art is now dominated by [a] and [b].

- "Also in situations where a large numbers of training samples are not available, instance retrieval using unsupervised method is still preferable and may be the only option." This is a questionable opinion. The method exposed in "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. outperforms the state of the art on the UKB dataset (3.84 without QE or DBA) even though it was trained for landmark retrieval and not objects, i.e. in a different retrieval context. This demonstrates that in spite of insufficient training data, training is still possible and beneficial.

- Finally, most findings are not even new or surprising (e.g., aggregating several regions in a multi-scale manner was already achieved by Tolias et al., etc.). So the interest of the paper is limited overall.

In addition, there are some problems in the experiments. For instance, the tuning experiments are only conducted on the Oxford dataset and using a single network (VGG-19), whereas it is not clear whether these conditions are representative of all datasets and all networks (it is well known that the Oxford dataset behaves very differently from the Holidays dataset, for instance). In addition, tuning is performed very aggressively, making it look like the authors are tuning on the test set (e.g., see Table 3).

To conclude, the paper is one year too late with respect to recent developments in the state of the art.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
SJNDWNOlg
ICLR.cc/2017/conference
2017
What Is the Best Practice for CNNs Applied to Visual Instance Retrieval?
["Jiedong Hao", "Jing Dong", "Wei Wang", "Tieniu Tan"]
Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as feature representation of a particular image region. Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of features are still not explored thoroughly. There are much less discussion about the best combination of them. The main contribution of our paper is the thorough evaluations of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.
["Computer vision", "Deep learning"]
ABSTRACTPrevious work has shown that feature maps of deep convolutional neural networks(CNNs) can be interpreted as feature representation of a particular image region.Features aggregated from these feature maps have been exploited for image re-trieval tasks and achieved state-of-the-art performances in recent years. The keyto the success of such methods is the feature representation. However, the differentfactors that impact the effectiveness of features are still not explored thoroughly.There are much less discussion about the best combination of them.The main contribution of our paper is the thorough evaluations of the various fac-tors that affect the discriminative ability of the features extracted from CNNs.Based on the evaluation results, we also identify the best choices for differentfactors and propose a new multi-scale image feature representation method to en-code the image effectively. Finally, we show that the proposed method generaliseswell and outperforms the state-of-the-art methods on four typical datasets used forvisual instance retrieval.1 I NTRODUCTIONImage retrieval is an important problem both for academic research and for industrial applications.Although it has been studied for many years (Sivic & Zisserman, 2003; Philbin et al., 2007; Toliaset al., 2015), it is still a challenging task. Generally, image retrieval is divided into two groups. Thefirst one is the category-level image retrieval (Sharma & Schiele, 2015), in which an image in thedataset is deemed to be similar to the query image if they share the same class or they are similar inshape and local structures. The other group is the instance-level image retrieval (Tolias et al., 2015),in which an image is considered to match the query if they contain the same object or the samescene. The instance-level image retrieval is harder in that the retrieval method need to encode thelocal and detailed information in order to tell two images apart, e.g., the algorithm should be ableto detect the differences between the Eiffel Tower and other steel towers although they have similarshapes. In this paper, we focus on the instance-level image retrieval.Traditionally, visual instance retrieval is mainly addressed by the BoF (bag of features) based meth-ods using the local feature descriptors such as SIFT (Lowe, 2004). In order to boost the retrievalperformances, post-processing techniques such as query expansion (Chum et al., 2007) and spatialverification (Philbin et al., 2007) are also employed.With the decisive victory (Krizhevsky et al., 2012) over traditional models in the ImageNet (Rus-sakovsky et al., 2015) image classification challenge, convolutional neural networks (Lecun et al.,1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al.,2015; Shaoqing Ren, 2015), semantic segmentation (Dai et al., 2016) and even image style trans-fer (Gatys et al., 2016). Networks trained on the Imagenet classification task can generalize quitewell to other tasks, which are either used off-the-shelf (Razavian et al., 2014a) or fine-tuned on thetask-specific datasets (Azizpour et al., 2014; Long et al., 2015). Inspired by all these, researchersin the field of image retrieval also shift their interest to the CNNs. 
Their experiments have shownpromising and surprising results (Babenko et al., 2014; Razavian et al., 2014c; Tolias et al., 2015),which are on par with or surpass the performances of conventional methods like BoF and VLAD(vector of locally aggregated descriptors) (J ́egou et al., 2010; Arandjelovi ́c & Zisserman, 2013) .1Under review as a conference paper at ICLR 2017Despite all these previous advances (Babenko et al., 2014; Babenko & Lempitsky, 2015; Toliaset al., 2015) on using CNNs for image feature representation, the underlying factors that contributeto the success of off-the-shelf CNNs on the image retrieval tasks are still largely unclear and un-explored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or thefully-connected layer? What is the best way to represent the multi-scale information of an image?Clarifying these questions will help us advance a further step towards building a more robust andaccurate retrieval system. Also in situations where a large numbers of training samples are not avail-able, instance retrieval using unsupervised method is still preferable and may be the only option.In this paper, we aim to answer these questions and make three novel contributions. Unlike pre-vious papers, we explicitly choose five factors to study the image representations based on CNNsand conduct extensive experiments to evaluate their impacts on the retrieval performances. We alsogive detailed analysis on these factors and give our recommendations for combining them. Dur-ing experiments, we borrow wisdoms from literatures and evaluate their usefulness, but find thatthey are not as effective as some of the simpler design choices. Second, by combining the insightsobtained during the individual experiments, we are able to propose a new multi-scale image rep-resentation, which is compact yet effective. Finally, we evaluate our method on four challengingdatasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our methodis generally applicable and outperforms all previous methods on compact image representations bya large margin.2 R ELATED WORKMulti-scale image representation . Lazebnik et al. (2006) propose the spatial pyramid matchingapproach to encode the spatial information using BoF based methods. They represent an image us-ing a pyramid of several levels or scales. Features from different scales are combined to form theimage representation in such a way that coarser levels get less weight while finer levels get moreweight. Their argument is that matches found in coarser levels may involve increasingly dissimilarimage features. In our paper, we also explore the multi-scale paradigm in the same spirit using theconvolutional feature maps as the local descriptors. We find that the deep features from the convolu-tional feature maps are distinct from the traditional descriptors: the weighted sum of different levelof features shows no superior performances than a simple summation of them. Kaiming et al. (2014)devise an approach called SPP (spatial pyramid pooling). In SPP, feature maps of the last convo-lutional layer are divided into a 3 or 4 scale pyramid. First the regional features in each scale areconcatenated, then the scale-level features are concatenated to a fixed length vector to be forwardedto the next fully-connected layers. 
We find that this strategy is ineffective for unsupervised instanceretrieval, leading to inferior performances compared to other simple combination methods (see thepart about multi-scale representation in section 5.2 for more details.).Image representation using off-the-shelf CNNs . Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image in which VLAD is used to encode the level2 and level 3 features. Then features from different scales are PCA-compressed and concatenatedto form the image features. This method is rather complicated and time-consuming. At the sametime, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the Imagenet 1000-classclassification task and retrain the network on task-related dataset. The retraining procedure gives aboost to the retrieval performances. Instead of using the output of the fully-connected layers as theimage feature representations, Babenko & Lempitsky (2015) use the output feature maps of last con-volutional layer to compute the image features. Recently, instead of sum-pooling the convolutionalfeatures, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scalemethod, called R-MAC (regional maximum activation of convolutions), further improves the pre-vious results on four common instance retrieval datasets. Our work differs from these papers inthat we explicitly explore the various factors that underpin the success of unsupervised instance re-trieval, which have not been fully explored and analysed. By carefully choosing the different settingfor each factor and combining them in a complementary way, we show that a large improvement canbe achieved without additional cost.2Under review as a conference paper at ICLR 20173 I MPACTING FACTORSWhen we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural questionis: what kind of design choices should we make in order to make full use of the representationalpower of existing models? In this section, we summarize the five factors that may greatly impactthe performance of the final image retrieval system. In section 5.2, we will show our experimentalresults on each key factor. Before we delve into the impacting factors, first we will give a briefintroduction about how to represent an image using the activation feature maps of a certain layer.3.1 CNN F EATURES FOR INSTANCE RETRIEVALIn this paper, we are mainly interested in extracting compact and discriminative image features usingthe off-the-shelf CNNs in an efficient way. For a given image I, we simply subtract the mean valueof the RGB channels from the original image and do not do other sophisticated preprocessing. Thenthe image is fed into the convolutional network and goes through a series of convolutions, non-linearactivations and pooling operations. The feature activation maps of a certain layer can be interpretedas the raw image features, based on which we build the final image features. These feature mapsform a tensor of size KHW, where Kis the number of feature channels, and HandWareheight and width of a feature map. Each feature map represents a specific pattern which encodesa small part of information about the original image. 
If we represent the set of feature maps asF=fFig; i= 1;2; : : : ; K , where Fiis the ithactivation feature map, then the most simple imagefeature is formulated as:f= [f1; f2; : : : ; f i; : : : ; f K]T: (1)In the above equation 1, fiis obtained by applying the feature aggregation method (see section 3.2)over the ithfeature map Fi. Throughout this paper, we use feature maps after the non-linear acti-vations (ReLU) so that the elements in each feature map are all non-negative. We also experimentwith feature maps prior to ReLU, but find that they lead to inferior performances. After the imagefeature representation is obtained, post-processing techniques such as PCA and whitening can befurther applied.3.2 I MPACTING FACTORS ON PERFORMANCEFeature aggregation and normalization. After the feature maps of a certain layer are obtained,it is still challenging to aggregate the 3-dimensional feature maps to get compact vector represen-tations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by l2-normalization. Sum-pooling over a particular featuremapFiis expressed asfi=HXm=1WXn=1Fi(m; n); i2f1;2; : : : ; Kg; (2)while max-pooling is given byfi= maxm;nFi(m; n); (3)where m; n are all the possible values over the spatial coordinate of size HW. In this paper,for the first time, different combinations of aggregation and normalization methods ( l2andl1in themanner of RootSIFT (Arandjelovi ́c & Zisserman, 2012)) are evaluated and their results are reported.Output layer selection. Zeiler & Fergus (2014) has shown that image features aggregated fromthe feature activation maps of certain layers have interpretable semantic meanings. Gong et al.(2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain theimage features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output featuremaps of the last convolutional layer. But these choices are somewhat subjective. In this paper, weextract dataset image features from the output feature maps of different layers and compare theirretrieval performances. Based on the finding in this experiment, we choose the best-performinglayer and also come up with a layer ensemble approach which outperforms state-of-the-art methods(see section 5.3).Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan& Zisserman, 2014) all require that the input images have fixed size. In order to meet this require-ment, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input3Under review as a conference paper at ICLR 2017(a) level 1 (b) level 2 (c) level 3Figure 1: An illustration of multi-scale representation of an image. The whole image is divided into 3levels from the coarsest (level 1) to the finest (level 3). At each level, the image is divided into different numberof equal-sized regions.images to the fixed size. We postulate that the resizing operation may lead to the distortion of im-portant information about the objects in the natural images. Ultimately, this kind of operation mayhurt the discriminative power of image features extracted from the network, thus degrading the re-trieval performances. For the task of image retrieval, we think it is best to keep the images theiroriginal sizes and feed them directly to the network whenever possible. 
In this paper, three imageresizing strategies are explored:• Both the height and width of the dataset images are set to the same fixed value (denoted astwo-fixed ).• The minimum of each dataset image’s size is set to a fixed value. (The aspect ratio of theoriginal image is kept.) (denoted as one-fixed ).• The images are kept their original sizes. (denoted as free).Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004),the feature vector extracted from the deep convolutional networks for an image is a global descriptorwhich encodes the holistic information. When used for image retrieval, this kind of features stilllack the detailed and local information desired to accurately match two images. Inspired by spatialpyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibilityof applying this powerful method to obtain discriminative image features. An image is representedby aL-level pyramid, and at each level, the image is divided evenly into several overlapping ornon-overlapping regions. The vector representations of these small regions are computed, then theregional vectors are combined to form the image feature vectors. The single scale representation ofan image is just a special case of the multi-scale method in which the number of level Lequals 1.Figure 1 shows an example of 3level representations of an image. The time cost of re-feeding thosesmall regions into the network to compute the regional vectors would be huge, thus unacceptablefor instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), weassume a linear projection between the original image regions and the regions in the feature mapsof a certain layer. Then the regional feature vectors can be efficiently computed without re-feedingthe corresponding image regions. In section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored and their retrieval performances are reported andanalysed.PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method forreducing the dimensionality of feature vectors and decorrelating the feature elements. Previouswork (Babenko et al., 2014; J ́egou et al., 2010) has shown evidences that PCA and whitened featurescan actually boost the performances of image retrieval. In this paper, we further investigate theusefulness of PCA and whitening within our pipeline and give some recommendations.4Under review as a conference paper at ICLR 20174 I MPLEMENTATIONWe use the open source deep learning framework Caffe (Jia et al., 2014) for our whole experiments.The aim of this research is to investigate the most effective ways to exploit the feature activations ofexisting deep convolutional models. Based on past practices for networks to go deeper (Krizhevskyet al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration formoderate computational cost, and also the results from Tolias et al. (2015) that deeper networks workbetter than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman,2014) trained on ImageNet as our model.Network transformation . The original VGG-19 network only accepts an image of fixed size ( 224224), which is not the optimal choice when extracting image features for retrieval tasks. 
5 EXPERIMENTS

In this section, we first introduce the datasets used and the evaluation metrics. Then we report our experimental results for the different impacting factors and give a detailed analysis. In the last part, we show the performance of our method when all these impacting factors are taken into account, and compare our method with the state-of-the-art methods on four datasets.

5.1 DATASETS AND EVALUATION METRICS

The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries are provided, each having 5 queries with their ground-truth relevant image lists. For each query, a bounding box annotation is also provided to denote the query region. During experiments, we report results using the full query images (denoted as full-query) and the image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.

The Paris6k dataset (Philbin et al., 2008) includes 6412 images [1] from Flickr which contain 11 landmark buildings and general scenes from Paris. Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground-truth bounding boxes for each query are provided. The performance is reported as mAP over the 55 queries.

[1] Following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images.

The Oxford105k dataset [2] contains the original Oxford5k dataset and an additional 100,000 images (Philbin et al., 2007) from Flickr. The 100,000 images are disjoint from the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.

[2] The image named "portrait_000801.jpg" was corrupted and manually removed from this dataset.

The UKB dataset (Nistér & Stewénius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variations in orientation, scale, lighting and shooting angles. During experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.
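Since mAP is the headline metric on three of the four datasets, a small reference implementation may be useful. This is a generic sketch of average precision over a ranked list with our own function names, not the official Oxford/Paris evaluation code; in particular it omits the official scripts' handling of "junk" images.

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: ranked_ids is the retrieval order, relevant_ids the ground truth."""
    relevant = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this recall point
    return precision_sum / max(len(relevant), 1)

def mean_average_precision(all_rankings, all_relevant):
    aps = [average_precision(r, g) for r, g in zip(all_rankings, all_relevant)]
    return sum(aps) / len(aps)

# Toy example: relevant images "a" and "c" retrieved at ranks 1 and 3
print(average_precision(["a", "b", "c", "d"], ["a", "c"]))  # (1/1 + 2/3) / 2 ~= 0.833
```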
5.2 RESULTS AND DISCUSSION

In this section, we report the results of the experiments on the different factors and analyse their particular impacts. The experiments in this section are conducted on the Oxford5k dataset.

Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods ($l_2$ and $l_1$) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1.

Table 1: Comparison between different combinations of feature aggregation and normalization methods.

Method   full-query   cropped-query
max-l1   52.4         48.0
sum-l2   58.0         52.6
sum-l1   60.3         56.3
max-l2   60.1         53.5

Sum-pooling followed by $l_1$ normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after a preliminary experiment with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of the images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.

Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e. pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.

Figure 2 shows the retrieval performances of the image features corresponding to the different layers. The retrieval performances for both the full and cropped queries increase as we move from the lower layer conv3_3 to the higher layers, plateau at layers conv5_4 and fc6-conv, and then begin to decrease at fc7-conv. The results show that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meaning of the object in the image, rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images. The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meaning of the image. Based on these observations and the requirement of keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality 512, compared to 4096 for layer fc6-conv).

[Figure 2: Performance comparison between different layers (mAP for full-query and cropped-query, from conv3_3 through fc7-conv). This experiment is conducted using the free input image size.]

Image resizing. We experiment with the 3 image resizing strategies detailed in section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies.

Table 2: Comparison between different image resizing strategies. The numbers in parentheses denote the sizes at which the maximum mAPs are achieved.

Method      full-query   cropped-query
two-fixed   55.5 (864)   38.7 (896)
one-fixed   59.0 (800)   39.3 (737)
free        58.0         52.6

As shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case. This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and undistorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images at their original sizes for instance retrieval tasks.
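The three strategies in Table 2 are easy to state in code. The following PIL sketch uses our own function name and defaults; the sizes 864 and 800 quoted above are grid-search optima on Oxford5k, not universal constants.

```python
from PIL import Image

def resize_for_network(img: Image.Image, strategy: str, size: int = 800) -> Image.Image:
    if strategy == "two-fixed":   # both sides forced to `size`; aspect ratio is lost
        return img.resize((size, size), Image.BILINEAR)
    if strategy == "one-fixed":   # smaller side set to `size`; aspect ratio kept
        w, h = img.size
        scale = size / min(w, h)
        return img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    return img                    # "free": keep the original size

img = Image.new("RGB", (1024, 768))
print(resize_for_network(img, "one-fixed", 800).size)  # (1067, 800)
```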
The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and $l_2$-normalized to form the scale-level feature vectors. Features from different scales are then combined and $l_2$-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., the region-level as well as the scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that both methods lead to inferior results. The performance drop for the first method in the case of cropped-query can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) also leads to longer running times. Considering all this, we do not use concatenation of features in the following experiments.

Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in a level (see Figure 1) have some overlapping areas; "s2" and "s3" mean that overlap occurs in level 2 or 3. "weighting" indicates whether the features from each level are added with the same weight or different weights. "version" denotes the different choices of the number of regions in each scale.

       scale   overlap   weighting   version   full-query   cropped-query
(a1)   2       none      no          -         63.5         59.0
(a2)   2       none      yes         -         63.9         61.0
(b1)   3       none      no          -         64.2         60.9
(b2)   3       none      yes         -         62.6         61.0
(b3)   3       s2        no          -         64.8         60.8
(c1)   4       s3        no          v1        65.1         61.4
(c2)   4       s3        yes         v1        64.8         60.7
(c3)   4       s2,s3     no          v1        65.5         60.8
(c4)   4       s2,s3     no          v2        65.9         61.5
(c5)   4       s2,s3     yes         v2        65.4         61.2
(c6)   4       none      no          v3        64.5         61.3
(c7)   4       s3        no          v3        65.8         62.2
(c8)   4       s2,s3     no          v3        66.3         62.6

We conduct extensive experiments to decide the best configuration for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on the retrieval performances. For the 2- and 3-scale representations, the region numbers for each level are {1×1, 2×2} and {1×1, 2×2, 3×3}, respectively. For the 4-scale representation, 3 versions are used which differ in the number of regions at each scale: for "v1", "v2" and "v3", the numbers of regions are {1×1, 2×2, 3×3, 4×4}, {1×1, 2×2, 3×3, 5×5} and {1×1, 2×2, 3×3, 6×6}. Rows (a1), (b1) and (c6) of Table 3 show the performances of using 2, 3 and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.

We also conduct experiments to determine whether weighting the different scales leads to improved performance. The weighting method for features from different scales follows the manner of spatial pyramid matching (Lazebnik et al., 2006): features from coarser levels are given less weight while features from finer levels are given more weight. Suppose the features of the different scales for an L-scale representation are $f_1, f_2, \ldots, f_L$; then the image representation $f$ is expressed as

$f = \frac{1}{2^{L-1}} f_1 + \sum_{i=2}^{L} \frac{1}{2^{L-i+1}} f_i.$   (4)

More details can be found in Lazebnik et al. (2006). Comparing the results of rows (a1) and (a2), it seems that weighting the different scales leads to better performance. But after more experiments, we find that the weighting method generally leads to inferior results as the number of scales increases; e.g., compare the result pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from traditional local feature descriptors such as SIFT, and that we should exercise caution when applying the traditional wisdom developed for SIFT to deep convolutional descriptors, as is also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighting is used in computing our final image feature representations.
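The region-level computation that this scheme relies on can be sketched compactly: regions are cut directly out of the conv feature map (the assumed linear projection between image regions and feature-map regions), max-pooled as in Equation (3), summed within a scale, and the scale vectors summed and $l_2$-normalized. This is our own minimal NumPy rendering of the description above, using the non-overlapping v3 grids for brevity; whether each regional vector is individually $l_2$-normalized before summation is not spelled out in the text, so the sketch normalizes only the sums.

```python
import numpy as np

def l2n(v):
    return v / (np.linalg.norm(v) + 1e-12)

def multiscale_feature(feature_maps, grids=(1, 2, 3, 6)):
    """feature_maps: (K, H, W) conv activations; grids: regions per side at each level."""
    K, H, W = feature_maps.shape
    scale_vectors = []
    for g in grids:
        # Non-overlapping g x g grid over the feature map (regions may differ by one cell).
        hs = np.linspace(0, H, g + 1, dtype=int)
        ws = np.linspace(0, W, g + 1, dtype=int)
        regional_sum = np.zeros(K)
        for i in range(g):
            for j in range(g):
                region = feature_maps[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                regional_sum += region.reshape(K, -1).max(axis=1)  # max-pool, Eq. (3)
        scale_vectors.append(l2n(regional_sum))  # scale-level vector
    return l2n(np.sum(scale_vectors, axis=0))    # combine scales

maps = np.random.rand(512, 24, 30)
print(multiscale_feature(maps).shape)  # (512,)
```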
Next, we look into the issue of overlap between neighbouring regions and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighbouring regions in either one or two levels of the pyramid (for the exact overlap configurations of all the cases in Table 3, see appendix B). From the row pairs (b1)(b3) and (c1)(c3), we can see that overlap increases the performance for full-query but slightly decreases the performance for cropped-query. But for 4-scale v3 (note the pair (c7)(c8)), we see a consistent improvement for both the full and cropped queries. We therefore decided to use overlap in levels 2 and 3 when computing our final features.

PCA and whitening. We perform PCA and whitening on the features extracted from the Oxford5k dataset, using the PCA and whitening matrix learned on either the Oxford5k or the Paris6k dataset, and $l_2$-normalize these features to get the final image representations.

Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed with the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in parentheses indicate the dimensionality of the features used for obtaining the corresponding results.

Feature                                    full-query   cropped-query
3-scale overlap, original                  64.8         60.8
3-scale overlap, PCA on self               65.4 (80)    60.9 (112)
3-scale overlap, PCA on Paris              70.6 (464)   67.3 (480)
4-scale v3 overlap(s3), original           65.1         61.4
4-scale v3 overlap(s3), PCA on self        66.9 (80)    61.9 (96)
4-scale v3 overlap(s3), PCA on Paris       72.3 (464)   70.8 (496)
4-scale v3 overlap(s2,s3), original        66.3         62.8
4-scale v3 overlap(s2,s3), PCA on self     69.0 (80)    63.9 (144)
4-scale v3 overlap(s2,s3), PCA on Paris    73.2 (496)   71.2 (448)

The retrieval results for the 3 groups of features (from Table 3 rows (b3), (c1) and (c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performances.
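The post-processing step is plain PCA with whitening followed by an $l_2$ renormalization. The sketch below uses scikit-learn and our own function names; the key detail it illustrates is that the transform can be fit on one dataset (e.g. Paris6k) and applied to another (e.g. Oxford5k), as in the "PCA on Paris" rows of Table 4.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_whitening(train_feats, n_components=512):
    """Learn the PCA + whitening transform on one dataset (rows = image features)."""
    pca = PCA(n_components=n_components, whiten=True)
    pca.fit(train_feats)
    return pca

def apply_pca_whitening(pca, feats):
    out = pca.transform(feats)
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-12)  # final l2 norm

# e.g. learn on Paris6k features, apply to Oxford5k features ("PCA on Paris")
paris_feats = np.random.rand(6392, 512)
oxford_feats = np.random.rand(5062, 512)
pca = fit_pca_whitening(paris_feats, n_components=496)
oxford_final = apply_pca_whitening(pca, oxford_feats)
```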
For all 3 groups of features, PCA and whitening on the same dataset lead to insignificant improvements, both in the full- and cropped-query cases. But after doing PCA and whitening on the Paris6k dataset, the results for both the full and cropped queries improve greatly; the improvement for the cropped-query case is even more striking. For example, for the third feature group, the improvements are 10.4% and 13.4% for the full and cropped queries. It should also be noted that as the number of principal components retained increases, the performances of "PCA on self" and "PCA on Paris" differ greatly. As shown in Figure 3, the performance of the former peaks at a relatively low dimensionality (around 100) and then begins to decrease, while for the latter, the performance increases as the number of principal components gets larger and then plateaus.

[Figure 3: The number of principal components retained vs. mAP. Results are shown for full and cropped queries using the PCA and whitening matrices learned from Oxford5k itself and from Paris6k, denoted as "full-self", "full-paris", "crop-self" and "crop-paris".]

Do the above results mean that we should always compute the PCA and whitening matrix on a dataset other than the query dataset itself? The short answer is no. We find that for UKB, learning the PCA and whitening matrix on the Oxford5k dataset gives inferior results compared to learning it on UKB itself (about a 2% drop in accuracy). This may be due to the large differences between the images of the two datasets: the Oxford5k dataset mainly contains images of buildings, while the images in UKB are mainly of small indoor objects. We therefore recommend learning the PCA and whitening matrix on a similar dataset to achieve good performances.

5.3 COMPARISON WITH OTHER METHODS

Based on the previous experimental results and our analysis of the different factors impacting retrieval performance, we propose a new multi-scale image feature representation. For a given image in the dataset, the whole process of image feature representation is divided into two steps. First, the input image is fed into the network without the resizing operation (the free way) and a 4-scale feature representation is built on top of the feature maps of layer conv5_4. In the multi-scale representation step, max-pooling of the feature maps is used, and regional vectors from the same scale are added together and $l_2$-normalized. After that, features from different scales are summed and $l_2$-normalized again. The second step applies the PCA and whitening operations to the features from the first step. The PCA and whitening matrix used is learned either on a different dataset or on the same one: specifically, for Oxford5k and Oxford105k it is learned on Paris6k, while for Paris6k and UKB it is learned on Oxford5k and on UKB itself, respectively. The final PCA-and-whitened image features are used for reporting our method's performances.
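With the final features in hand, retrieval itself reduces to a nearest-neighbour ranking; since every representation above ends with $l_2$ normalization, a dot product equals cosine similarity. A short sketch with our own variable names:

```python
import numpy as np

def rank_database(query_feat, db_feats):
    """query_feat: (D,) l2-normalized; db_feats: (N, D) with l2-normalized rows.

    Returns database indices sorted from most to least similar, plus the scores.
    """
    scores = db_feats @ query_feat          # cosine similarity via dot product
    return np.argsort(-scores), scores

db = np.random.rand(5062, 496)
db /= np.linalg.norm(db, axis=1, keepdims=True)
q = db[0]                                   # querying with a database image
order, scores = rank_database(q, db)
print(order[:5])                            # top-5 retrieved indices
```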
Layer ensemble. Inspired by previous work on model ensembles for boosting classification performance (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performance. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from the different layers (the weights sum to 1 so that the overall similarity score between two images remains in the range [0, 1]). We have evaluated various combinations of layers and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation, as the output feature maps are already very small. The fc6-conv features are compressed to low-dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively, a large improvement over previous methods. This suggests that the features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.

Table 5: Comparison with state-of-the-art methods. "single" means that multi-scale features from a single layer (conv5_4) are used. "single, compression" uses the same features but compresses them to get the best performances. "layer ensemble" combines the similarity scores from layers conv5_4 and fc6-conv; the dimensionality of the combined feature is set to 1024 for compactness considerations. All our methods use PCA and whitening. For Oxford5k, Paris6k and Oxford105k, mAP (%) is reported for full and cropped queries; for UKB, the top-4 score is reported.

method                             D      Oxford5k       Paris6k        Oxford105k     UKB
                                          full  cropped  full  cropped  full  cropped
Jégou & Zisserman (2014)           128    -     43.3     -     -        -     35.3     3.40
Arandjelović & Zisserman (2012)    128    -     44.8     -     -        -     37.4     -
Jégou & Zisserman (2014)           1024   -     56.0     -     -        -     50.2     3.51
Razavian et al. (2014b)            256    53.3  -        67.0  -        48.9  -        3.38
Babenko et al. (2014)              512    55.7  -        -     -        52.2  -        3.56
Babenko & Lempitsky (2015)         256    58.9  53.1     -     -        57.8  50.1     3.65
Arandjelović et al. (2016)         256    62.5  63.5     72.0  73.5     -     -        -
Tolias et al. (2015)               512    -     66.8     -     83.0     -     61.6     -
ours (single)                      512    73.0  70.6     82.0  83.3     68.9  65.3     3.75
ours (single, compression)         -      73.2  71.2     83.0  84.0     68.9  65.8     3.76
ours (layer ensemble)              1024   75.6  73.7     85.7  85.9     71.6  69.2     3.81

Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelović & Zisserman, 2012). The results are shown in Table 5. On all the datasets and in the different scenarios (full or cropped), our method achieves the best performance at comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over the previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%, respectively.

6 CONCLUSION

In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performance of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performance over previous methods on four datasets. When combined with the "layer ensemble" technique, our method achieves further improvements. Overall, we have provided a viable and efficient solution for applying CNNs in an unsupervised way to datasets with a relatively small number of images.
SkvcxOZVg
Not much utility in the paper
3: Clear rejection
The authors investigate how to use pretrained CNNs for retrieval and perform an extensive evaluation of the influence of various parameters. For detailed comments on everything, see the questions I posted earlier. The summary is here: I don't think we learn much from this paper. We already knew that we should use the last conv layer, we knew we should use PCA with whitening, and we knew we should use original-size images (the authors say Tolias didn't do this as they resized the images, but they did that for exactly the same reason the authors didn't evaluate on Holidays - the images are too big - so they basically used "as large as possible" image sizes, which is what this paper effectively suggests as well), etc. This paper essentially concatenates methods that people have already used and performs some more parameter tweaking to achieve the state of the art (while the tweaking is actually performed on the test set of some of the tests). The claim of state-of-the-art results is quite misleading, as it doesn't really come from a good choice of parameters but mainly from the use of the deeper VGG-19 network. Furthermore, I don't think it's sufficient to try just one network and claim these are the best practices for using CNNs for instance retrieval - what about ResNet, what about Inception? I don't know how to apply any of these conclusions to those networks, and it is unclear whether the conclusions would even hold for them. Furthermore, the parameter tweaking was done on Oxford; I really can't tell what conclusions we would get if we tuned on UKB, for example. So a more appropriate paper title would be "What are the best parameter values for VGG-19 on the Oxford/Paris benchmarks?" - I don't think this is sufficiently novel or interesting for the community.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
SJNDWNOlg
ICLR.cc/2017/conference
2017
What Is the Best Practice for CNNs Applied to Visual Instance Retrieval?
["Jiedong Hao", "Jing Dong", "Wei Wang", "Tieniu Tan"]
Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as feature representations of particular image regions. Features aggregated from these feature maps have been exploited for image retrieval tasks and have achieved state-of-the-art performances in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of these features have still not been explored thoroughly, and there has been much less discussion about the best combination of them. The main contribution of our paper is a thorough evaluation of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for the different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.
["Computer vision", "Deep learning"]
SJv4GtW4g
A paper with some good but limited and possibly slightly outdated experiments on object retrieval with CNNs
6: Marginally above acceptance threshold
The paper conducts a detailed evaluation of different CNN architectures applied to image retrieval. The authors focus on testing various architectural choices, but do not propose or compare to end-to-end learning frameworks. Technically, the contribution is clear, particularly with the promised clarifications on how multiple scales are handled in the representation. However, I am still not entirely clear whether there would be a difference in the multi-scale setting for full and cropped queries. While the paper focuses on comparing different baseline architectures for CNN-based image retrieval, several recent papers have proposed learning end-to-end representations specific to this task, with very good results (see for instance the recent work by Gordo et al., "End-to-end Learning of Deep Visual Representations for Image Retrieval"). The authors clarify that their work is orthogonal to papers such as Gordo et al., as they instead assess the performance of networks pre-trained on image classification. In fact, they also indicate that image retrieval is more difficult than image classification -- this is because it is performed using features originally trained for classification. I can partially accept this argument. However, given the results in recent papers, it is clear that end-to-end training is far superior in practice, and it is not clear that the analysis developed by the authors in this work would transfer or be useful for that case as well.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rk9eAFcxg
ICLR.cc/2017/conference
2017
Variational Recurrent Adversarial Deep Domain Adaptation
["Sanjay Purushotham", "Wilka Carvalho", "Tanachat Nilanon", "Yan Liu"]
We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between the domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first model to capture and transfer temporal latent dependencies in multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model's ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches.
["Deep learning", "Transfer Learning"]
ABSTRACT

We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first model to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model's ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches.

1 INTRODUCTION

Many real-world applications require effective machine learning algorithms that can learn invariant representations across related time-series datasets. For example, precision medicine for patients of various age groups, mobile application recommendation for users based on locations, and so on. In these examples, while the domains (i.e. age group and location) may vary, there exist common predictive patterns that can aid in inferring knowledge from one domain to another. More often than not, some domains have a significantly larger number of observations than others (e.g., respiratory failure in adults vs. children). Therefore, effective domain adaptation of time-series data is in great demand.

The general approach to tackling domain adaptation has been explored under many facets, which include reducing the domain discrepancy between the source and target domains (Ben-David et al. (2007)), instance re-weighting (Jiang & Zhai (2007)), subspace alignment (Fernando et al. (2013)), and deep learning (Tzeng et al. (2015); Ganin & Lempitsky (2014)). Many of these approaches work very well for non-sequential data but are not suitable for multivariate time-series data, as they do not usually capture the temporal dependencies present in the data. For sequential data, earlier work has successfully used dynamic Bayesian Networks (Huang & Yates (2009)) and Recurrent Neural Networks (Socher et al. (2011)) to learn latent feature representations which were domain-invariant. Unfortunately, these works were not flexible enough to model non-linear dynamics or did not explicitly capture and transfer the complex latent dependencies needed to perform domain adaptation of time-series data.

In this paper, we address this problem with a model that learns temporal latent dependencies (i.e. dependencies between the latent variables across timesteps) that can be transferred across domains that experience different distributions in their features. We draw inspiration from the Variational Recurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latent representation that captures underlying temporal latent dependencies.

*: Co-first authors

Figure 1: A Story of Temporal Dependency and Domain Invariance. (a) DNN, (b) R-DANN, (c) VRADA. t-SNE projections of the latent representations of DNN, R-DANN, and our VRADA model. We show adaptation from Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with blue circles.
From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. We can also see a story of how encoding more temporal dependency into the latent representation induces more domain-invariant representations. As models capture more underlying factors of variation, post-domain-adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation.

Motivated by the theory of domain adaptation (Ben-David et al. (2010)), we perform adversarial training on this representation similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) to make the representations invariant across domains. We call our model the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable of accomplishing unsupervised domain adaptation while transferring temporal latent dependencies for complex multivariate time-series data. Figure 1 shows an example of the domain-invariant representations learned by different deep learning models, including our VRADA model. From this figure, we can see that our model (VRADA) shows better mixing of the domain distributions than the competing models, indicating that it learns better domain-invariant representations.

In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcare time-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocol in healthcare is to build, evaluate, and deploy machine learning models for particular datasets that may perform poorly on unseen datasets with different distributions. For example, models built around patient data from particular age groups perform poorly on other age groups because the features used to train the models have different distributions across the groups (Alemayehu & Warner (2004); Lao et al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferable to the other group. Domain adaptation seems like a natural solution to this problem, as knowledge needs to be transferred across domains which share features that exhibit different distributions. (2) Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic in nature. Thus, healthcare data is a suitable platform on which to study a model which seeks to capture complex temporal representations and transfer this knowledge across domains.

The rest of the paper is structured as follows. In the following section, we briefly discuss the current state-of-the-art deep domain adaptation approaches. Afterwards, we present our model mathematically, detailing how it simultaneously learns to capture temporal latent dependencies and create domain-invariant representations. In Section 4, we compare and contrast the performance of the proposed approach with other approaches on two real-world health care datasets, and provide analysis of our domain-invariant representations.

2 RELATED WORK

Domain adaptation is a specific instance of transfer learning in which the feature spaces are shared but their marginal distributions are different. A good survey of the two has been done in several previous works (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughly studied in computer vision (Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) and natural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, the deep learning paradigm has become popular in domain adaptation (Chen et al.
(2012); Tzeng et al. (2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-linear domain-invariant representations. Here, we briefly discuss two deep domain adaptation approaches which are closely related to our proposed model. Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to create domain-invariant representations: a feature extractor that produces the data's latent representation, and an adversarial domain labeler that attempts to classify that data's domain, which helps the feature extractor produce latent representations that are domain-invariant. In Louizos et al. (2015), the authors propose the Variational Fair AutoEncoder, which uses the Variational Autoencoding architecture (Kingma & Welling (2013)) to learn latent representations where most of the information about certain known factors of variation is purged from the representation while still retaining as much information about the data as possible. While these deep learning approaches learn domain-invariant representations, they fail to capture and transfer the underlying complex temporal latent relationships from one domain to another, as they use convolutional or feed-forward neural networks, which we claim are not suitable for multivariate time-series data.

Other works such as Huang & Yates (2009); Xiao & Guo (2013) have used distributed representations for domain adaptation in NLP sequence labeling tasks. However, they either induce hidden states as latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributed representations of words using Recurrent Neural Networks (RNNs) (Socher et al. (2011)) to enable domain adaptation. These works either model the highly non-linear dynamics, as one can with an RNN, or capture the complex latent dependencies present in sequential data, as one can with DBNs, but not both. To overcome the challenges of DBNs and RNNs, the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)) was proposed recently to capture the complex relationship between the underlying hidden factors of variation and the output variables at different time-steps. The VRNN uses Variational Autoencoders (VAEs) (Kingma & Welling (2013); Goodfellow et al. (2016)) at each time-step to learn a complex relationship between the latent hidden factors across time-steps. Like the VAE, its latent variable is parametric. Combined, these things make it well-suited for multimodal sequential data such as multivariate time-series. In the following section, we discuss our approach, Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model and transfer complex domain-invariant temporal latent relationships for unsupervised domain adaptation of multivariate time-series.

Figure 2 [diagram omitted]: Block diagram of VRADA. Blue lines show the inference process, $q_e(z_t \mid x_t, z_{<t})$. Brown lines show the generation process, $p_g(x_t \mid z_t, x_{<t})$. Red lines show the recurrence process, where $h_t$ is informed by $h_{t-1}$, which is informed by $z_{t-1}$ and $x_{t-1}$. Black lines indicate classification (through $G_y$ and $G_d$).

3 VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION

In this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model for the purpose of capturing and transferring temporal latent dependencies across domains via domain-invariant representations.
First, we introduce the notations used in this paper and then discuss our VRADA model in detail.

3.1 NOTATIONS

Let us denote a multivariate variable-length time series with $N$ data samples as $\{x^i = (x_t^i)_{t=1}^{T_i}\}_{i=1}^{N}$, where $x_t^i \in \mathbb{R}^D$. (Note: in our experiments $T_i$ is the same for all data samples, but for generality we maintain $T_i$.) We denote $\{x_S^i\}_{i=1}^{n}$ as source domain data and $\{x_T^i\}_{i=n+1}^{N}$ as target domain data. We assume that each source domain data sample $x_S^i$ comes with $L$ labels $y^i \in \{0,1\}^L$ (for example, these labels may correspond to a clinical outcome such as mortality or ICD9 diagnosis codes), while the target domain has no labeled data samples. We assign a domain label $d^i \in \{0,1\}$ to each data sample to indicate if it comes from the source or target domain; $d^i$ will be used for adversarial training.

3.2 VRADA

The block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependencies between the latent random variables across time steps, the VRADA model utilizes a Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)). The VRNN effectively contains a Variational Auto-Encoder (Kingma & Welling (2013)) at every time step, all of which are conditioned on previous auto-encoders via the hidden state $h_{t-1}$ of an RNN, such as an LSTM (Hochreiter & Schmidhuber (1997)). Therefore, for each time-step $x_t^i$, we infer a latent random variable $z_t^i$ via

$z_t^i \mid x_t^i \sim \mathcal{N}(\mu_{z,t}, \mathrm{diag}(\sigma_{z,t}))$, where $[\mu_{z,t}, \sigma_{z,t}] = \varphi^{\mathrm{enc}}(\varphi^{x}(x_t^i), h_{t-1})$

with prior

$z_t^i \sim \mathcal{N}(\mu_{0,t}, \mathrm{diag}(\sigma_{0,t}))$, where $[\mu_{0,t}, \sigma_{0,t}] = \varphi^{\mathrm{prior}}(h_{t-1})$

where $\mu_{\cdot,t}, \sigma_{\cdot,t}$ denote parameters of a generating distribution, and $\varphi$ can be any highly flexible function such as a deep neural network. For each $z_t^i$, $x_t^i$ is generated via

$x_t^i \mid z_t^i \sim \mathcal{N}(\mu_{x,t}, \mathrm{diag}(\sigma_{x,t}))$, where $[\mu_{x,t}, \sigma_{x,t}] = \varphi^{\mathrm{dec}}(\varphi^{z}(z_t^i), h_{t-1})$

and learned by optimizing the VRNN objective function:

$\mathcal{L}_r(x^i; \theta_e, \theta_g) = \mathbb{E}_{q_e(z_{\le T_i}^i \mid x_{\le T_i}^i)}\Big[\sum_{t=1}^{T_i}\big(-D\big(q_e(z_t^i \mid x_t^i, z_{<t}^i)\,\|\,p(z_t^i \mid x_{<t}^i, z_{<t}^i)\big) + \log p_g(x_t^i \mid z_t^i, x_{<t}^i)\big)\Big]$

where $q_e(z_t^i \mid x_t^i, z_{<t}^i)$ is the inference model, $p(z_t^i \mid x_{<t}^i, z_{<t}^i)$ is the prior, $p_g(x_t^i \mid z_t^i, x_{<t}^i)$ is the generative model, $\theta_e$ are the parameters of the VRNN's encoder, $\theta_g$ the parameters of the VRNN's decoder, and $D(\cdot\|\cdot)$ refers to the KL-divergence. Note: $z_{\le T}$ refers to the set of all $z_t$ such that $t \le T$, likewise for $z_{<T}$. For each $x^i$, we use $\tilde{z}^i \sim q_e(z_{T_i}^i \mid x_{T_i}^i, z_{<T_i}^i)$ as our feature representation for the source domain classification task, since it captures temporal latent dependencies across the time-steps. Training the VRNN for source domain classification involves solving the following optimization:

$\min_{\theta_e,\theta_g,\theta_y} \ \frac{1}{n}\sum_{i=1}^{n} -\frac{1}{T_i}\mathcal{L}_r(x^i;\theta_e,\theta_g) + \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_y(x^i;\theta_y,\theta_e) + \lambda R(\theta_e)$   (1)

where $R(\theta_e)$ is a regularizer for the parameters of the VRNN encoder (which is also the feature extractor of VRADA) with a tuning hyperparameter $\lambda$.
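The per-time-step computation above can be read as a few standard neural modules. Below is a minimal PyTorch-style sketch of one VRNN step; it is our own illustrative reading of the equations, not the authors' code: the module names (phi_x, phi_z, enc, prior, dec), the Gaussian parametrization via log-variances, and the GRU cell (where the paper uses an LSTM) are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class VRNNStep(nn.Module):
    """One VRNN time step: infer z_t from (x_t, h_{t-1}), compute the
    prior from h_{t-1}, decode x_t from (z_t, h_{t-1}), update the state."""
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.phi_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.phi_z = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        self.enc = nn.Linear(2 * h_dim, 2 * z_dim)    # -> [mu_z, logvar_z]
        self.prior = nn.Linear(h_dim, 2 * z_dim)      # -> [mu_0, logvar_0]
        self.dec = nn.Linear(2 * h_dim, 2 * x_dim)    # -> [mu_x, logvar_x]
        self.rnn = nn.GRUCell(2 * h_dim, h_dim)       # GRU for brevity

    def forward(self, x_t, h_prev):
        fx = self.phi_x(x_t)
        mu_z, logvar_z = self.enc(torch.cat([fx, h_prev], -1)).chunk(2, -1)
        mu_0, logvar_0 = self.prior(h_prev).chunk(2, -1)
        # reparametrization trick: z = mu + sigma * eps
        z_t = mu_z + (0.5 * logvar_z).exp() * torch.randn_like(mu_z)
        fz = self.phi_z(z_t)
        mu_x, logvar_x = self.dec(torch.cat([fz, h_prev], -1)).chunk(2, -1)
        h_t = self.rnn(torch.cat([fx, fz], -1), h_prev)
        # closed-form KL( N(mu_z, var_z) || N(mu_0, var_0) ), summed over dims
        kl = 0.5 * (logvar_0 - logvar_z
                    + (logvar_z.exp() + (mu_z - mu_0) ** 2) / logvar_0.exp()
                    - 1.0).sum(-1)
        return z_t, (mu_x, logvar_x), h_t, kl
```

Summing -kl plus the Gaussian log-likelihood of x_t under (mu_x, logvar_x) over the time steps gives a Monte Carlo estimate of the objective L_r above, and the final step's z_t plays the role of the feature representation fed to G_y and G_d.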
As we are interested in achieving domain adaptation via the latent representation $\tilde{z}^i$ (i.e. to make $\tilde{z}^i$ domain-invariant), we can adversarially train the above objective function (equation 1) by employing the domain adaptation idea proposed in Ganin et al. (2016). Let $G_y(\tilde{z}^i;\theta_y)$ and $G_d(\tilde{z}^i;\theta_d)$ represent the source label classifier (to predict source labels $y^i$) and the domain label classifier (to predict domain labels $d^i$) respectively, with parameters $\theta_y$ and $\theta_d$, for a given input $\tilde{z}^i$. Here, $G_y(\cdot)$ and $G_d(\cdot)$ can be deep neural networks. Let us denote their loss functions respectively as

$\mathcal{L}_y(x^i;\theta_y,\theta_e) = \mathcal{L}_B(G_y(V_e(x^i;\theta_e);\theta_y), y^i), \qquad \mathcal{L}_d(x^i;\theta_d,\theta_e) = \mathcal{L}_B(G_d(V_e(x^i;\theta_e);\theta_d), d^i)$

where $\mathcal{L}_B$ is a classification loss such as a binary or categorical cross-entropy loss function, and $V_e(x^i;\theta_e)$ is the VRNN encoder that maps input $x^i$ to $\tilde{z}^i$.

Now, for adversarial training, we consider the following domain adaptation term as the regularizer of equation 1:

$R(\theta_e) = \max_{\theta_d}\Big[-\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_d(x^i;\theta_d,\theta_e) - \frac{1}{n'}\sum_{i=n+1}^{N}\mathcal{L}_d(x^i;\theta_d,\theta_e)\Big]$   (2)

where $n'$ is the number of target domain samples. As shown in Ganin et al. (2016), $R$ is the domain regularizer and it is derived from the empirical $\mathcal{H}$-divergence between the source domain and target domain samples (Ben-David et al. (2010)).

Combining the joint optimization problem of equations 1 and 2 leads to our VRADA model, where we minimize the source classification risk and at the same time achieve domain adaptation. Mathematically, we optimize the following complete objective function:

$E(\theta_e,\theta_g,\theta_y,\theta_d) = \frac{1}{N}\sum_{i=1}^{N} -\frac{1}{T_i}\mathcal{L}_r(x^i;\theta_e,\theta_g) + \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_y(x^i;\theta_y) - \lambda\Big(\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_d(x^i;\theta_d) + \frac{1}{n'}\sum_{i=n+1}^{N}\mathcal{L}_d(x^i;\theta_d)\Big)$   (3)

where $\lambda$ is a trade-off between optimizing on making domain-invariant representations and optimizing source classification accuracy. Our optimization involves minimization with respect to some parameters, and maximization with respect to the others, i.e., we iteratively solve the following:

$(\hat{\theta}_g, \hat{\theta}_y, \hat{\theta}_e) = \arg\min_{\theta_g,\theta_y,\theta_e} E(\theta_e,\theta_g,\theta_y,\hat{\theta}_d)$
$\hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_e,\hat{\theta}_g,\hat{\theta}_y,\theta_d)$

with the gradient updates calculated as:

$\theta_e \leftarrow \theta_e - \alpha\Big(\frac{\partial \mathcal{L}_r}{\partial \theta_e} + \frac{\partial \mathcal{L}_y}{\partial \theta_e} - \lambda\frac{\partial \mathcal{L}_d}{\partial \theta_e}\Big)$   (4)
$\theta_g \leftarrow \theta_g - \alpha\frac{\partial \mathcal{L}_r}{\partial \theta_g}$   (5)
$\theta_d \leftarrow \theta_d - \alpha\lambda\frac{\partial \mathcal{L}_d}{\partial \theta_d}$   (6)
$\theta_y \leftarrow \theta_y - \alpha\frac{\partial \mathcal{L}_y}{\partial \theta_y}$   (7)

where $\alpha$ is the learning rate. We can use stochastic gradient descent (SGD) to solve equations (5-7). To solve equation (4), we can use SGD and the gradient reversal layer (GRL) (Ganin et al. (2016)). The role of the GRL is to reverse the gradient sign while performing backpropagation. This ensures that the domain classification loss is maximized, which makes the feature representations domain-invariant.

Thus, VRADA results in learning feature representations which are domain-invariant (due to the domain regularizer $R$) and which capture temporal latent dependencies (due to optimizing the VRNN objective function $\mathcal{L}_r$). These combine to allow VRADA's discriminative power on the source domain to transfer to the target domain.
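The gradient reversal layer mentioned above is typically a one-liner. Here is a minimal PyTorch sketch of the GRL idea from Ganin et al. (2016) (our own illustration, not code from this paper): identity in the forward pass, sign-flipped and scaled gradient in the backward pass, with lam playing the role of lambda.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in
    the backward pass, so the feature extractor ascends the domain loss
    that the domain classifier descends."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = G_d(grad_reverse(z_tilde, lam)), so that
# backpropagating the domain loss trains G_d to classify domains while
# pushing the encoder to confuse it.
```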
4 EXPERIMENTS

We conduct experiments on two real-world health care datasets to answer the following questions: (a) How does our VRADA model perform when compared to the state-of-the-art domain adaptation and non-adaptation approaches? (b) How different are the domain-invariant representations learned by various domain adaptation methods? (c) How do we show that the temporal latent dependencies are transferred between domains? In the remainder of this section, we will describe the datasets, methods, and empirical results, and show visualizations to answer the above questions.

4.1 DATASET DESCRIPTION

We conduct experiments on two health care datasets: the MIMIC-III dataset and a Pediatric ICU (PICU) dataset from Children's Hospital Los Angeles.

MIMIC-III (Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admission records of 38,645 adults and 7,875 neonates. For our experiments, we extracted the following two datasets:

- Adult-AHRF dataset: To study domain adaptation for adult patients with acute hypoxemic respiratory failure (AHRF), we extracted 20 time series features (such as base excess, blood pH value, mean air pressure, PaO2, etc.) from 5527 admission records, based on Khemani et al. (2009). We grouped the patients into 4 groups/cohorts based on their age[1]: Group 2: working-age adult (20 to 45 yrs, 508 patients); Group 3: old working-age adult (46 to 65 yrs, 1888 patients); Group 4: elderly (66 to 85 yrs, 2394 patients); Group 5: old elderly (85 yrs and up, 437 patients). We treated each group as a separate domain with which we could perform domain adaptation. For each patient, we used the first 4 days after admission (with each day serving as a single time-step) as time series data for training and testing our models.

- ICD9 dataset: For this dataset we extracted 99 time series features from 19714 admission records from 4 modalities, including input-events (fluids into the patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., blood pH values, platelet count, etc.) and prescription-events (drugs prescribed by doctors, e.g., aspirin, potassium chloride, etc.). These modalities are known to be extremely useful for monitoring ICU patients. All the time series are of more than 48 hours of duration, and only the first 24 hours (after admission) of 2-hourly sampled time series data is used for training and testing our models. We use this dataset to predict the ICD9 diagnosis code categories for each patient's admission record.

Child-AHRF dataset: This is a PICU dataset which contains health records of 398 child patients with acute hypoxemic respiratory failure in the intensive care unit at Children's Hospital Los Angeles (CHLA) (Khemani et al. (2009)). Similar to Adult-AHRF, this dataset has 20 time series features collected for 4 days after ICU admission. This dataset is considered as one group (Group 1: children, age 0 to 19 yrs) and represents one domain.

4.1.1 PREDICTION AND DOMAIN ADAPTATION TASKS

Mortality Prediction: For the Adult-AHRF and Child-AHRF datasets, we are interested in predicting mortality, i.e. whether a patient dies from AHRF during their hospital stay. 20.10% of all the patients in Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e. the patients who die in hospital).

ICD9 Code Prediction: Each admission record in the MIMIC-III dataset has multiple ICD-9 diagnosis codes. We group all the occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9 dataset, we are interested in predicting these 20 ICD-9 diagnosis categories for each admission record. We treat this as a multi-task prediction problem.

Domain Adaptation Tasks: We study the unsupervised domain adaptation task (i.e. target domain labels are unavailable during training and validation) within age groups of the Adult-AHRF dataset and the ICD9 dataset, and across the Adult- and Child-AHRF datasets. For the Adult-AHRF and ICD9 datasets, we created 12 source-target domain pairs using the age groups, pairing up each domain $D_i$ with another domain $D_j$, $j \ne i$; for example, the source-target pair 2-5 was used for adapting from group 2 (working-age adult) to group 5 (old elderly). We also created 4 source-target pairs for performing domain adaptation from the 4 adult age-groups to the 1 child age-group.

4.2 METHODS AND IMPLEMENTATION DETAILS

We categorize the methods used in our main experiments into the following groups:

- Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision regressors (Adaboost), and feed-forward deep neural networks (DNN)
- Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al.
(2016)); DANN with an RNN (LSTM) as feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015))
- Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3]

[1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/
[2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx. "Conditions Originating in the Perinatal Period" is not present in the preprocessed dataset.
[3]: Codes will be publicly released soon.

In all our experiments, we conducted unsupervised domain adaptation, where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifier and feature extractors of the VRADA and R-DANN were equivalent in depth and both had the same model capacity. We also ensured that the sizes of the latent feature representations $\tilde{z}^i$ are similar for the VRADA and DANN models. The model capacity of VFAE was chosen to be similar to VRADA. All the deep domain adaptation models, including ours, had a depth of 8 (including output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of 3e-4. We set an early stopping criterion: training stops if the model does not experience a decrease in the validation loss for 20 epochs. Source domain data was split into train/validation subsets with a 70/30 ratio, and target domain data into train/validation/test subsets with a 70/15/15 ratio. In order to compare all the methods, we report AUC scores on the entire target domain set and on the test subset of the target domain data of each source-target pair.
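The optimization setup just described amounts to a standard training loop. The sketch below shows the Adam configuration and the patience-based early stopping as we read them from the text; model.loss, train_loader, and val_loss_fn are placeholder assumptions, and the 70/30 and 70/15/15 data-splitting logic is omitted.

```python
import torch

def train(model, train_loader, val_loss_fn, epochs=500, lr=3e-4, patience=20):
    """Adam at lr=3e-4 for up to 500 epochs, stopping early once the
    validation loss has not decreased for `patience` epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best, stale = float("inf"), 0
    for epoch in range(epochs):
        for batch in train_loader:
            opt.zero_grad()
            loss = model.loss(batch)   # placeholder: full VRADA objective
            loss.backward()
            opt.step()
        v = val_loss_fn(model)
        if v < best:
            best, stale = v, 0
        else:
            stale += 1
            if stale >= patience:
                break
```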
4.3 QUANTITATIVE RESULTS

In Table 1, we compare the performance of non-domain-adaptation and domain-adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that the VRADA generally outperforms both variants of the DANN, consistently seeing scores roughly 4% higher. While the standard deviation for the VRADA was about 1%, it was about 2% for the R-DANN, further showing our model's efficacy, as it converges to more stable local optima. Our model VRADA beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all the source-target domain adaptation pairs for the Adult-AHRF dataset. For the domain adaptation from Adult-AHRF to Child-AHRF, we observe that VRADA mostly outperforms all the competing models. This shows that our model can perform well even for smaller target domain datasets.

Table 1: AUC comparison for the AHRF mortality prediction task with and without domain adaptation

Source-Target  LR     Adaboost  DNN    DANN   VFAE   R-DANN  VRADA
3-2            0.555  0.562     0.569  0.572  0.615  0.603   0.654
4-2            0.624  0.645     0.569  0.589  0.635  0.584   0.656
5-2            0.527  0.554     0.551  0.540  0.588  0.611   0.616
2-3            0.627  0.621     0.550  0.563  0.585  0.708   0.724
4-3            0.681  0.636     0.542  0.527  0.722  0.821   0.770
5-3            0.655  0.706     0.503  0.518  0.608  0.769   0.782
2-4            0.585  0.591     0.530  0.560  0.582  0.716   0.777
3-4            0.652  0.629     0.531  0.527  0.697  0.769   0.764
5-4            0.689  0.699     0.538  0.532  0.614  0.728   0.738
2-5            0.565  0.543     0.549  0.526  0.555  0.659   0.719
3-5            0.576  0.587     0.510  0.526  0.533  0.630   0.721
4-5            0.682  0.587     0.575  0.548  0.712  0.747   0.775
5-1            0.502  0.573     0.557  0.563  0.618  0.563   0.639
4-1            0.565  0.533     0.572  0.542  0.668  0.577   0.636
3-1            0.500  0.500     0.542  0.535  0.570  0.591   0.631
2-1            0.520  0.500     0.534  0.559  0.578  0.630   0.637

In the above table, we test classification without adaptation using Logistic Regression (LR), Adaboost with decision tree classifiers, and feed-forward Deep Neural Networks (DNN); and with adaptation using Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Domain Adaptation model (VRADA). All results are reported on the target domain test subset.

As the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task, which involved time series of length 12, we focused strictly on domain-adaptive models (i.e. the DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test subset of the target domain for the 20 tasks of the ICD9 code prediction task. Here, we clearly see that the VRADA and R-DANN models outperform DANN (Ganin et al. (2016)) by significant margins. We also observe that VRADA outperforms R-DANN by 1.52% when averaged over all the source-target domain pairs.

Table 2: AUC comparison for the ICD9 diagnosis code prediction task

Model    Split          2-3    2-4    2-5    3-2    3-4    3-5    4-2    4-3    4-5    5-2    5-3    5-4
DANN     entire target  0.513  0.508  0.509  0.511  0.508  0.514  0.511  0.507  0.512  0.505  0.508  0.506
         target test    0.509  0.513  0.531  0.527  0.515  0.531  0.515  0.521  0.521  0.518  0.514  0.519
R-DANN   entire target  0.608  0.581  0.562  0.618  0.610  0.586  0.604  0.607  0.575  0.573  0.558  0.566
         target test    0.605  0.579  0.570  0.628  0.609  0.589  0.614  0.616  0.586  0.573  0.563  0.564
VRADA    entire target  0.620  0.564  0.557  0.611  0.617  0.580  0.598  0.615  0.588  0.571  0.582  0.576
         target test    0.609  0.563  0.560  0.620  0.617  0.580  0.606  0.623  0.594  0.576  0.581  0.576

Here, we compare results for the ICD9 diagnosis code prediction task on the ICD9 dataset. For each model, the top row corresponds to the performance on the entire target domain dataset and the bottom row to the performance on the test subset (15%) of the target domain dataset.

4.4 DISCUSSION

Figure 3 shows the temporal latent dependencies captured by our VRADA as compared to the R-DANN for the 3-4 source-target pair. While both models learn temporal latent dependencies fairly well, the VRADA outperforms the R-DANN in two ways. First, the VRADA's neurons learned stronger predictions of whether features are relevant towards modeling the data. If we look at the VRADA row, for both AHRF and ICD9 we see that the neural activation patterns are more consistent across time-steps than for R-DANN.
Figure 4 shows the unrolled memory cell states (in the form Examples x (Time x Neurons)) for all the source and target domain data points. We see a consistent activation firing pattern across all these data points for VRADA but not for R-DANN. Together with the stronger performance on 3-4 for AHRF and 2-5 for ICD9, this potentially indicates that VRADA is better at learning the temporal dependencies.

Second, nuanced values are consistent across time-steps for the VRADA, exhibiting a gradual transition towards stronger activation with time, whereas the temporal activation pattern of the R-DANN seems somewhat sporadic. While activation gradients across time are consistent for both the R-DANN and VRADA, the more consistent inhibitory and excitatory neuron firing patterns indicate that the VRADA better transfers knowledge. Another indication of domain adaptation was shown in Figure 1c. Looking at the t-SNE projections of the feature representations of DNN, R-DANN, and VRADA, we can see that the addition of temporal latent dependencies might help in better mixing of the domain distributions, since we observe that the data is more evenly spread out. Figure 1c and Figure 3 together indicate that the VRADA's temporal latent dependency capturing power and its ability to create domain-invariant representations act synergistically. For plots of activation patterns without domain adaptation, please see appendix section 6.2.3.

5 SUMMARY

Because of its diverse range of patients and its episodic and longitudinal nature, healthcare data provides a good platform to test domain adaptation techniques for temporal data. With it as our example, we showcase the Variational Recurrent Adversarial Domain Adaptation (VRADA) model's ability to learn temporal latent representations that are domain-invariant. By comparing our model's latent representations to others', we show its ability to use variational methods to capture hidden factors of variation and produce more robust domain-invariant representations. We hope this work serves as a bedrock for future work capturing and adapting temporal latent representations across domains.

ACKNOWLEDGMENTS

This material is based upon work supported by the NSF research grants IIS-1134990 and IIS-1254206, a Samsung GRO Grant, and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. We also acknowledge Thailand's Development and Promotion of Science and Technology Talents Project for financial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset.

Figure 3 [heatmaps omitted; panels: rows R-DANN and VRADA, columns Source and Target, for AHRF 3-4 (left) and ICD9 2-5 (right)]: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to the activation per time-step.
The left shows a single example for adapting 3-4 and the right for adapting 2-5.

Figure 4 [heatmaps omitted; panels: R-DANN (left) and VRADA (right)]: Cell states of the memory cell for R-DANN and VRADA showing the activation for all ICD9 2-5 adaptation examples. Here, we show temporal dependencies learned across (time, feature) pairs for examples in a domain. The y-axis values refer to values per data point, and the x-axis shows activation at (time, feature) pairs, with the time and feature dimensions being flattened.
HkJJhAfNx
Final review.
6: Marginally above acceptance threshold
Update: I thank the authors for their comments! After reading them, I still think the paper is not novel enough, so I'm leaving the rating untouched.

This paper proposes a domain adaptation technique for time series. The core of the approach is a combination of variational recurrent neural networks and adversarial domain adaptation (at the last time step).

Pros:
1. The authors consider a very important application of domain adaptation.
2. The paper is well-written and relatively easy to read.
3. Solid empirical evaluation. The authors compare their method against several recent domain adaptation techniques on a number of datasets.

Cons:
1. The novelty of the approach is relatively low: it's just a straightforward fusion of the existing techniques.
2. The paper lacks any motivation for use of the particular combination (VRNN and RevGrad). I still believe comparable results can be obtained by polishing R-DANN (e.g. carefully penalizing domain discrepancy at every step).

Additional comments:
1. I'm not convinced by the discussion presented in Section 4.4. I don't think the visualization of firing patterns can be used to support the efficiency of the proposed method.
2. Figure 1(c) looks very suspicious. I can hardly believe t-SNE could produce this _very_ regular structure for non-degenerate (non-synthetic, real-world) data.

Overall, it's a solid paper but I'm not sure if it is up to the ICLR standard.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rk9eAFcxg
ICLR.cc/2017/conference
2017
Variational Recurrent Adversarial Deep Domain Adaptation
["Sanjay Purushotham", "Wilka Carvalho", "Tanachat Nilanon", "Yan Liu"]
We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between the domains. Our model termed as Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first to capture and transfer temporal latent dependencies in multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model's ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches.
["Deep learning", "Transfer Learning"]
ABSTRACTWe study the problem of learning domain invariant representations for time seriesdata while transferring the complex temporal latent dependencies between domains.Our model termed as Variational Recurrent Adversarial Deep Domain Adaptation(VRADA) is built atop a variational recurrent neural network (VRNN) and trainsadversarially to capture complex temporal relationships that are domain-invariant.This is (as far as we know) the first to capture and transfer temporal latent de-pendencies of multivariate time-series data. Through experiments on real-worldmultivariate healthcare time-series datasets, we empirically demonstrate that learn-ing temporal dependencies helps our model’s ability to create domain-invariantrepresentations, allowing our model to outperform current state-of-the-art deepdomain adaptation approaches.1 I NTRODUCTIONMany real-world applications require effective machine learning algorithms that can learn invariantrepresentations across related time-series datasets. For example, precision medicine for patients ofvarious age groups, mobile application recommendation for users based on locations, and so on.In these examples, while the domains (i.e. age group and location) may vary, there exist commonpredictive patterns that can aid in inferring knowledge from one domain to another. More often thannot, some domains have a significantly larger number of observations than others (e.g., respiratoryfailure in adults vs. children). Therefore effective domain adaption of time-series data is in greatdemand.The general approach to tackling domain adaptation has been explored under many facets whichinclude reducing the domain discrepancy between the source and target domains(Ben-David et al.(2007)), instance re-weighting (Jiang & Zhai (2007)), subspace alignment (Fernando et al. (2013)),and deep learning (Tzeng et al. (2015); Ganin & Lempitsky (2014)). Many of these approacheswork very well for non-sequential data but are not suitable for multivariate time-series data as theydo not usually capture the temporal dependencies present in the data. For sequential data, earlierwork has successfully used dynamic Bayesian Networks(Huang & Yates (2009)) and RecurrentNeural Networks (Socher et al. (2011)) to learn latent feature representations which were domain-invariant. Unfortunately, these works were not flexible enough to model non-linear dynamics ordid not explicitly capture and transfer the complex latent dependencies needed to perform domainadaptation of time-series data.In this paper, we address this problem with a model that learns temporal latent dependencies (i.e.dependencies between the latent variables across timesteps) that can be transferred across domainsthat experience different distributions in their features. We draw inspiration from the VariationalRecurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latentrepresentation that captures underlying temporal latent dependencies. Motivated by the theory ofdomain adaptation (Ben-David et al. (2010)), we perform adversarial training on this representation*: Co-first authors1Published as a conference paper at ICLR 2017Figure 1: A Story of Temporal Dependency and Domain Invariance(a)DNN (b)R-DANN (c)VRADAt-SNE projections for the latent representations of DNN, R-DANN, and our VRADA model. We show adaptionfrom Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with bluecircles. 
From left to right, one can see that domain adaptation results in mixing the source and target domaindata distributions. We can also see a story of how encoding more temporal dependency into the latentrepresentation induces more domain-invariant representations. As models capture more underlying factors ofvariation, post domain adaptation representations gradually smoothen and become evenly dispersed, indicatingthat temporal dependency acts synergestically with domain adaptation.similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) to make therepresentations invariant across domains. We call our model the Variational Recurrent AdversarialDeep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable ofaccomplishing unsupervised domain adaptation while transferring temporal latent dependenciesfor complex multivariate time-series data. Figure 1 shows an example of the domain invariantrepresentations learned by different deep learning models including our VRADA model. From thisfigure, we can see that our model (VRADA) shows better mixing of the domain distributions than thecompeting models indicating that it learns better domain invariant representations.In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcaretime-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocolin healthcare is to build, evaluate, and deploy machine learning models for particular datasets thatmay perform poorly on unseen datasets with different distributions. For example, models built aroundpatient data from particular age groups perform poorly on other age groups because the features usedto train the models have different distributions across the groups (Alemayehu & Warner (2004); Laoet al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferrableto the other group. Domain adaptation seems like a natural solution to this problem as knowledgeneeds to be transferred across domains which share features that exhibit different distributions. (2)Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic innature. Thus, healthcare data is a suitable platform on which to study a model which seeks to capturecomplex temporal representations and transfer this knowledge across domains.The rest of the paper is structured as follows. In the following section, we briefly discuss thecurrent state-of-the-art deep domain adaptation approaches. Afterwards, we present our modelmathematically, detailing how it simultaneously learns to capture temporal latent dependencies andcreate domain-invariant representations. In Section 4, we compare and contrast the performance ofproposed approach with other approaches on two real-world health care datasets, and provide analysison our domain-invariant representations.2 R ELATED WORKDomain adaptation is a specific instance of transfer learning in which the feature spaces are shared buttheir marginal distributions are different. A good survey on the two has been done in several previousworks (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughlystudied in computer vision(Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) andnatural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, thedeep learning paradigm has become popular in domain adaptation (Chen et al. 
(2012); Tzeng et al.(2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-lineardomain-invariant representations. Here, we briefly discuss two deep domain adaptation approacheswhich are closely related to our proposed model. Domain Adversarial Neural Networks (DANN)2Published as a conference paper at ICLR 2017h1 h2 h3 ht:::::::::x1 x2 x3 xtz1 z2 z3 ztGyGdFigure 2: Block diagram of VRADA. Blue lines show the inference process, qe(ztjxt; z<t). Brown linesshow the generation process, pg(xtjzt; x<t). Red lines show the recurrence process where htis informed byht1, which is informed by zt1andxt1. Black lines indicate classification.(Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to createdomain-invariant representations, a feature extractor that produces the data’s latent representation,and an adversarial domain labeler that attempts to classify that data’s domain to help the featureextractor produce latent representations which are domain-invariant. In Louizos et al. (2015), theauthors propose Variational Fair AutoEncoder, which uses Variational Autoencoding architecture(Kingma & Welling (2013)) to learn latent representations where most of the information aboutcertain known factors of variation are purged from the representation while still retaining as muchinformation about the data as possible. While, these deep learning approaches learn domain-invariantrepresentations, they fail to capture and transfer the underlying complex temporal latent relationshipsfrom one domain to another as they use convolutional or feed forward neural networks which weclaim are not suitable for multivariate time-series data.Other works such as Huang & Yates (2009); Xiao & Guo (2013) have used distributed representationsfor domain adaptation in NLP sequence labeling tasks. However, they either induce hidden statesas latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributedrepresentations of words using Recurrent Neural Networks (RNN) (Socher et al. (2011)) to enabledomain adaptation. These works either model the highly non-linear dynamics, as one can with RNN,or capture the complex latent dependencies present in sequential data, as one can with DBNs, butnot both. To overcome the challenges of DBNs and RNNs, Variational Recurrent Neural Network(VRNN)( Chung et al. (2016)) was proposed recently to capture the complex relationship betweenthe underlying hidden factors of variation and the output variables at different time-steps. The VRNNuses Variational Autoencoders (V AEs)( Kingma & Welling (2013); Goodfellow et al. (2016)) at eachtime-step to learn a complex relationship between the latent hidden factors across time-steps. Likethe V AE, its latent variable is parametric. Combined, these things make it well-suited for multimodalsequential data such as multivariate time-series. In the following section, we discuss our approach,Variational Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model andtransfer complex domain-invariant temporal latent relationships for unsupervised domain adaptationof multivariate time-series.3 V ARIATIONAL RECURRENT ADVERSARIAL DEEPDOMAIN ADAPTATIONIn this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)model for the purpose of capturing and transferring temporal latent dependencies across domainsvia domain-invariant representations. 
First, we introduce the notations used in this paper and thendiscuss our VRADA model in detail.3.1 N OTATIONSLet us denote a multivariate variable-length time series with Ndata samples asfxi= (xit)Tit=1gNi=1,wherexit2RD. (Note: in our experiments, for all data samples Ti=, but for generality wemaintainTi). We denotefxiSgni=1as source domain data and fxiTgNi=n+1as target domain data. Weassume that each source domain data sample xiScomes with Llabelsyi2f0;1gL(for example,these labels may correspond to a clinical outcome such as mortality or ICD9 diagnosis codes), while3Published as a conference paper at ICLR 2017target domain has no labeled data samples. We assign a domain label di2f0;1gto each data sampleto indicate if it comes from the source or target domain. diwill be used for adversarial training.3.2 VRADAThe block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependenciesbetween the latent random variable across time steps, the VRADA model utilizes Variational RecurrentNeural Networks (VRNN) (Chung et al. (2016)). The VRNN effectively contains a Variational Auto-Encoders (Kingma & Welling (2013)) at every time step, all of which are conditioned on previousauto-encoders via the hidden state ht1of an RNN, such as an LSTM (Hochreiter & Schmidhuber(1997)). Therefore, for each time-step of xit, we infer a latent random variable zitviazitjxitN(z;t;diag(z;t));where [z;t;z;t] ='enc('x(xit);ht1)with priorzitN(0;t;diag(0;t));where [0;t;0;t] ='prior(ht1)where;t;;tdenote parameters of a generating distribution, and 'can be any highly flexiblefunction such as deep neural networks. For each zit,xitis generated viaxitjzitN(x;t;diag(x;t));where [x;t;x;t] ='dec('z(zit);ht1)and learned by optimizing the VRNN objective function:Lr(xit;e;g) =Eqe(ziTijxiTi)[TiXt=1(D(qe(zitjxit;zi<t)jjp(zitjxi<t;zi<t))+logpg(xitjzit;xi<t))])whereqe(zitjxit;zi<t)is the inference model, p(zitjxi<t;zi<t)is the prior, pg(xitjzit;xi<t)is thegenerative model, eis the parameters of the VRNN’s encoder, gthe parameters of the VRNN’sdecoder, and D(jj)refers to KL-Divergence. Note: zTrefers to the set of all ztsuch thattT,likewise for z<T. For each xi, we use ~ziqe(ziTijxiTi;zi<Ti)as our feature representation forsource domain classification task since it captures temporal latent dependencies across the time-steps.Training the VRNN for the source domain classification involves solving the following optimization:mine;g;y1nnXi=11TiLr(xi;e;g) +1nnXi=1Ly(xi;y;e) +R(e) (1)whereR(e)is a regularizer for the parameters of VRNN encoder (which is also the feature extractorof VRADA) with a tuning hyperparameter .As we are interested in achieving domain adaptation via the latent representation ~zi(i.e. to make ~zidomain-invariant), we can adversarially train the above objective function (equation 1) by employingthe domain adaptation idea proposed in Ganin et al. (2016). Let Gy(~zi;y)andGd(~zi;d)representthe source label classifier (to predict source labels yi) and domain label classifier (to predict domainlabelsdi) respectively with parameters yanddfor a given input ~zi. Here,Gy(:)andGd(:)can bedeep neural networks. 
Let us denote their loss functions respectively asLy(xi;y;e) =LB(Gy(Ve(xi;e);y);yi);Ld(xi;d;e) =LB(Gd(Ve(xi;e);d);di)whereLBis the classification loss such as a binary or categorical cross-entropy loss function andVe(xi;e)is the VRNN encoder that maps input xito~zi.Now, for adversarial training, we consider the following domain adaptation term as the regularizer ofequation 1.R(e) = maxdh1nnXi=1Ld(xi;d;e)1n0NXi=n+1Ld(xi;d;e)i(2)wheren0is the number of target domain samples. As shown in Ganin et al. (2016), Ris the domainregularizer and it is derived from the empirical Hdivergence between the source domain and targetdomain samples( Ben-David et al. (2010)).4Published as a conference paper at ICLR 2017Combining the joint optimization problem of equations 1 and 2 leads to our VRADA model, wherewe minimize the source classification risk and at the same time achieve domain adaptation. Mathe-matically, we optimize the following complete objective function:E(e;g;y;d) =1NNXi=11TiLr(xi;e;g)+1nnXi=1Ly(xi;y)(1nnXi=1Ld(xi;d)+1n0NXi=n+1Ld(xi;d)))(3)whereis atrade-off between optimizing on making domain-invariant representations and optimiz-ing source classification accuracy. Our optimization involves minimization with respect to someparameters, and maximization with respect to the others, i.e., we iteratively solve the following:(^g;^y;^e) = arg ming;y;eE(e;g;y;^d)^d= arg maxdE(^e;^g;^y;d)with the gradient updates calculated as:e e(@Lr@e+@Ly@y@Ld@d) (4)g g@Lr@g(5)d d@Ld@d(6)y y@Ly@y(7)whereis the learning rate. We can use stochastic gradient descent (SGD) to solve the equations(5-7). To solve equation (4), we can use SGD and the gradient reversal layer (GRL)(Ganin et al.(2016)). The role of GRL is to reverse the gradient sign while performing backpropagation. Thisensures that the domain classification loss is maximized which makes the feature representationsdomain-invariant.Thus, VRADA results in learning feature representations which are domain-invariant (due to domainregressorR) and which capture the temporal latent dependencies (due to optimizing VRNN objectivefunctionLr). These things combine to allow the VRADAs’ discriminative power on the sourcedomain to transfer to the target domain.4 E XPERIMENTSWe conduct experiments on two real-world health care datasets to answer the following questions: (a)How does our VRADA model perform when compared to the state-of-the-art domain adaptation andnon-adaptation approaches? (b) How different are the domain-invariant representations learned byvarious domain adaptation methods? (c) How do we show that the temporal latent dependencies aretransferred between domains? In the remainder of this section, we will describe the datasets, methods,empirical results, and show visualizations to answer the above questions.4.1 D ATASET DESCRIPTIONWe conduct experiments on two health care datasets, including the MIMIC-III dataset and a PediatricICU (PICU) dataset from Children’s Hospital Los Angeles.MIMIC-III ( Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected atBeth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admissionrecords of 38,645 adults and 7,875 neonates. For our experiments, we extracted the following twodatasets:Adult-AHRF dataset : To study domain adaptation for adult patients with acute hypoxemicrespiratory failure (AHRF), we extracted 20 time series features (such as Base excess, bloodpH value, Mean Air Pressure, PaO2, etc.) 
from 5527 admission records based on Khemani5Published as a conference paper at ICLR 2017et al. (2009). We grouped the patients into 4 groups/cohorts based on their age[1]- Group2: working-age adult (20 to 45 yrs, 508 patients); Group 3: old working-age adult (46 to65 yrs, 1888 patients); Group 4: elderly (66 to 85 yrs, 2394 patients); Group 5: old elderly(85 yrs and up, 437 patients). We treated each group as a separate domain with which wecould perform domain adaptation. For each patient, we used the first 4 day after admission(with each day serving as a single time-step) as time series data for training and testing ourmodels.ICD9 dataset : For this dataset we extracted 99 time series features from 19714 admissionrecords from 4 modalities including input-events (fluids into patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., blood pH values,platelet count, etc.) and prescription-events (drugs prescribed by doctors, e.g., aspirin,potassium chloride, etc.). These modalities are known to be extremely useful for monitoringICU patients. All the time series are of more than 48 hours of duration, and only the first 24hours (after admission) 2-hourly sampled time series data is used for training and testing ourmodels. We use this dataset to predict the ICD9 Diagnosis code categories for each patient’sadmission record.Child-AHRF dataset : This is a PICU dataset which contains health records of 398 children patientwith acute hypoxemic respiratory failure in the intensive care unit at Children’s Hospital Los Angeles(CHLA)(Khemani et al. (2009)). Similar to Adult-AHRF, this dataset has 20 time series featurescollected for 4 days after ICU admission. This dataset is considered as one group (Group 1: children,age 0 to 19 yrs) and represents one domain.4.1.1 P REDICTION AND DOMAIN ADAPTATION TASKSMortality Prediction: For Adult-AHRF and Child-AHRF datasets, we are interested in predictingmortality, i.e. whether a patient dies from AHRF during their hospital stay. 20.10% of all the patientsin Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e. thepatients who die in hospital).ICD9 Code Prediction: Each admission record in MIMIC-III dataset has multiple ICD-9 diagnosiscodes. We group all the occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9dataset, we are interested in predicting these 20 ICD-9 Diagnosis Categories for each admissionrecord. We treat this as a multi-task prediction problem.Domain Adaptation Tasks: We study unsupervised domain adaptation (i.e. target domain labels areunavailable during training and validation) task with-in age groups of Adult-AHRF dataset, ICD9dataset and across Adult and Child-AHRF datasets. For Adult-AHRF and ICD9 datasets, we created12 source-target domain pairs using the age groups, pairing up each domain Diwith another domainDj6=i, for example, the source-target pair 2-5 was used for adapting from group 2 (working-age adult)to group 5 (old elderly). We also created 4 source-target pairs for performing domain adaptation from4 adult age-groups to 1 child age-group.4.2 M ETHODS AND IMPLEMENTATION DETAILSWe categorize the methods used in our main experiments into the following groups:Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision regres-sors (Adaboost), and feed forward deep neural networks (DNN)Deep Domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganinet al. 
4.2 METHODS AND IMPLEMENTATION DETAILS
We categorize the methods used in our main experiments into the following groups:
Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision regressors (Adaboost), and feed-forward deep neural networks (DNN).
Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)); DANN with an RNN (LSTM) as the feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015)).
Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3].
[1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/
[2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx. "Conditions Originating in the Perinatal Period" is not present in the preprocessed dataset.
[3]: Code will be publicly released soon.
In all our experiments, we conducted unsupervised domain adaptation, where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN, and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifiers and feature extractors of VRADA and R-DANN were equivalent in depth, and both models had the same capacity. We also ensured that the sizes of the latent feature representations $\tilde{z}^i$ are similar for the VRADA and DANN models. The model capacity of VFAE was chosen to be similar to VRADA. All the deep domain adaptation models, including ours, had a depth of 8 (including the output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of 3e-4. We set an early stopping criterion: training stops if the model does not experience a decrease in validation loss for 20 epochs (see the sketch at the end of this subsection). Source domain data was split into train/validation subsets with a 70/30 ratio, and target domain data into train/validation/test subsets with a 70/15/15 ratio. In order to compare all the methods, we report AUC scores on the entire target domain set, and on the test subset of the target domain data for each source-target pair.
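A minimal sketch of this stopping rule follows, assuming `step_fn` runs one training epoch and `eval_fn` returns the current validation loss; both are hypothetical callables, not functions from the authors' code.

```python
def train_with_early_stopping(step_fn, eval_fn, max_epochs=500, patience=20):
    """Stop when the validation loss has not decreased for `patience` epochs."""
    best_val, stale = float("inf"), 0
    for _ in range(max_epochs):
        step_fn()                 # one training epoch
        val_loss = eval_fn()      # validation loss after the epoch
        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_val
```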
4.3 QUANTITATIVE RESULTS
In Table 1, we compare the performance of non-domain-adaptation and domain adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that VRADA generally outperforms both variants of DANN, consistently scoring about 4% higher. While the standard deviation for VRADA was about 1%, it was about 2% for R-DANN, further showing our model's efficacy, as it converges to more stable local optima. Our VRADA model beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all the source-target domain adaptation tasks for the Adult-AHRF dataset. For domain adaptation from the Adult-AHRF to the Child-AHRF dataset, we observe that VRADA mostly outperforms all the competing models. This shows that our model can perform well even for smaller target domain datasets.
Table 1: AUC comparison for the AHRF mortality prediction task with and without domain adaptation
Source-Target | LR    | Adaboost | DNN   | DANN  | VFAE  | R-DANN | VRADA
3-2           | 0.555 | 0.562    | 0.569 | 0.572 | 0.615 | 0.603  | 0.654
4-2           | 0.624 | 0.645    | 0.569 | 0.589 | 0.635 | 0.584  | 0.656
5-2           | 0.527 | 0.554    | 0.551 | 0.540 | 0.588 | 0.611  | 0.616
2-3           | 0.627 | 0.621    | 0.550 | 0.563 | 0.585 | 0.708  | 0.724
4-3           | 0.681 | 0.636    | 0.542 | 0.527 | 0.722 | 0.821  | 0.770
5-3           | 0.655 | 0.706    | 0.503 | 0.518 | 0.608 | 0.769  | 0.782
2-4           | 0.585 | 0.591    | 0.530 | 0.560 | 0.582 | 0.716  | 0.777
3-4           | 0.652 | 0.629    | 0.531 | 0.527 | 0.697 | 0.769  | 0.764
5-4           | 0.689 | 0.699    | 0.538 | 0.532 | 0.614 | 0.728  | 0.738
2-5           | 0.565 | 0.543    | 0.549 | 0.526 | 0.555 | 0.659  | 0.719
3-5           | 0.576 | 0.587    | 0.510 | 0.526 | 0.533 | 0.630  | 0.721
4-5           | 0.682 | 0.587    | 0.575 | 0.548 | 0.712 | 0.747  | 0.775
5-1           | 0.502 | 0.573    | 0.557 | 0.563 | 0.618 | 0.563  | 0.639
4-1           | 0.565 | 0.533    | 0.572 | 0.542 | 0.668 | 0.577  | 0.636
3-1           | 0.500 | 0.500    | 0.542 | 0.535 | 0.570 | 0.591  | 0.631
2-1           | 0.520 | 0.500    | 0.534 | 0.559 | 0.578 | 0.630  | 0.637
In the above table, we test classification without adaptation using Logistic Regression (LR), Adaboost with decision tree classifiers, and feed-forward Deep Neural Networks (DNN); and with adaptation using Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Deep Domain Adaptation model (VRADA). All results are reported on the target domain test subset.
As the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task, which involves time series with 12 time-steps, we focused strictly on the domain-adaptive models (i.e. DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test subset of the target domain for the 20 tasks of the ICD9 code prediction task. Here, we clearly see that the VRADA and R-DANN models outperform DANN (Ganin et al. (2016)) by significant margins. We also observe that VRADA outperforms R-DANN by 1.52% when averaged over all the source-target domain pairs.
Table 2: AUC comparison for the ICD9 diagnosis code prediction task (entire target / target test)
Model  | 2-3         | 2-4         | 2-5         | 3-2         | 3-4         | 3-5         | 4-2         | 4-3         | 4-5         | 5-2         | 5-3         | 5-4
DANN   | 0.513/0.509 | 0.508/0.513 | 0.509/0.531 | 0.511/0.527 | 0.508/0.515 | 0.514/0.531 | 0.511/0.515 | 0.507/0.521 | 0.512/0.521 | 0.505/0.518 | 0.508/0.514 | 0.506/0.519
R-DANN | 0.608/0.605 | 0.581/0.579 | 0.562/0.570 | 0.618/0.628 | 0.610/0.609 | 0.586/0.589 | 0.604/0.614 | 0.607/0.616 | 0.575/0.586 | 0.573/0.573 | 0.558/0.563 | 0.566/0.564
VRADA  | 0.620/0.609 | 0.564/0.563 | 0.557/0.560 | 0.611/0.620 | 0.617/0.617 | 0.580/0.580 | 0.598/0.606 | 0.615/0.623 | 0.588/0.594 | 0.571/0.576 | 0.582/0.581 | 0.576/0.576
Here, we compare results for the ICD9 diagnosis code prediction task on the ICD9 dataset. For each model, the first value in each cell corresponds to the performance on the entire target domain dataset and the second to the performance on the test subset (15%) of the target domain dataset.
4.4 DISCUSSION
Figure 3 shows the temporal latent dependencies captured by our VRADA as compared to R-DANN for the 3-4 source-target pair. While both models learn temporal latent dependencies fairly well, VRADA outperforms R-DANN in two ways. First, VRADA's neurons learned stronger predictions of whether features are relevant to modeling the data. If we look at the VRADA row, for both AHRF and ICD9 we see that the neural activation patterns are more consistent across time-steps than for R-DANN.
Figure 4 shows the unrolled memory cell states (in the form Examples x (Time x Neurons)) for all the source and target domain data points. We see consistent activation firing patterns across all these data points for VRADA but not for R-DANN. Together with the stronger performance on 3-4 for AHRF and 2-5 for ICD9, this potentially indicates that VRADA better learns the temporal dependencies.
Second, nuanced values are consistent across time-steps for VRADA, exhibiting a gradual transition towards stronger activation with time, whereas the temporal activation pattern of R-DANN seems somewhat sporadic. While activation gradients across time are consistent for both R-DANN and VRADA, the more consistent inhibitory and excitatory neuron firing patterns indicate that VRADA better transfers knowledge. Another indication of domain adaptation was shown in Figure 1c. Looking at the t-SNE projections of the feature representations of DNN, R-DANN, and VRADA, we can see that the addition of temporal latent dependencies might help in better mixing of the domain distributions, since we observe that the data is more evenly spread out. Figure 1c and Figure 3 together indicate that VRADA's power to capture temporal latent dependencies and its ability to create domain-invariant representations act synergistically. For plots of activation patterns without domain adaptation, please see appendix section 6.2.3.
5 SUMMARY
Because of its diverse range of patients and its episodic and longitudinal nature, healthcare data provides a good platform on which to test domain adaptation techniques for temporal data. With it as our example, we showcase the Variational Recurrent Adversarial Domain Adaptation (VRADA) model's ability to learn temporal latent representations that are domain-invariant. By comparing our model's latent representations to others', we show its ability to use variational methods to capture hidden factors of variation and produce more robust domain-invariant representations. We hope this work serves as a bedrock for future work capturing and adapting temporal latent representations across domains.
ACKNOWLEDGMENTS
This material is based upon work supported by NSF research grants IIS-1134990 and IIS-1254206, a Samsung GRO Grant, and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. We also acknowledge Thailand's Development and Promotion of Science and Technology Talents Project for financial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset.
[Figure 3: four heatmap panels of memory cell states for R-DANN (top) and VRADA (bottom), on source (left) and target (right) domains, for AHRF 3-4 and ICD9 2-5.]
Figure 3: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to the activation per time-step.
The left panels show a single example for adapting 3-4 and the right panels for adapting 2-5.
[Figure 4: heatmaps of memory cell state activations over all ICD9 2-5 adaptation examples, for R-DANN (left) and VRADA (right).]
Figure 4: Cell states of the memory cell for R-DANN and VRADA, showing activations for all ICD9 2-5 adaptation examples. Here, we show temporal dependencies learned across (time, feature) pairs for examples in a domain. The y-axis refers to values per data point, and the x-axis shows activation at (time, feature) pairs, with the time and feature dimensions flattened.
SJ3a_AG4g
A combination of variational RNN and domain adversarial networks
6: Marginally above acceptance threshold
This paper combines variational RNN (VRNN) and domain adversarial networks (DANN) for domain adaptation in the sequence modelling setting. The VRNN is used to learn representations for sequential data, namely the hidden state of the last time step. The DANN is used to make the representations domain-invariant, thereby achieving cross-domain adaptation. Experiments are done on a number of datasets, and the proposed method (VRADA) outperforms baselines including DANN, VFAE and R-DANN on almost all of them. I don't have questions about the proposed model; the model is quite clear and seems to be a simple combination of VRNN and DANN. But a few questions came up during the pre-review question phase: - As the authors have mentioned, DANN in general outperforms MMD-based methods; however, the VFAE method, which is based on MMD regularization of the representations, seems to outperform DANN across the board. That seems to indicate VRNN + MMD should also be a good combination. - One baseline the authors showed in the experiments is R-DANN, which is an RNN version of DANN. There are two differences between R-DANN and VRADA: (1) R-DANN uses a deterministic RNN for representation learning, while VRADA uses a variational RNN; (2) on the target domain R-DANN only optimizes the adversarial loss, while VRADA optimizes both the adversarial loss and the reconstruction loss for feature learning. It would be good to analyze further where the performance gain comes from.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rk9eAFcxg
ICLR.cc/2017/conference
2017
Variational Recurrent Adversarial Deep Domain Adaptation
["Sanjay Purushotham", "Wilka Carvalho", "Tanachat Nilanon", "Yan Liu"]
We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between the domains. Our model termed as Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first to capture and transfer temporal latent dependencies in multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model's ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches.
["Deep learning", "Transfer Learning"]
ABSTRACT
We study the problem of learning domain-invariant representations for time series data while transferring the complex temporal latent dependencies between domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first model to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model's ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches.
1 INTRODUCTION
Many real-world applications require effective machine learning algorithms that can learn invariant representations across related time-series datasets; examples include precision medicine for patients of various age groups and mobile application recommendation for users based on location. In these examples, while the domains (i.e. age group and location) may vary, there exist common predictive patterns that can aid in inferring knowledge from one domain to another. More often than not, some domains have a significantly larger number of observations than others (e.g., respiratory failure in adults vs. children). Therefore, effective domain adaptation of time-series data is in great demand.
The general approach to tackling domain adaptation has been explored under many facets, which include reducing the domain discrepancy between the source and target domains (Ben-David et al. (2007)), instance re-weighting (Jiang & Zhai (2007)), subspace alignment (Fernando et al. (2013)), and deep learning (Tzeng et al. (2015); Ganin & Lempitsky (2014)). Many of these approaches work very well for non-sequential data but are not suitable for multivariate time-series data, as they do not usually capture the temporal dependencies present in the data. For sequential data, earlier work has successfully used dynamic Bayesian networks (Huang & Yates (2009)) and Recurrent Neural Networks (Socher et al. (2011)) to learn latent feature representations which were domain-invariant. Unfortunately, these works were either not flexible enough to model non-linear dynamics or did not explicitly capture and transfer the complex latent dependencies needed to perform domain adaptation of time-series data.
*: Co-first authors
Figure 1: A Story of Temporal Dependency and Domain Invariance. (a) DNN, (b) R-DANN, (c) VRADA: t-SNE projections of the latent representations of DNN, R-DANN, and our VRADA model, showing adaptation from Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with blue circles.
From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. We can also see a story of how encoding more temporal dependency into the latent representation induces more domain-invariant representations. As models capture more underlying factors of variation, post-domain-adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation.
In this paper, we address this problem with a model that learns temporal latent dependencies (i.e. dependencies between the latent variables across timesteps) that can be transferred across domains that experience different distributions in their features. We draw inspiration from the Variational Recurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latent representation that captures underlying temporal latent dependencies. Motivated by the theory of domain adaptation (Ben-David et al. (2010)), we perform adversarial training on this representation similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) to make the representations invariant across domains. We call our model the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable of accomplishing unsupervised domain adaptation while transferring temporal latent dependencies for complex multivariate time-series data. Figure 1 shows an example of the domain-invariant representations learned by different deep learning models, including our VRADA model. From this figure, we can see that our model (VRADA) shows better mixing of the domain distributions than the competing models, indicating that it learns better domain-invariant representations.
In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcare time-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocol in healthcare is to build, evaluate, and deploy machine learning models for particular datasets that may perform poorly on unseen datasets with different distributions. For example, models built around patient data from particular age groups perform poorly on other age groups because the features used to train the models have different distributions across the groups (Alemayehu & Warner (2004); Lao et al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferable to the other group. Domain adaptation seems like a natural solution to this problem, as knowledge needs to be transferred across domains which share features that exhibit different distributions. (2) Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic in nature. Thus, healthcare data is a suitable platform on which to study a model which seeks to capture complex temporal representations and transfer this knowledge across domains.
The rest of the paper is structured as follows. In the following section, we briefly discuss the current state-of-the-art deep domain adaptation approaches. Afterwards, we present our model mathematically, detailing how it simultaneously learns to capture temporal latent dependencies and create domain-invariant representations. In Section 4, we compare and contrast the performance of the proposed approach with other approaches on two real-world health care datasets, and provide an analysis of our domain-invariant representations.
2 RELATED WORK
Domain adaptation is a specific instance of transfer learning in which the feature spaces are shared but their marginal distributions are different. Good surveys of the two have been given in several previous works (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughly studied in computer vision (Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) and natural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, the deep learning paradigm has become popular in domain adaptation (Chen et al.
(2012); Tzeng et al. (2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-linear domain-invariant representations. Here, we briefly discuss two deep domain adaptation approaches which are closely related to our proposed model.
[Figure 2: block diagram with inputs $x_1, \dots, x_t$, latent variables $z_1, \dots, z_t$, hidden states $h_1, \dots, h_t$, and classifiers $G_y$ and $G_d$.]
Figure 2: Block diagram of VRADA. Blue lines show the inference process, $q_e(z_t \mid x_{\le t}, z_{<t})$. Brown lines show the generation process, $p_g(x_t \mid z_{\le t}, x_{<t})$. Red lines show the recurrence process, where $h_t$ is informed by $h_{t-1}$, which is informed by $z_{t-1}$ and $x_{t-1}$. Black lines indicate classification.
Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to create domain-invariant representations: a feature extractor that produces the data's latent representation, and an adversarial domain labeler that attempts to classify that data's domain, helping the feature extractor produce latent representations which are domain-invariant. In Louizos et al. (2015), the authors propose the Variational Fair Autoencoder, which uses the Variational Autoencoder architecture (Kingma & Welling (2013)) to learn latent representations where most of the information about certain known factors of variation is purged from the representation while still retaining as much information about the data as possible. While these deep learning approaches learn domain-invariant representations, they fail to capture and transfer the underlying complex temporal latent relationships from one domain to another, as they use convolutional or feed-forward neural networks, which we claim are not suitable for multivariate time-series data.
Other works, such as Huang & Yates (2009); Xiao & Guo (2013), have used distributed representations for domain adaptation in NLP sequence labeling tasks. However, they either induce hidden states as latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributed representations of words using Recurrent Neural Networks (RNN) (Socher et al. (2011)) to enable domain adaptation. These works either model the highly non-linear dynamics, as one can with an RNN, or capture the complex latent dependencies present in sequential data, as one can with DBNs, but not both. To overcome the challenges of DBNs and RNNs, the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)) was proposed recently to capture the complex relationship between the underlying hidden factors of variation and the output variables at different time-steps. The VRNN uses Variational Autoencoders (VAEs) (Kingma & Welling (2013); Goodfellow et al. (2016)) at each time-step to learn a complex relationship between the latent hidden factors across time-steps. Like the VAE, its latent variable is parametric. Combined, these properties make it well suited for multimodal sequential data such as multivariate time series. In the following section, we discuss our approach, Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model and transfer complex domain-invariant temporal latent relationships for unsupervised domain adaptation of multivariate time-series.
3 VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION
In this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model for the purpose of capturing and transferring temporal latent dependencies across domains via domain-invariant representations.
First, we introduce the notation used in this paper, and then discuss our VRADA model in detail.
3.1 NOTATIONS
Let us denote a multivariate variable-length time series with $N$ data samples as $\{x^i = (x_t^i)_{t=1}^{T_i}\}_{i=1}^{N}$, where $x_t^i \in \mathbb{R}^D$. (Note: in our experiments $T_i$ is the same for all data samples, but for generality we maintain $T_i$.) We denote $\{x_S^i\}_{i=1}^{n}$ as source domain data and $\{x_T^i\}_{i=n+1}^{N}$ as target domain data. We assume that each source domain data sample $x_S^i$ comes with $L$ labels $y^i \in \{0,1\}^L$ (for example, these labels may correspond to a clinical outcome such as mortality or ICD9 diagnosis codes), while the target domain has no labeled data samples. We assign a domain label $d^i \in \{0,1\}$ to each data sample to indicate whether it comes from the source or target domain; $d^i$ will be used for adversarial training.
3.2 VRADA
The block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependencies between the latent random variables across time steps, the VRADA model utilizes the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)). The VRNN effectively contains a Variational Auto-Encoder (Kingma & Welling (2013)) at every time step, all of which are conditioned on previous auto-encoders via the hidden state $h_{t-1}$ of an RNN, such as an LSTM (Hochreiter & Schmidhuber (1997)). Therefore, for each time-step $x_t^i$, we infer a latent random variable $z_t^i$ via
$$z_t^i \mid x_t^i \sim \mathcal{N}(\mu_{z,t}, \operatorname{diag}(\sigma_{z,t}^2)), \quad \text{where } [\mu_{z,t}, \sigma_{z,t}] = \varphi^{\text{enc}}(\varphi^{x}(x_t^i), h_{t-1})$$
with prior
$$z_t^i \sim \mathcal{N}(\mu_{0,t}, \operatorname{diag}(\sigma_{0,t}^2)), \quad \text{where } [\mu_{0,t}, \sigma_{0,t}] = \varphi^{\text{prior}}(h_{t-1})$$
where $\mu_{\cdot,t}$ and $\sigma_{\cdot,t}$ denote the parameters of a generating distribution, and $\varphi$ can be any highly flexible function such as a deep neural network. For each $z_t^i$, $x_t^i$ is generated via
$$x_t^i \mid z_t^i \sim \mathcal{N}(\mu_{x,t}, \operatorname{diag}(\sigma_{x,t}^2)), \quad \text{where } [\mu_{x,t}, \sigma_{x,t}] = \varphi^{\text{dec}}(\varphi^{z}(z_t^i), h_{t-1})$$
and learned by optimizing the VRNN objective function:
$$\mathcal{L}_r(x^i; \theta_e, \theta_g) = \mathbb{E}_{q_e(z_{\le T_i}^i \mid x_{\le T_i}^i)}\Big[\sum_{t=1}^{T_i}\Big(-D_{\mathrm{KL}}\big(q_e(z_t^i \mid x_{\le t}^i, z_{<t}^i)\,\big\|\,p(z_t^i \mid x_{<t}^i, z_{<t}^i)\big) + \log p_g(x_t^i \mid z_{\le t}^i, x_{<t}^i)\Big)\Big]$$
where $q_e(z_t^i \mid x_{\le t}^i, z_{<t}^i)$ is the inference model, $p(z_t^i \mid x_{<t}^i, z_{<t}^i)$ is the prior, $p_g(x_t^i \mid z_{\le t}^i, x_{<t}^i)$ is the generative model, $\theta_e$ denotes the parameters of the VRNN's encoder, $\theta_g$ the parameters of the VRNN's decoder, and $D_{\mathrm{KL}}(\cdot \| \cdot)$ refers to the KL divergence. (Note: $z_{\le T}$ refers to the set of all $z_t$ such that $t \le T$; likewise for $z_{<T}$.) For each $x^i$, we use $\tilde{z}^i \sim q_e(z_{T_i}^i \mid x_{\le T_i}^i, z_{<T_i}^i)$ as our feature representation for the source domain classification task, since it captures the temporal latent dependencies across the time-steps. A single time-step of this recurrence is sketched below.
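As a concrete (hedged) illustration of the recurrence above, here is a minimal single-time-step PyTorch sketch of a VRNN cell with Gaussian prior, encoder, and decoder. The layer sizes, the GRU state update, and all names are simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.phi_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.phi_z = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        self.enc = nn.Linear(2 * h_dim, 2 * z_dim)    # -> [mu_z, logvar_z]
        self.prior = nn.Linear(h_dim, 2 * z_dim)      # -> [mu_0, logvar_0]
        self.dec = nn.Linear(2 * h_dim, 2 * x_dim)    # -> [mu_x, logvar_x]
        self.rnn = nn.GRUCell(2 * h_dim, h_dim)       # h_t from (x_t, z_t, h_{t-1})

    def forward(self, x_t, h_prev):
        x_feat = self.phi_x(x_t)
        mu_z, logvar_z = self.enc(torch.cat([x_feat, h_prev], -1)).chunk(2, -1)
        mu_0, logvar_0 = self.prior(h_prev).chunk(2, -1)
        # Reparametrized sample z_t ~ q(z_t | x_{<=t}, z_{<t}).
        z_t = mu_z + torch.randn_like(mu_z) * (0.5 * logvar_z).exp()
        z_feat = self.phi_z(z_t)
        mu_x, logvar_x = self.dec(torch.cat([z_feat, h_prev], -1)).chunk(2, -1)
        h_t = self.rnn(torch.cat([x_feat, z_feat], -1), h_prev)
        # KL between the two diagonal Gaussians q(z_t | ...) and the prior.
        kl = 0.5 * (logvar_0 - logvar_z
                    + (logvar_z.exp() + (mu_z - mu_0) ** 2) / logvar_0.exp()
                    - 1).sum(-1)
        return z_t, (mu_x, logvar_x), h_t, kl
```

Summing the per-step KL terms and Gaussian log-likelihoods of `x_t` under `(mu_x, logvar_x)` over $t = 1 \dots T_i$ would give a Monte Carlo estimate of $\mathcal{L}_r$ above.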
Training the VRNN for the source domain classification task involves solving the following optimization:
$$\min_{\theta_e, \theta_g, \theta_y}\ \frac{1}{n}\sum_{i=1}^{n}\frac{1}{T_i}\mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_y(x^i; \theta_y, \theta_e) + \lambda R(\theta_e) \quad (1)$$
where $R(\theta_e)$ is a regularizer for the parameters of the VRNN encoder (which is also the feature extractor of VRADA) with a tuning hyperparameter $\lambda$.
As we are interested in achieving domain adaptation via the latent representation $\tilde{z}^i$ (i.e. in making $\tilde{z}^i$ domain-invariant), we can adversarially train the above objective function (equation 1) by employing the domain adaptation idea proposed in Ganin et al. (2016). Let $G_y(\tilde{z}^i; \theta_y)$ and $G_d(\tilde{z}^i; \theta_d)$ represent the source label classifier (to predict source labels $y^i$) and the domain label classifier (to predict domain labels $d^i$), respectively, with parameters $\theta_y$ and $\theta_d$ for a given input $\tilde{z}^i$. Here, $G_y(\cdot)$ and $G_d(\cdot)$ can be deep neural networks. Let us denote their loss functions respectively as
$$\mathcal{L}_y(x^i; \theta_y, \theta_e) = \mathcal{L}_B(G_y(V_e(x^i; \theta_e); \theta_y), y^i), \qquad \mathcal{L}_d(x^i; \theta_d, \theta_e) = \mathcal{L}_B(G_d(V_e(x^i; \theta_e); \theta_d), d^i)$$
where $\mathcal{L}_B$ is a classification loss such as the binary or categorical cross-entropy, and $V_e(x^i; \theta_e)$ is the VRNN encoder that maps input $x^i$ to $\tilde{z}^i$.
Now, for adversarial training, we consider the following domain adaptation term as the regularizer of equation 1:
$$R(\theta_e) = \max_{\theta_d}\Big[-\frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_d(x^i; \theta_d, \theta_e) - \frac{1}{n'}\sum_{i=n+1}^{N} \mathcal{L}_d(x^i; \theta_d, \theta_e)\Big] \quad (2)$$
where $n'$ is the number of target domain samples. As shown in Ganin et al. (2016), $R$ is the domain regularizer, and it is derived from the empirical $\mathcal{H}$-divergence between the source domain and target domain samples (Ben-David et al. (2010)).
Combining the joint optimization problem of equations 1 and 2 leads to our VRADA model, where we minimize the source classification risk and at the same time achieve domain adaptation. Mathematically, we optimize the following complete objective function:
$$E(\theta_e, \theta_g, \theta_y, \theta_d) = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{T_i}\mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_y(x^i; \theta_y) - \lambda\Big(\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_d(x^i; \theta_d) + \frac{1}{n'}\sum_{i=n+1}^{N}\mathcal{L}_d(x^i; \theta_d)\Big) \quad (3)$$
where $\lambda$ is a trade-off between optimizing for domain-invariant representations and optimizing for source classification accuracy. Our optimization involves minimization with respect to some parameters and maximization with respect to the others, i.e., we iteratively solve the following:
$$(\hat{\theta}_g, \hat{\theta}_y, \hat{\theta}_e) = \arg\min_{\theta_g, \theta_y, \theta_e} E(\theta_e, \theta_g, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_e, \hat{\theta}_g, \hat{\theta}_y, \theta_d)$$
with the gradient updates calculated as:
$$\theta_e \leftarrow \theta_e - \eta\Big(\frac{\partial \mathcal{L}_r}{\partial \theta_e} + \frac{\partial \mathcal{L}_y}{\partial \theta_e} - \lambda\frac{\partial \mathcal{L}_d}{\partial \theta_e}\Big) \quad (4)$$
$$\theta_g \leftarrow \theta_g - \eta\,\frac{\partial \mathcal{L}_r}{\partial \theta_g} \quad (5)$$
$$\theta_d \leftarrow \theta_d - \eta\lambda\,\frac{\partial \mathcal{L}_d}{\partial \theta_d} \quad (6)$$
$$\theta_y \leftarrow \theta_y - \eta\,\frac{\partial \mathcal{L}_y}{\partial \theta_y} \quad (7)$$
where $\eta$ is the learning rate. We can use stochastic gradient descent (SGD) to solve equations (5-7). To solve equation (4), we can use SGD together with the gradient reversal layer (GRL) (Ganin et al. (2016)). The role of the GRL is to reverse the sign of the domain-classification gradient during backpropagation. This ensures that the domain classification loss is maximized with respect to the encoder, which makes the feature representations domain-invariant.
Thus, VRADA learns feature representations that are domain-invariant (due to the domain regressor $R$) and that capture the temporal latent dependencies (due to optimizing the VRNN objective function $\mathcal{L}_r$). Together, these allow the discriminative power VRADA acquires on the source domain to transfer to the target domain.
4 EXPERIMENTS
We conduct experiments on two real-world health care datasets to answer the following questions: (a) How does our VRADA model perform when compared to state-of-the-art domain adaptation and non-adaptation approaches? (b) How different are the domain-invariant representations learned by the various domain adaptation methods? (c) How do we show that the temporal latent dependencies are transferred between domains? In the remainder of this section, we describe the datasets, methods, and empirical results, and show visualizations to answer the above questions.
4.1 DATASET DESCRIPTION
We conduct experiments on two health care datasets: the MIMIC-III dataset and a Pediatric ICU (PICU) dataset from Children's Hospital Los Angeles.
MIMIC-III (Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admission records of 38,645 adults and 7,875 neonates. For our experiments, we extracted the following two datasets:
Adult-AHRF dataset: To study domain adaptation for adult patients with acute hypoxemic respiratory failure (AHRF), we extracted 20 time series features (such as base excess, blood pH value, mean air pressure, PaO2, etc.)
from 5527 admission records, based on Khemani et al. (2009). We grouped the patients into 4 groups/cohorts based on their age[1]: Group 2, working-age adult (20 to 45 yrs, 508 patients); Group 3, old working-age adult (46 to 65 yrs, 1888 patients); Group 4, elderly (66 to 85 yrs, 2394 patients); Group 5, old elderly (85 yrs and up, 437 patients). We treated each group as a separate domain with which we could perform domain adaptation. For each patient, we used the first 4 days after admission (with each day serving as a single time-step) as time series data for training and testing our models.
ICD9 dataset: For this dataset we extracted 99 time series features from 19714 admission records, covering 4 modalities: input-events (fluids into the patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., blood pH values, platelet count, etc.), and prescription-events (drugs prescribed by doctors, e.g., aspirin, potassium chloride, etc.). These modalities are known to be extremely useful for monitoring ICU patients. All the time series are of more than 48 hours' duration, and only the first 24 hours (after admission) of 2-hourly sampled time series data are used for training and testing our models. We use this dataset to predict the ICD9 diagnosis code categories for each patient's admission record.
Child-AHRF dataset: This is a PICU dataset which contains health records of 398 child patients with acute hypoxemic respiratory failure in the intensive care unit at Children's Hospital Los Angeles (CHLA) (Khemani et al. (2009)). Similar to Adult-AHRF, this dataset has 20 time series features collected for 4 days after ICU admission. This dataset is considered as one group (Group 1: children, age 0 to 19 yrs) and represents one domain.
4.1.1 PREDICTION AND DOMAIN ADAPTATION TASKS
Mortality Prediction: For the Adult-AHRF and Child-AHRF datasets, we are interested in predicting mortality, i.e. whether a patient dies from AHRF during their hospital stay. 20.10% of all patients in Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e. the patients who die in hospital).
ICD9 Code Prediction: Each admission record in the MIMIC-III dataset has multiple ICD-9 diagnosis codes. We group all occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9 dataset, we are interested in predicting these 20 ICD-9 diagnosis categories for each admission record. We treat this as a multi-task prediction problem.
Domain Adaptation Tasks: We study the unsupervised domain adaptation task (i.e. target domain labels are unavailable during training and validation) within the age groups of the Adult-AHRF dataset and the ICD9 dataset, and across the Adult- and Child-AHRF datasets. For the Adult-AHRF and ICD9 datasets, we created 12 source-target domain pairs using the age groups, pairing up each domain $D_i$ with every other domain $D_{j \neq i}$; for example, the source-target pair 2-5 was used for adapting from group 2 (working-age adult) to group 5 (old elderly). We also created 4 source-target pairs for performing domain adaptation from the 4 adult age groups to the 1 child age group.
4.2 METHODS AND IMPLEMENTATION DETAILS
We categorize the methods used in our main experiments into the following groups:
Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision regressors (Adaboost), and feed-forward deep neural networks (DNN).
Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al.
(2016)); DANN with an RNN (LSTM) as the feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015)).
Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3].
[1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/
[2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx. "Conditions Originating in the Perinatal Period" is not present in the preprocessed dataset.
[3]: Code will be publicly released soon.
In all our experiments, we conducted unsupervised domain adaptation, where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN, and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifiers and feature extractors of VRADA and R-DANN were equivalent in depth, and both models had the same capacity. We also ensured that the sizes of the latent feature representations $\tilde{z}^i$ are similar for the VRADA and DANN models. The model capacity of VFAE was chosen to be similar to VRADA. All the deep domain adaptation models, including ours, had a depth of 8 (including the output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of 3e-4. We set an early stopping criterion: training stops if the model does not experience a decrease in validation loss for 20 epochs. Source domain data was split into train/validation subsets with a 70/30 ratio, and target domain data into train/validation/test subsets with a 70/15/15 ratio. In order to compare all the methods, we report AUC scores on the entire target domain set, and on the test subset of the target domain data for each source-target pair.
4.3 QUANTITATIVE RESULTS
In Table 1, we compare the performance of non-domain-adaptation and domain adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that VRADA generally outperforms both variants of DANN, consistently scoring about 4% higher. While the standard deviation for VRADA was about 1%, it was about 2% for R-DANN, further showing our model's efficacy, as it converges to more stable local optima. Our VRADA model beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all the source-target domain adaptation tasks for the Adult-AHRF dataset. For domain adaptation from the Adult-AHRF to the Child-AHRF dataset, we observe that VRADA mostly outperforms all the competing models.
This shows that our model can perform well even for smaller target domain datasets.
Table 1: AUC comparison for the AHRF mortality prediction task with and without domain adaptation
Source-Target | LR    | Adaboost | DNN   | DANN  | VFAE  | R-DANN | VRADA
3-2           | 0.555 | 0.562    | 0.569 | 0.572 | 0.615 | 0.603  | 0.654
4-2           | 0.624 | 0.645    | 0.569 | 0.589 | 0.635 | 0.584  | 0.656
5-2           | 0.527 | 0.554    | 0.551 | 0.540 | 0.588 | 0.611  | 0.616
2-3           | 0.627 | 0.621    | 0.550 | 0.563 | 0.585 | 0.708  | 0.724
4-3           | 0.681 | 0.636    | 0.542 | 0.527 | 0.722 | 0.821  | 0.770
5-3           | 0.655 | 0.706    | 0.503 | 0.518 | 0.608 | 0.769  | 0.782
2-4           | 0.585 | 0.591    | 0.530 | 0.560 | 0.582 | 0.716  | 0.777
3-4           | 0.652 | 0.629    | 0.531 | 0.527 | 0.697 | 0.769  | 0.764
5-4           | 0.689 | 0.699    | 0.538 | 0.532 | 0.614 | 0.728  | 0.738
2-5           | 0.565 | 0.543    | 0.549 | 0.526 | 0.555 | 0.659  | 0.719
3-5           | 0.576 | 0.587    | 0.510 | 0.526 | 0.533 | 0.630  | 0.721
4-5           | 0.682 | 0.587    | 0.575 | 0.548 | 0.712 | 0.747  | 0.775
5-1           | 0.502 | 0.573    | 0.557 | 0.563 | 0.618 | 0.563  | 0.639
4-1           | 0.565 | 0.533    | 0.572 | 0.542 | 0.668 | 0.577  | 0.636
3-1           | 0.500 | 0.500    | 0.542 | 0.535 | 0.570 | 0.591  | 0.631
2-1           | 0.520 | 0.500    | 0.534 | 0.559 | 0.578 | 0.630  | 0.637
In the above table, we test classification without adaptation using Logistic Regression (LR), Adaboost with decision tree classifiers, and feed-forward Deep Neural Networks (DNN); and with adaptation using Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Deep Domain Adaptation model (VRADA). All results are reported on the target domain test subset.
As the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task, which involves time series with 12 time-steps, we focused strictly on the domain-adaptive models (i.e. DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test subset of the target domain for the 20 tasks of the ICD9 code prediction task. Here, we clearly see that the VRADA and R-DANN models outperform DANN (Ganin et al. (2016)) by significant margins. We also observe that VRADA outperforms R-DANN by 1.52% when averaged over all the source-target domain pairs.
Table 2: AUC comparison for the ICD9 diagnosis code prediction task (entire target / target test)
Model  | 2-3         | 2-4         | 2-5         | 3-2         | 3-4         | 3-5         | 4-2         | 4-3         | 4-5         | 5-2         | 5-3         | 5-4
DANN   | 0.513/0.509 | 0.508/0.513 | 0.509/0.531 | 0.511/0.527 | 0.508/0.515 | 0.514/0.531 | 0.511/0.515 | 0.507/0.521 | 0.512/0.521 | 0.505/0.518 | 0.508/0.514 | 0.506/0.519
R-DANN | 0.608/0.605 | 0.581/0.579 | 0.562/0.570 | 0.618/0.628 | 0.610/0.609 | 0.586/0.589 | 0.604/0.614 | 0.607/0.616 | 0.575/0.586 | 0.573/0.573 | 0.558/0.563 | 0.566/0.564
VRADA  | 0.620/0.609 | 0.564/0.563 | 0.557/0.560 | 0.611/0.620 | 0.617/0.617 | 0.580/0.580 | 0.598/0.606 | 0.615/0.623 | 0.588/0.594 | 0.571/0.576 | 0.582/0.581 | 0.576/0.576
Here, we compare results for the ICD9 diagnosis code prediction task on the ICD9 dataset. For each model, the first value in each cell corresponds to the performance on the entire target domain dataset and the second to the performance on the test subset (15%) of the target domain dataset.
4.4 DISCUSSION
Figure 3 shows the temporal latent dependencies captured by our VRADA as compared to R-DANN for the 3-4 source-target pair. While both models learn temporal latent dependencies fairly well, VRADA outperforms R-DANN in two ways. First, VRADA's neurons learned stronger predictions of whether features are relevant to modeling the data. If we look at the VRADA row, for both AHRF and ICD9 we see that the neural activation patterns are more consistent across time-steps than for R-DANN.
Figure 4 shows the unrolled memory cell states (in the form Examples x (Time x Neurons)) for all the source and target domain data points. We see consistent activation firing patterns across all these data points for VRADA but not for R-DANN. Together with the stronger performance on 3-4 for AHRF and 2-5 for ICD9, this potentially indicates that VRADA better learns the temporal dependencies.
Second, nuanced values are consistent across time-steps for VRADA, exhibiting a gradual transition towards stronger activation with time, whereas the temporal activation pattern of R-DANN seems somewhat sporadic. While activation gradients across time are consistent for both R-DANN and VRADA, the more consistent inhibitory and excitatory neuron firing patterns indicate that VRADA better transfers knowledge. Another indication of domain adaptation was shown in Figure 1c. Looking at the t-SNE projections of the feature representations of DNN, R-DANN, and VRADA, we can see that the addition of temporal latent dependencies might help in better mixing of the domain distributions, since we observe that the data is more evenly spread out. Figure 1c and Figure 3 together indicate that VRADA's power to capture temporal latent dependencies and its ability to create domain-invariant representations act synergistically. For plots of activation patterns without domain adaptation, please see appendix section 6.2.3.
5 SUMMARY
Because of its diverse range of patients and its episodic and longitudinal nature, healthcare data provides a good platform on which to test domain adaptation techniques for temporal data. With it as our example, we showcase the Variational Recurrent Adversarial Domain Adaptation (VRADA) model's ability to learn temporal latent representations that are domain-invariant. By comparing our model's latent representations to others', we show its ability to use variational methods to capture hidden factors of variation and produce more robust domain-invariant representations. We hope this work serves as a bedrock for future work capturing and adapting temporal latent representations across domains.
ACKNOWLEDGMENTS
This material is based upon work supported by NSF research grants IIS-1134990 and IIS-1254206, a Samsung GRO Grant, and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. We also acknowledge Thailand's Development and Promotion of Science and Technology Talents Project for financial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset.
[Figure 3: four heatmap panels of memory cell states for R-DANN (top) and VRADA (bottom), on source (left) and target (right) domains, for AHRF 3-4 and ICD9 2-5.]
Figure 3: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to the activation per time-step.
The left panels show a single example for adapting 3-4 and the right panels for adapting 2-5.
[Figure 4: heatmaps of memory cell state activations over all ICD9 2-5 adaptation examples, for R-DANN (left) and VRADA (right).]
Figure 4: Cell states of the memory cell for R-DANN and VRADA, showing activations for all ICD9 2-5 adaptation examples. Here, we show temporal dependencies learned across (time, feature) pairs for examples in a domain. The y-axis refers to values per data point, and the x-axis shows activation at (time, feature) pairs, with the time and feature dimensions flattened.
SJ3UnkIVe
Interesting applications but with unconvincing results
5: Marginally below acceptance threshold
The work combines variational recurrent neural networks and adversarial neural networks to handle domain adaptation for time series data. The proposed method, along with several competing algorithms, is compared on two healthcare datasets constructed from MIMIC-III in domain adaptation settings. The new contribution of the work is relatively small. It extends VRNN with adversarial training for learning domain-agnostic representations. From the experimental results, the proposed method clearly outperforms competing algorithms. However, it is not clear where the advantage is coming from. The only difference between the proposed method and R-DANN is using a variational RNN vs. an RNN. Little insight was provided on how this could bring such a big difference in performance, or on the drastic difference in the temporal dependencies captured by these two methods in Figure 4. Detailed comments: 1. Please provide more details on what is plotted in Figure 1. Is 1(b) the t-SNE projection of representations learned by DANN or R-DANN? The text in section 4.4 suggests it's the latter. It is surprising to see such a regular plot for VRADA. What do you think are the two dominant latent factors encoded in figure 1(c)? 2. In Table 2, the two baselines show quite significant differences in performance when testing on the entire target set (including the validation set) vs. on the test set only. VRADA, on the other hand, performs almost identically in these two settings. Could you please offer some explanation for this? 3. Please explain figures 3 and 4 in more detail: how should one interpret the x-axis of figure 3, and the x and y axes of figure 4? Again, the right two plots in figure 4 are extremely regular compared to the ones on the left.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJ3rcZcxl
ICLR.cc/2017/conference
2017
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
["Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E. Turner", "Sergey Levine"]
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.
["Deep learning", "Reinforcement Learning"]
ABSTRACT
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.
1 INTRODUCTION
Model-free reinforcement learning is a promising approach for solving arbitrary goal-directed sequential decision-making problems with only high-level reward signals and no supervision. It has recently been extended to utilize large neural network policies and value functions, and has been shown to be successful in solving a range of difficult problems (Mnih et al., 2015; Schulman et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Gu et al., 2016b; Mnih et al., 2016). Deep neural network parametrization minimizes the need for manual feature and policy engineering, and allows learning end-to-end policies mapping from high-dimensional inputs, such as images, directly to actions. However, such expressive parametrization also introduces a number of practical problems. Deep reinforcement learning algorithms tend to be sensitive to hyperparameter settings, often requiring extensive hyperparameter sweeps to find good values. Poor hyperparameter settings tend to produce unstable or non-convergent learning. Deep RL algorithms also tend to exhibit high sample complexity, often to the point of being impractical to run on real physical systems. Although a number of recent techniques have sought to alleviate some of these issues (Hasselt, 2010; Mnih et al., 2015; Schulman et al., 2015; 2016), these recent advances still provide only a partial solution to the instability and sample complexity challenges.
Model-free reinforcement learning consists of on- and off-policy methods. Monte Carlo policy gradient methods (Peters & Schaal, 2006; Schulman et al., 2015) are popular on-policy methods that directly maximize the cumulative future returns with respect to the policy. While these algorithms can offer unbiased (or nearly unbiased, as discussed in Section 2.1) estimates of the gradient, they rely on Monte Carlo estimation and often suffer from high variance.
To cope with high-variance gradient estimates and difficult optimization landscapes, a number of techniques have been proposed, including constraining the change in the policy at each gradient step (Kakade, 2001; Peters et al., 2010) and mixing value-based back-ups to trade off bias and variance in Monte Carlo return estimates (Schulman et al., 2015). However, these methods all tend to require very large numbers of samples to deal with the high variance when estimating gradients of high-dimensional neural network policies. The crux of the problem with policy gradient methods is that they can only effectively use on-policy samples, which means that they require collecting large amounts of on-policy experience after each parameter update to the policy. This makes them very sample intensive. Off-policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015; Gu et al., 2016b) and off-policy actor-critic methods (Lever, 2014; Lillicrap et al., 2016), can instead use all samples, including off-policy samples, by adopting temporal difference learning with experience replay. Such methods are much more sample-efficient. However, convergence of these algorithms is in general not guaranteed with non-linear function approximators, and practical convergence and instability issues typically mean that extensive hyperparameter tuning is required to attain good results.
In order to make deep reinforcement learning practical as a tool for tackling real-world tasks, we must develop methods that are both data-efficient and stable. In this paper, we propose Q-Prop, a step in this direction that combines the advantages of on-policy policy gradient methods with the efficiency of off-policy learning. Unlike prior approaches to off-policy learning, which either introduce bias (Sutton et al., 1999; Silver et al., 2014) or increase variance (Precup, 2000; Levine & Koltun, 2013; Munos et al., 2016), Q-Prop can reduce the variance of the gradient estimator without adding bias; unlike prior approaches to critic-based variance reduction (Schulman et al., 2016), which fit the value function on-policy, Q-Prop learns the action-value function off-policy. The core idea is to use the first-order Taylor expansion of the critic as a control variate, resulting in an analytical gradient term through the critic and a Monte Carlo policy gradient term consisting of the residuals in advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in policy gradient, or as using on-policy Monte Carlo returns to correct for bias in the critic gradient. We further provide a theoretical analysis of the control variate, and derive two additional variants of Q-Prop. The method can be easily incorporated into any policy gradient algorithm. We show that Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE) (Schulman et al., 2015; 2016), and improved stability over deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) across a repertoire of continuous control tasks.
2 BACKGROUND
Reinforcement learning (RL) aims to learn a policy for an agent such that it behaves optimally according to a reward function.
At a time step $t$ and state $s_t$, the agent chooses an action $a_t$ according to its policy $\pi(a_t \mid s_t)$, the state of the agent and the environment changes to a new state $s_{t+1}$ according to dynamics $p(s_{t+1} \mid s_t, a_t)$, the agent receives a reward $r(s_t, a_t)$, and the process continues. Let $R_t$ denote the $\gamma$-discounted cumulative return from $t$ for an infinite-horizon problem, i.e. $R_t = \sum_{t'=t}^{\infty} \gamma^{t'-t} r(s_{t'}, a_{t'})$. The goal of reinforcement learning is to maximize the expected return $J(\theta) = \mathbb{E}_{\pi_\theta}[R_0]$ with respect to the policy parameters $\theta$. In this section, we review several standard techniques for performing this optimization, and in the next section, we will discuss our proposed Q-Prop algorithm, which combines the strengths of these approaches to achieve efficient, stable RL. Monte Carlo policy gradient refers to policy gradient methods that use full Monte Carlo returns, e.g. REINFORCE (Williams, 1992) and TRPO (Schulman et al., 2015), and policy gradient with function approximation refers to actor-critic methods (Sutton et al., 1999) which optimize the policy against a critic, e.g. deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016).
2.1 MONTE CARLO POLICY GRADIENT METHODS
Monte Carlo policy gradient methods apply direct gradient-based optimization to the reinforcement learning objective. This involves directly differentiating the $J(\theta)$ objective with respect to the policy parameters $\theta$. The standard form, known as the REINFORCE algorithm (Williams, 1992), is shown below:
$$\nabla_\theta J(\theta) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\gamma^t R_t\Big] = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\,(R_t - b(s_t))\Big] \quad (1)$$
where $b(s_t)$ is known as the baseline. For convenience of later derivations, Eq. 1 can also be written as below, where $\rho_\pi(s) = \sum_{t=0}^{\infty} \gamma^t p(s_t = s)$ is the unnormalized discounted state visitation frequency:
$$\nabla_\theta J(\theta) = \mathbb{E}_{s_t \sim \rho_\pi(\cdot),\, a_t \sim \pi(\cdot \mid s_t)}[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,(R_t - b(s_t))] \quad (2)$$
Eq. 2 is an unbiased gradient of the RL objective. However, in practice, most policy gradient methods effectively use undiscounted state visitation frequencies, i.e. $\gamma = 1$ in the equation for $\rho_\pi$, and are therefore biased; in fact, making them unbiased often hurts performance (Thomas, 2014). In this paper, we mainly discuss bias due to function approximation, off-policy learning, and value back-ups.
The gradient is estimated using Monte Carlo samples in practice and has very high variance. A proper choice of baseline is necessary to reduce the variance sufficiently such that learning becomes feasible. A common choice is to estimate the value function of the state, $V^\pi(s_t)$, to use as the baseline, which provides an estimate of the advantage function $A^\pi(s_t, a_t)$, a centered action-value function $Q^\pi(s_t, a_t)$, as defined below:
$$V^\pi(s_t) = \mathbb{E}_\pi[R_t] = \mathbb{E}_{\pi_\theta(a_t \mid s_t)}[Q^\pi(s_t, a_t)]$$
$$Q^\pi(s_t, a_t) = r(s_t, a_t) + \gamma\,\mathbb{E}_\pi[R_{t+1}] = r(s_t, a_t) + \gamma\,\mathbb{E}_{p(s_{t+1} \mid s_t, a_t)}[V^\pi(s_{t+1})]$$
$$A^\pi(s_t, a_t) = Q^\pi(s_t, a_t) - V^\pi(s_t) \quad (3)$$
$Q^\pi(s_t, a_t)$ summarizes the performance of each action from a given state, assuming the agent follows $\pi$ thereafter, and $A^\pi(s_t, a_t)$ provides a measure of how each action compares to the average performance at state $s_t$, which is given by $V^\pi(s_t)$. Using $A^\pi(s_t, a_t)$ centers the learning signal and reduces variance significantly. The sketch below illustrates these quantities.
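As a small illustration of Eqs. 1-3, here is a minimal NumPy sketch of discounted Monte Carlo returns computed by a backward scan, with a baseline subtracted to form the centered learning signal. The `values` array stands in for a learned estimate of $V^\pi(s_t)$; all names and the example numbers are illustrative.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{t' >= t} gamma^(t'-t) * r_{t'}, computed right to left."""
    returns = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.3, 0.5, 0.8])          # baseline estimates b(s_t)
advantages = discounted_returns(rewards) - values
# The REINFORCE estimator then weights grad log pi(a_t | s_t) by `advantages`.
```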
Prior attempts use importance sampling to include off-policy trajectories; however, these are known to be difficult to scale to high-dimensional action spaces because of rapidly degenerating importance weights (Precup, 2000).

2.2 POLICY GRADIENT WITH FUNCTION APPROXIMATION

Policy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods, include a policy evaluation step, which often uses temporal difference (TD) learning to fit a critic $Q_w$ for the current policy $\pi_\theta$, and a policy improvement step, which greedily optimizes the policy $\pi$ against the critic estimate $Q_w$. Significant gains in sample efficiency may be achievable using off-policy TD learning for the critic, as in Q-learning and deterministic policy gradient (Sutton, 1990; Silver et al., 2014), typically by means of experience replay for training deep Q networks (Mnih et al., 2015; Lillicrap et al., 2016; Gu et al., 2016b).

One particularly relevant example of such a method is the deep deterministic policy gradient (DDPG) (Silver et al., 2014; Lillicrap et al., 2016). The updates for this method are given below, where $\pi_\theta(a_t|s_t) = \delta(a_t = \mu_\theta(s_t))$ is a deterministic policy, $\beta$ is an arbitrary exploration distribution, and $\rho_\beta$ corresponds to sampling from a replay buffer. $Q'(\cdot,\cdot)$ is the target network that slowly tracks $Q_w$ (Lillicrap et al., 2016).

$$w = \arg\min_w \mathbb{E}_{s_t \sim \rho_\beta(\cdot),\, a_t \sim \beta(\cdot|s_t)}\big[(r(s_t, a_t) + \gamma Q'(s_{t+1}, \mu_\theta(s_{t+1})) - Q_w(s_t, a_t))^2\big]$$
$$\theta = \arg\max_\theta \mathbb{E}_{s_t \sim \rho_\beta(\cdot)}[Q_w(s_t, \mu_\theta(s_t))] \quad (4)$$

When the critic and policy are parametrized with neural networks, full optimization is expensive, and instead stochastic gradient optimization is used. The gradient in the policy improvement phase is given below; it is in general a biased gradient of $J(\theta)$:

$$\nabla_\theta J(\theta) \approx \mathbb{E}_{s_t \sim \rho_\beta(\cdot)}[\nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)} \nabla_\theta \mu_\theta(s_t)] \quad (5)$$

The crucial benefits of DDPG are that it does not rely on high-variance REINFORCE gradients and is trainable on off-policy data. These properties make DDPG and other analogous off-policy methods significantly more sample-efficient than policy gradient methods (Lillicrap et al., 2016; Gu et al., 2016b; Duan et al., 2016). However, the use of a biased policy gradient estimator makes analyzing its convergence and stability properties difficult.

3 Q-PROP

In this section, we derive the Q-Prop estimator for the policy gradient. The key idea behind this estimator comes from observing Equations 2 and 5 and noting that the former provides an almost unbiased (see Section 2.1), but high-variance gradient, while the latter provides a deterministic, but biased gradient. By using the deterministic biased estimator as a particular form of control variate (Ross, 2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use both types of gradient information to construct a new estimator that in practice exhibits improved sample efficiency through the inclusion of off-policy samples, while preserving the stability of on-policy Monte Carlo policy gradient.

3.1 Q-PROP ESTIMATOR

To derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of an arbitrary function $f(s_t, a_t)$, $\bar{f}(s_t, a_t) = f(s_t, \bar{a}_t) + \nabla_a f(s_t, a)|_{a=\bar{a}_t}(a_t - \bar{a}_t)$, as the control variate for the policy gradient estimator. We use $\hat{Q}(s_t, a_t) = \sum_{t'=t}^{\infty} \gamma^{t'-t} r(s_{t'}, a_{t'})$ to denote the Monte Carlo return from state $s_t$ and action $a_t$, i.e. $\mathbb{E}_\pi[\hat{Q}(s_t, a_t)] = r(s_t, a_t) + \gamma \mathbb{E}_\pi[V_\pi(s_{t+1})]$, and $\mu_\theta(s_t) = \mathbb{E}_{\pi_\theta(a_t|s_t)}[a_t]$ to denote the expected action of a stochastic policy $\pi_\theta$.
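This control variate is cheap to compute once $\nabla_a Q_w$ is available. The sketch below uses a hypothetical quadratic critic with an analytic action gradient as a stand-in for a learned $Q_w$; centering the expansion at the policy mean $\mu(s)$ makes the expectation of the centered term under $\pi_\theta$ vanish, which the derivation that follows exploits.

```python
import numpy as np

def taylor_cv(q, grad_q_a, s, a, a_bar):
    # First-order Taylor expansion of q around a_bar:
    # f_bar(s, a) = q(s, a_bar) + grad_a q(s, a)|_{a = a_bar} . (a - a_bar).
    return q(s, a_bar) + grad_q_a(s, a_bar) @ (a - a_bar)

def centered_cv(grad_q_a, s, a, mu_s):
    # Centered control variate: A_bar(s, a) = grad_a q(s, a)|_{mu(s)} . (a - mu(s)),
    # which has zero mean under any policy whose expected action is mu(s).
    return grad_q_a(s, mu_s) @ (a - mu_s)

# Hypothetical quadratic critic Q_w(s, a) = -|a - W s|^2 with analytic gradient.
W = np.array([[0.5, -0.2], [0.1, 0.3]])
q = lambda s, a: -np.sum((a - W @ s) ** 2)
grad_q_a = lambda s, a: -2.0 * (a - W @ s)

s = np.array([1.0, -1.0])
mu_s = np.array([0.2, 0.4])            # stands in for mu_theta(s) = E[a|s]
a = mu_s + np.array([0.1, -0.05])      # a sampled action
print(taylor_cv(q, grad_q_a, s, a, mu_s), centered_cv(grad_q_a, s, a, mu_s))
```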
The full derivation is in Appendix A.

$$\nabla_\theta J(\theta) = \mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t, a_t) - \bar{f}(s_t, a_t))] + \mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)\,\bar{f}(s_t, a_t)]$$
$$= \mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t, a_t) - \bar{f}(s_t, a_t))] + \mathbb{E}_{\rho_\pi}[\nabla_a f(s_t, a)|_{a=\bar{a}_t} \nabla_\theta \mu_\theta(s_t)] \quad (6)$$

Eq. 6 holds for an arbitrary function $f(s_t, a_t)$ that is differentiable with respect to $a_t$ at an arbitrary value $\bar{a}_t$; however, a sensible choice is to use the critic $Q_w$ for $f$ and $\mu_\theta(s_t)$ for $\bar{a}_t$ to get,

$$\nabla_\theta J(\theta) = \mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t, a_t) - \bar{Q}_w(s_t, a_t))] + \mathbb{E}_{\rho_\pi}[\nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)} \nabla_\theta \mu_\theta(s_t)]. \quad (7)$$

Finally, since in practice we estimate advantages $\hat{A}(s_t, a_t)$, we write the Q-Prop estimator in terms of advantages to complete the basic derivation,

$$\nabla_\theta J(\theta) = \mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{A}(s_t, a_t) - \bar{A}_w(s_t, a_t))] + \mathbb{E}_{\rho_\pi}[\nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)} \nabla_\theta \mu_\theta(s_t)]$$
$$\bar{A}(s_t, a_t) = \bar{Q}(s_t, a_t) - \mathbb{E}_{\pi_\theta}[\bar{Q}(s_t, a_t)] = \nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)}(a_t - \mu_\theta(s_t)). \quad (8)$$

Eq. 8 is composed of an analytic gradient through the critic, as in Eq. 5, and a residual REINFORCE gradient, as in Eq. 2. From the above derivation, Q-Prop is simply a Monte Carlo policy gradient estimator with a special form of control variate. The important insight comes from the fact that $Q_w$ can be trained using off-policy data as in Eq. 4. Under this setting, Q-Prop is no longer just a Monte Carlo policy gradient method, but more closely resembles an actor-critic method, where the critic can be updated off-policy but the actor is always updated on-policy with an additional REINFORCE correction term, so that it remains a Monte Carlo policy gradient method regardless of the parametrization, training method, and performance of the critic. Therefore, Q-Prop can be directly combined with a number of prior techniques from both on-policy methods, such as natural policy gradient (Kakade, 2001), trust-region policy optimization (TRPO) (Schulman et al., 2015) and generalized advantage estimation (GAE) (Schulman et al., 2016), and off-policy methods, such as DDPG (Lillicrap et al., 2016) and Retrace(λ) (Munos et al., 2016).

Intuitively, if the critic $Q_w$ approximates $Q_\pi$ well, it provides a reliable gradient, reduces the estimator variance, and improves the convergence rate. Interestingly, control variate analysis in the next section shows that this is not the only circumstance where Q-Prop helps reduce variance.

3.2 CONTROL VARIATE ANALYSIS AND ADAPTIVE Q-PROP

For Q-Prop to be applied reliably, it is crucial to analyze how the variance of the estimator changes before and after the application of the control variate. Following prior work on control variates (Ross, 2006; Paisley et al., 2012), we first introduce $\eta(s_t)$ into Eq. 8, a weighing variable that modulates the strength of the control variate. This additional variable $\eta(s_t)$ does not introduce bias to the estimator (a per-sample code sketch of the resulting estimator is given below).

$$\nabla_\theta J(\theta) = \mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{A}(s_t, a_t) - \eta(s_t)\bar{A}_w(s_t, a_t))] + \mathbb{E}_{\rho_\pi}[\eta(s_t)\nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)} \nabla_\theta \mu_\theta(s_t)] \quad (9)$$

The variance of this estimator is given below, where $m = 1 \ldots M$ indexes the dimensions of $\theta$,

$$\mathrm{Var}^* = \mathbb{E}_{\rho_\pi}\Big[\sum_m \mathrm{Var}_{a_t}\big(\nabla_{\theta_m} \log \pi_\theta(a_t|s_t)(\hat{A}(s_t, a_t) - \eta(s_t)\bar{A}(s_t, a_t))\big)\Big]. \quad (10)$$

If we choose $\eta(s_t)$ such that $\mathrm{Var}^* < \mathrm{Var}$, where $\mathrm{Var} = \mathbb{E}_{\rho_\pi}[\sum_m \mathrm{Var}_{a_t}(\nabla_{\theta_m} \log \pi_\theta(a_t|s_t)\hat{A}(s_t, a_t))]$ is the original estimator variance measure, then we have managed to reduce the variance. Directly analyzing the above variance measure is nontrivial, for the same reason that computing the optimal baseline is difficult (Weaver & Tao, 2001). In addition, it is often impractical to get multiple action samples from the same state, which prohibits using naïve Monte Carlo to estimate the expectations. Instead, we propose a surrogate variance measure, $\mathrm{Var} = \mathbb{E}_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t, a_t))]$.
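Before analyzing this surrogate further, here is the promised per-sample sketch of the η-weighted estimator in Eq. 9 for a linear-Gaussian policy $\pi(a|s) = \mathcal{N}(\Theta s, I)$, for which the score function and $\nabla_\theta \mu_\theta$ are analytic. The linear parametrization is an assumption made for brevity; the experiments later in the paper use neural network policies.

```python
import numpy as np

def qprop_grad_sample(Theta, s, a, A_hat, eta, grad_q_a):
    """Single-sample Q-Prop gradient (Eq. 9) w.r.t. Theta for pi(a|s) = N(Theta s, I).

    A_hat   : Monte Carlo advantage estimate for (s, a), e.g. from GAE.
    eta     : control-variate weight eta(s) (Section 3.2).
    grad_q_a: callable returning grad_a Q_w(s, a) of the off-policy critic.
    """
    mu = Theta @ s
    g = grad_q_a(s, mu)                      # grad_a Q_w(s, a)|_{a = mu(s)}
    A_bar = g @ (a - mu)                     # Taylor control variate of Eq. 8
    score = np.outer(a - mu, s)              # grad_Theta log pi(a|s)
    mc_term = score * (A_hat - eta * A_bar)  # residual REINFORCE term
    analytic_term = eta * np.outer(g, s)     # gradient through critic and mu
    return mc_term + analytic_term

# Toy usage with a hypothetical critic Q_w(s, a) = -|a|^2 / 2.
Theta = np.zeros((2, 3))
g = qprop_grad_sample(Theta, np.ones(3), np.array([0.3, -0.1]),
                      A_hat=1.5, eta=1.0, grad_q_a=lambda s, a: -a)
print(g)
```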
A similar surrogate is also used by prior work on learning a state-dependent baseline (Mnih & Gregor, 2014), and the benefit is that the measure becomes more tractable,

$$\mathrm{Var}^* = \mathbb{E}_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t, a_t) - \eta(s_t)\bar{A}(s_t, a_t))]$$
$$= \mathrm{Var} + \mathbb{E}_{\rho_\pi}[-2\eta(s_t)\mathrm{Cov}_{a_t}(\hat{A}(s_t, a_t), \bar{A}(s_t, a_t)) + \eta(s_t)^2 \mathrm{Var}_{a_t}(\bar{A}(s_t, a_t))]. \quad (11)$$

Since $\mathbb{E}_\pi[\hat{A}(s_t, a_t)] = \mathbb{E}_\pi[\bar{A}(s_t, a_t)] = 0$, the terms can be simplified as below,

$$\mathrm{Cov}_{a_t}(\hat{A}, \bar{A}) = \mathbb{E}_\pi[\hat{A}(s_t, a_t)\bar{A}(s_t, a_t)]$$
$$\mathrm{Var}_{a_t}(\bar{A}) = \mathbb{E}_\pi[\bar{A}(s_t, a_t)^2] = \nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)}^T\, \Sigma_\theta(s_t)\, \nabla_a Q_w(s_t, a)|_{a=\mu_\theta(s_t)}, \quad (12)$$

where $\Sigma_\theta(s_t)$ is the covariance matrix of the stochastic policy $\pi_\theta$. The nice property of Eq. 11 is that $\mathrm{Var}_{a_t}(\bar{A})$ is analytic and $\mathrm{Cov}_{a_t}(\hat{A}, \bar{A})$ can be estimated with a single action sample. Using this estimate, we propose adaptive variants of Q-Prop that regulate the variance of the gradient estimate.

Adaptive Q-Prop. The optimal state-dependent factor $\eta(s_t)$ can be computed per state, according to $\eta^*(s_t) = \mathrm{Cov}_{a_t}(\hat{A}, \bar{A}) / \mathrm{Var}_{a_t}(\bar{A})$. This provides the maximum reduction in variance according to Eq. 11. Substituting $\eta^*(s_t)$ into Eq. 11, we get $\mathrm{Var}^* = \mathbb{E}_{\rho_\pi}[(1 - \rho_{\mathrm{corr}}(\hat{A}, \bar{A})^2)\,\mathrm{Var}_{a_t}(\hat{A})]$, where $\rho_{\mathrm{corr}}$ is the correlation coefficient, which achieves guaranteed variance reduction if at any state $\bar{A}$ is correlated with $\hat{A}$. We call this the fully adaptive Q-Prop method. An important conclusion from this analysis is that, in adaptive Q-Prop, the critic $Q_w$ does not necessarily need to approximate $Q_\pi$ well to produce good results. Its Taylor expansion merely needs to be correlated with $\hat{A}$, positively or even negatively. This is in contrast with actor-critic methods, where performance is greatly dependent on the absolute accuracy of the critic's approximation.

Conservative and Aggressive Q-Prop. In practice, the single-sample estimate of $\mathrm{Cov}_{a_t}(\hat{A}, \bar{A})$ has high variance itself, and we propose the following two practical implementations of adaptive Q-Prop: (1) $\eta(s_t) = 1$ if $\hat{\mathrm{Cov}}_{a_t}(\hat{A}, \bar{A}) > 0$ and $\eta(s_t) = 0$ otherwise, and (2) $\eta(s_t) = \mathrm{sign}(\hat{\mathrm{Cov}}_{a_t}(\hat{A}, \bar{A}))$. The first implementation, which we call conservative Q-Prop, can be thought of as a more conservative version of Q-Prop, which effectively disables the control variate for some samples of the states. This is sensible, as if $\hat{A}$ and $\bar{A}$ are negatively correlated, it is likely that the critic is very poor. The second variant can correspondingly be termed aggressive Q-Prop, since it makes more liberal use of the control variate. (Both rules, together with the fully adaptive factor, are sketched in code after Algorithm 1 below.)

3.3 Q-PROP ALGORITHM

Pseudo-code for the adaptive Q-Prop algorithm is provided in Algorithm 1. It is a mixture of policy gradient and actor-critic. At each iteration, it first rolls out the stochastic policy to collect on-policy samples, adds the batch to a replay buffer, takes a few gradient steps on the critic, computes $\hat{A}$ and $\bar{A}$, and finally applies a gradient step on the policy $\pi_\theta$.

Algorithm 1 Adaptive Q-Prop
1: Initialize $w$ for critic $Q_w$, $\theta$ for stochastic policy $\pi_\theta$, and replay buffer $\mathcal{R} \leftarrow \emptyset$.
2: repeat
3:   for $e = 1, \ldots, E$ do   ▷ Collect $E$ episodes of on-policy experience using $\pi_\theta$
4:     $s_{0,e} \sim p(s_0)$
5:     for $t = 0, \ldots, T-1$ do
6:       $a_{t,e} \sim \pi_\theta(\cdot|s_{t,e})$, $s_{t+1,e} \sim p(\cdot|s_{t,e}, a_{t,e})$, $r_{t,e} = r(s_{t,e}, a_{t,e})$
7:   Add batch data $\mathcal{B} = \{s_{0:T,1:E}, a_{0:T-1,1:E}, r_{0:T-1,1:E}\}$ to replay buffer $\mathcal{R}$
8:   Take $E \cdot T$ gradient steps on $Q_w$ using $\mathcal{R}$ and $\pi_\theta$
9:   Fit $V_\phi(s_t)$ using $\mathcal{B}$
10:  Compute $\hat{A}_{t,e}$ using GAE(λ) and $\bar{A}_{t,e}$ using Eq. 7
11:  Set $\eta_{t,e}$ based on Section 3.2
12:  Compute and center the learning signals $l_{t,e} = \hat{A}_{t,e} - \eta_{t,e}\bar{A}_{t,e}$
13:  Compute $\nabla_\theta J(\theta) \approx \frac{1}{ET}\sum_e \sum_t \big[\nabla_\theta \log \pi_\theta(a_{t,e}|s_{t,e})\, l_{t,e} + \eta_{t,e}\nabla_a Q_w(s_{t,e}, a)|_{a=\mu_\theta(s_{t,e})}\nabla_\theta \mu_\theta(s_{t,e})\big]$
14:  Take a gradient step on $\pi_\theta$ using $\nabla_\theta J(\theta)$, optionally with a trust-region constraint using $\mathcal{B}$
15: until $\pi_\theta$ converges.
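As referenced above, the three η(s_t) schemes of Section 3.2 reduce to a few lines given the analytic variance of Eq. 12 and the single-sample covariance estimate $\hat{A}\cdot\bar{A}$ (valid because both advantages have zero mean under $\pi_\theta$). This is a sketch under those assumptions, not the authors' released implementation.

```python
import numpy as np

def eta_weight(A_hat, A_bar, grad_q, Sigma, mode="conservative"):
    """Per-state control-variate weight eta(s_t) from Section 3.2.

    A_hat, A_bar: advantage estimates for a single (state, action) sample.
    grad_q      : grad_a Q_w(s, a)|_{a = mu(s)};  Sigma: policy covariance at s.
    """
    cov_hat = A_hat * A_bar                  # single-sample Cov(A_hat, A_bar)
    var_bar = grad_q @ Sigma @ grad_q        # analytic Var(A_bar), Eq. 12
    if mode == "fully_adaptive":
        return cov_hat / max(var_bar, 1e-8)  # eta* = Cov / Var, optimum of Eq. 11
    if mode == "conservative":
        return 1.0 if cov_hat > 0 else 0.0   # disable the CV when anti-correlated
    if mode == "aggressive":
        return float(np.sign(cov_hat))       # always use the CV, flipping its sign
    raise ValueError(mode)

# Toy usage: a unit-covariance Gaussian policy in two action dimensions.
print(eta_weight(A_hat=1.2, A_bar=-0.4, grad_q=np.array([0.5, -1.0]),
                 Sigma=np.eye(2), mode="conservative"))  # -> 0.0
```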
In our implementation, the critic $Q_w$ is fitted with off-policy TD learning using the same techniques as in DDPG (Lillicrap et al., 2016):

$$w = \arg\min_w \mathbb{E}_{s_t \sim \rho_\beta(\cdot),\, a_t \sim \beta(\cdot|s_t)}\big[(r(s_t, a_t) + \gamma\,\mathbb{E}_\pi[Q'(s_{t+1}, a_{t+1})] - Q_w(s_t, a_t))^2\big]. \quad (13)$$

$V_\phi$ is fitted with the same technique as in (Schulman et al., 2016). Generalized advantage estimation (GAE) (Schulman et al., 2016) is used to estimate $\hat{A}$. The policy update can be done by any method that utilizes the first-order gradient and possibly the on-policy batch data, which includes trust region policy optimization (TRPO) (Schulman et al., 2015). Importantly, this is just one possible implementation of Q-Prop, and in Appendix C we show a more general form that can interpolate between pure policy gradient and off-policy actor-critic. (A sketch of the critic regression step in Eq. 13 appears at the end of this section.)

3.4 LIMITATIONS

A limitation of Q-Prop is that if data collection is very fast, e.g. using fast simulators, the compute time per episode is bound by the critic training at each iteration, and is similar to that of DDPG and usually much more than that of TRPO. However, in applications where data collection speed is the bottleneck, there is sufficient time between policy updates to fit $Q_w$ well, which can be done asynchronously from the data collection, and the compute time of Q-Prop will be about the same as that of TRPO.

Another limitation is the robustness to bad critics. We empirically show that our conservative Q-Prop is more robust than standard Q-Prop and much more robust than pure off-policy actor-critic methods such as DDPG; however, estimating when an off-policy critic is reliable or not is still a fundamental problem that shall be investigated further. We can also alleviate this limitation by adopting more stable off-policy critic learning techniques such as Retrace(λ) (Munos et al., 2016).
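Referring back to Eq. 13, the critic update is plain regression onto a bootstrapped target. The sketch below performs one such batch step; a single sampled next action stands in for the inner expectation over π, and all callables (q, grad_q_w, q_target, pi_sample) are hypothetical placeholders rather than the DDPG-style networks used in practice.

```python
import numpy as np

def critic_td_step(w, batch, q, grad_q_w, q_target, pi_sample, gamma=0.99, lr=1e-3):
    """One squared-error regression step on the critic (Eq. 13).

    batch    : iterable of (s, a, r, s_next) transitions from the replay buffer.
    q        : q(w, s, a), the critic value under the current weights w.
    grad_q_w : gradient of q(w, s, a) with respect to w.
    q_target : slowly-updated target critic Q'(s, a) with its own frozen weights.
    pi_sample: draws a_next ~ pi_theta(.|s_next) to approximate E_pi[Q'(s', a')].
    """
    grad = np.zeros_like(w)
    for s, a, r, s_next in batch:
        a_next = pi_sample(s_next)
        y = r + gamma * q_target(s_next, a_next)       # bootstrapped TD target
        grad += (q(w, s, a) - y) * grad_q_w(w, s, a)   # half-MSE gradient
    return w - lr * grad / len(batch)

# Toy usage with a linear critic q(w, s, a) = w . phi(s, a), phi = concat(s, a).
phi = lambda s, a: np.concatenate([s, a])
w = np.zeros(4)
batch = [(np.ones(2), np.zeros(2), 1.0, np.ones(2))]
w = critic_td_step(w, batch,
                   q=lambda w, s, a: w @ phi(s, a),
                   grad_q_w=lambda w, s, a: phi(s, a),
                   q_target=lambda s, a: 0.0,
                   pi_sample=lambda s: np.zeros(2))
print(w)
```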
4 RELATED WORK

Variance reduction in policy gradient methods is a long-standing problem with a large body of prior work (Weaver & Tao, 2001; Greensmith et al., 2004; Schulman et al., 2016). However, exploration of action-dependent control variates is relatively recent, with most work focusing instead on simpler baselining techniques (Ross, 2006). A subtle exception is compatible feature approximation (Sutton et al., 1999), which can be viewed as a control variate, as explained in Appendix B. Another exception is the doubly robust estimator in contextual bandits (Dudík et al., 2011), which uses a different control variate whose bias cannot be tractably corrected. Control variates were explored recently not in RL but for approximate inference in stochastic models (Paisley et al., 2012), and the closest related work in that domain is the MuProp algorithm (Gu et al., 2016a), which uses a mean-field network as a surrogate for backpropagating a deterministic gradient through stochastic discrete variables. MuProp is not directly applicable to model-free RL because the dynamics are unknown; however, it can be if the dynamics are learned, as in model-based RL (Atkeson & Santamaria, 1997; Deisenroth & Rasmussen, 2011). This model-based Q-Prop is itself an interesting direction of research, as it effectively corrects bias in model-based learning.

Part of the benefit of Q-Prop is the ability to use off-policy data to improve on-policy policy gradient methods. Prior methods that combine off-policy data with policy gradients either introduce bias (Sutton et al., 1999; Silver et al., 2014) or use importance weighting, which is known to result in degenerate importance weights in high dimensions, resulting in very high variance (Precup, 2000; Levine & Koltun, 2013). Q-Prop provides a new approach for using off-policy data to reduce variance without introducing further bias.

Lastly, since Q-Prop uses both on-policy policy updates and off-policy critic learning, it can take advantage of prior work along both lines of research. We chose to implement Q-Prop on top of TRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but combining Q-Prop with other on-policy update schemes and off-policy critic training methods is an interesting direction for future work. For example, Q-Prop can also be used with other on-policy policy gradient methods such as A3C (Mnih et al., 2016) and off-policy advantage estimation methods such as Retrace(λ) (Munos et al., 2016), GTD2 (Sutton et al., 2009), emphatic TD (Sutton et al., 2015), and WIS-LSTD (Mahmood et al., 2014).

5 EXPERIMENTS

Figure 1: Illustrations of OpenAI Gym MuJoCo domains (Brockman et al., 2016; Duan et al., 2016): (a) Ant, (b) HalfCheetah, (c) Hopper, (d) Humanoid, (e) Reacher, (f) Swimmer, (g) Walker.

We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gym benchmark (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012), as shown in Figure 1. Algorithms are identified by acronyms, followed by a number indicating batch size, except for DDPG, which is a prior online actor-critic algorithm (Lillicrap et al., 2016). "c-" and "a-" denote the conservative and aggressive Q-Prop variants described in Section 3.2. "TR-" denotes trust-region policy optimization (Schulman et al., 2015), while "V-" denotes vanilla policy gradient. For example, "TR-c-Q-Prop-5000" means conservative Q-Prop with the trust-region policy update and a batch size of 5000. "VPG" and "TRPO" are vanilla policy gradient and trust-region policy optimization respectively (Schulman et al., 2016; Duan et al., 2016). Unless otherwise stated, all policy gradient methods are implemented with GAE(λ = 0.97) (Schulman et al., 2016). Note that TRPO-GAE is currently the state-of-the-art method on most of the OpenAI Gym benchmark tasks, though our experiments show that a well-tuned DDPG implementation sometimes achieves better results. Our algorithm implementations are built on top of the rllab TRPO and DDPG code from Duan et al. (2016) and are available at https://github.com/shaneshixiang/rllabplusplus. Policy and value function architectures and other training details, including hyperparameter values, are provided in Appendix D.

5.1 ADAPTIVE Q-PROP

First, it is useful to identify how reliable each variant of Q-Prop is. In this section, we analyze standard Q-Prop and two adaptive variants, c-Q-Prop and a-Q-Prop, and demonstrate the stability of the method across different batch sizes. Figure 2a shows a comparison of Q-Prop variants with trust-region updates on the HalfCheetah-v1 domain, along with the best-performing TRPO hyperparameters. The results are consistent with the theory: conservative Q-Prop achieves much more stable performance than the standard and aggressive variants, and all Q-Prop variants significantly outperform TRPO in terms of sample efficiency; e.g., conservative Q-Prop reaches an average reward of 4000 using about 10 times fewer samples than TRPO.

Figure 2: Average return over episodes in HalfCheetah-v1 during learning, exploring adaptive Q-Prop methods and different batch sizes. (a) Standard Q-Prop vs adaptive variants. (b) Conservative Q-Prop vs TRPO across batch sizes.
All variants of Q-Prop substantially outperform TRPO in terms of sample efficiency. TR-c-QP, conservative Q-Prop with the trust-region update, performs most stably across different batch sizes.

Figure 2b shows the performance of conservative Q-Prop against TRPO across different batch sizes. Due to high variance in the gradient estimates, TRPO typically requires very large batch sizes, e.g. 25000 steps or 25 episodes per update, to perform well. We show that our Q-Prop methods can learn even with just 1 episode per update, and achieve better sample efficiency with small batch sizes. This shows that Q-Prop significantly reduces the variance compared to the prior methods.

As we discussed in Section 1, stability is a significant challenge with state-of-the-art deep RL methods, and is very important for being able to reliably use deep RL for real-world tasks. In the rest of the experiments, we use conservative Q-Prop as the main Q-Prop implementation.

5.2 EVALUATION ACROSS ALGORITHMS

Figure 3: Average return over episodes in HalfCheetah-v1 and Humanoid-v1 during learning, comparing Q-Prop against other model-free algorithms. (a) Comparing algorithms on HalfCheetah-v1. (b) Comparing algorithms on Humanoid-v1. Q-Prop with vanilla policy gradient outperforms TRPO on HalfCheetah. Q-Prop significantly outperforms TRPO in convergence time on Humanoid.

In this section, we evaluate two versions of conservative Q-Prop, v-c-Q-Prop using vanilla policy gradient and TR-c-Q-Prop using trust-region updates, against other model-free algorithms on the HalfCheetah-v1 domain. Figure 3a shows that the c-Q-Prop methods significantly outperform the best TRPO and VPG methods. Even Q-Prop with vanilla policy gradient is comparable to TRPO, confirming the significant benefits from variance reduction. DDPG, on the other hand, exhibits inconsistent performance. With proper reward scaling, i.e. "DDPG-r0.1", it outperforms the other methods as well as the DDPG results reported in prior work (Duan et al., 2016; Amos et al., 2016). This illustrates the sensitivity of DDPG to hyperparameter settings, while Q-Prop exhibits more stable, monotonic learning behavior when compared to DDPG. In the next section we show that this improved stability allows Q-Prop to outperform DDPG in more complex domains.

5.3 EVALUATION ACROSS DOMAINS

Lastly, we evaluate Q-Prop against TRPO and DDPG across multiple domains. While the gym environments are biased toward locomotion, we expect we can achieve similar performance on manipulation tasks such as those in Lillicrap et al. (2016). Table 1 summarizes the results, including the best attained average rewards and the steps to convergence. Q-Prop consistently outperforms TRPO in terms of sample complexity and sometimes achieves higher rewards than DDPG in more complex domains. A particularly notable case is shown in Figure 3b, where Q-Prop substantially improves sample efficiency over TRPO on the Humanoid-v1 domain, while DDPG cannot find a good solution. The better performance on the more complex domains highlights the importance of stable deep RL algorithms: while costly hyperparameter sweeps may allow even less stable algorithms to perform well on simpler problems, more complex tasks might have such narrow regions of stable hyperparameters that discovering them becomes impractical.
Domain       Threshold | TR-c-Q-Prop          | TRPO                 | DDPG
                       | MaxReturn   Episodes | MaxReturn   Episodes | MaxReturn   Episodes
Ant          3500      | 3534        4975     | 4239        13825    | 957         N/A
HalfCheetah  4700      | 4811        20785    | 4734        26370    | 7490        600
Hopper       2000      | 2957        5945     | 2486        5715     | 2604        965
Humanoid     2500      | >3492       14750    | 918         >30000   | 552         N/A
Reacher      -7        | -6.0        2060     | -6.7        2840     | -6.6        1800
Swimmer      90        | 103         2045     | 110         3025     | 150         500
Walker       3000      | 4030        3685     | 3567        18875    | 3626        2125

Table 1: Q-Prop, TRPO and DDPG results showing the max average rewards attained in the first 30k episodes and the episodes needed to cross specific reward thresholds. Q-Prop often learns more sample-efficiently than TRPO and can solve difficult domains such as Humanoid better than DDPG.

6 DISCUSSION AND CONCLUSION

We presented Q-Prop, a policy gradient algorithm that combines reliable, consistent, and potentially unbiased on-policy gradient estimation with a sample-efficient off-policy critic that acts as a control variate. The method provides a large improvement in sample efficiency compared to state-of-the-art policy gradient methods such as TRPO, while outperforming state-of-the-art actor-critic methods on more challenging tasks such as humanoid locomotion. We hope that techniques like these, which combine on-policy Monte Carlo gradient estimation with sample-efficient variance reduction through off-policy critics, will eventually lead to deep reinforcement learning algorithms that are more stable and efficient, and therefore better suited for application to complex real-world learning tasks.

ACKNOWLEDGMENTS

We thank Rocky Duan for sharing and answering questions about the rllab code, and Yutian Chen and Laurent Dinh for discussions on control variates. SG and RT were funded by NSERC, Google, and EPSRC grants EP/L000776/1 and EP/M026957/1. ZG was funded by EPSRC grant EP/J012300/1 and the Alan Turing Institute (EP/N510129/1).
S1hUM3FVx
Off-policy TD learning of the critic
7: Good paper, accept
This paper proposes a new policy gradient method that uses the Taylor expansion of a critic as the control variate to reduce the variance in gradient estimation. The key idea is that the critic can be learned in an off-policy manner so that it is more sample efficient. Although the algorithm structure is similar to actor-critic, the critic information is “truncated” in a proper manner to reduce the variance of the policy gradient. The proposed methods are evaluated on OpenAI Gym's MuJoCo domains. Q-Prop is shown to produce more stable performance than DDPG and to have higher sample efficiency than TRPO.

The stability of off-policy TD learning for the critic is not guaranteed. Did the authors observe any such instability in the experiments? As the authors state in the paper, the critic does not need to approximate the actual value function very well as long as it is correlated with \hat{A}. In the two adaptive Q-Prop schemes, the authors apply some tricks (conservative and aggressive adaptation) to control for possibly unreliable estimates from the critic. This could be further evidence that the off-policy critic is not reliable. The authors may need to comment more on this point. In particular, it would be useful if the authors could show/justify that, by such a design, Q-Prop is robust against unreliable critic estimates.

The authors seem to indicate that the advantage of Q-Prop over DDPG is its insensitivity to hyperparameters. In Figure 3(a), the authors show that DDPG is sensitive to hyperparameters. However, the sensitivity of Q-Prop to the same hyperparameters is not shown.

Experiments in the paper show that Q-Prop has an advantage over TRPO in sample complexity. However, few experiments are shown to justify the advantage of Q-Prop over DDPG. This is important because Table 1 shows that TR-c-Q-Prop needs significantly more samples than DDPG on Hopper, HalfCheetah and Swimmer. Any comment on that?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJ3rcZcxl
ICLR.cc/2017/conference
2017
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
["Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E. Turner", "Sergey Levine"]
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.
["Deep learning", "Reinforcement Learning"]
ABSTRACT

Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.

1 INTRODUCTION

Model-free reinforcement learning is a promising approach for solving arbitrary goal-directed sequential decision-making problems with only high-level reward signals and no supervision. It has recently been extended to utilize large neural network policies and value functions, and has been shown to be successful in solving a range of difficult problems (Mnih et al., 2015; Schulman et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Gu et al., 2016b; Mnih et al., 2016). Deep neural network parametrization minimizes the need for manual feature and policy engineering, and allows learning end-to-end policies mapping from high-dimensional inputs, such as images, directly to actions. However, such expressive parametrization also introduces a number of practical problems. Deep reinforcement learning algorithms tend to be sensitive to hyperparameter settings, often requiring extensive hyperparameter sweeps to find good values. Poor hyperparameter settings tend to produce unstable or non-convergent learning. Deep RL algorithms also tend to exhibit high sample complexity, often to the point of being impractical to run on real physical systems. Although a number of recent techniques have sought to alleviate some of these issues (Hasselt, 2010; Mnih et al., 2015; Schulman et al., 2015; 2016), these recent advances still provide only a partial solution to the instability and sample complexity challenges.

Model-free reinforcement learning consists of on- and off-policy methods. Monte Carlo policy gradient methods (Peters & Schaal, 2006; Schulman et al., 2015) are popular on-policy methods that directly maximize the cumulative future returns with respect to the policy. While these algorithms can offer unbiased (or nearly unbiased, as discussed in Section 2.1) estimates of the gradient, they rely on Monte Carlo estimation and often suffer from high variance.
To cope with high variancegradient estimates and difficult optimization landscapes, a number of techniques have been pro-posed, including constraining the change in the policy at each gradient step (Kakade, 2001; Peterset al., 2010) and mixing value-based back-ups to trade off bias and variance in Monte Carlo returnestimates (Schulman et al., 2015). However, these methods all tend to require very large numbersof samples to deal with the high variance when estimating gradients of high-dimensional neuralnetwork policies. The crux of the problem with policy gradient methods is that they can only effec-tively use on-policy samples, which means that they require collecting large amounts of on-policyexperiences after each parameter update to the policy. This makes them very sample intensive. Off-policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015;Gu et al., 2016b) and off-policy actor-critic methods (Lever, 2014; Lillicrap et al., 2016), can in-stead use all samples, including off-policy samples, by adopting temporal difference learning withexperience replay. Such methods are much more sample-efficient. However, convergence of thesealgorithms is in general not guaranteed with non-linear function approximators, and practical con-vergence and instability issues typically mean that extensive hyperparameter tuning is required toattain good results.In order to make deep reinforcement learning practical as a tool for tackling real-world tasks, wemust develop methods that are both data efficient and stable. In this paper, we propose Q-Prop, astep in this direction that combines the advantages of on-policy policy gradient methods with the effi-ciency of off-policy learning. Unlike prior approaches for off-policy learning, which either introducebias (Sutton et al., 1999; Silver et al., 2014) or increase variance (Precup, 2000; Levine & Koltun,2013; Munos et al., 2016), Q-Prop can reduce the variance of gradient estimator without addingbias; unlike prior approaches for critic-based variance reduction (Schulman et al., 2016) which fitthe value function on-policy, Q-Prop learns the action-value function off-policy. The core idea isto use the first-order Taylor expansion of the critic as a control variate, resulting in an analyticalgradient term through the critic and a Monte Carlo policy gradient term consisting of the residualsin advantage approximations. The method helps unify policy gradient and actor-critic methods: itcan be seen as using the off-policy critic to reduce variance in policy gradient or using on-policyMonte Carlo returns to correct for bias in the critic gradient. We further provide theoretical analy-sis of the control variate, and derive two additional variants of Q-Prop. The method can be easilyincorporated into any policy gradient algorithm. We show that Q-Prop provides substantial gainsin sample efficiency over trust region policy optimization (TRPO) with generalized advantage esti-mation (GAE) (Schulman et al., 2015; 2016), and improved stability over deep deterministic policygradient (DDPG) (Lillicrap et al., 2016) across a repertoire of continuous control tasks.2 B ACKGROUNDReinforcement learning (RL) aims to learn a policy for an agent such that it behaves optimallyaccording to a reward function. 
At a time step tand statest, the agent chooses an action atac-cording to its policy p(atjst), the state of the agent and the environment changes to new state st+1according to dynamics p(st+1jst;at), the agent receives a reward r(st;at), and the process con-tinues. Let Rtdenote a g-discounted cumulative return from tfor an infinite horizon problem, i.eRt=å¥t0=tgt0tr(st0;at0). The goal of reinforcement learning is to maximize the expected returnJ(q) =Epq[R0]with respect to the policy parameters q. In this section, we review several standardtechniques for performing this optimization, and in the next section, we will discuss our proposedQ-Prop algorithm that combines the strengths of these approaches to achieve efficient, stable RL.Monte Carlo policy gradient refers to policy gradient methods that use full Monte Carlo returns,e.g. REINFORCE (Williams, 1992) and TRPO (Schulman et al., 2015), and policy gradient withfunction approximation refers to actor-critic methods (Sutton et al., 1999) which optimize the policyagainst a critic, e.g. deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016).2.1 M ONTE CARLO POLICY GRADIENT METHODSMonte Carlo policy gradient methods apply direct gradient-based optimization to the reinforcementlearning objective. This involves directly differentiating the J(q)objective with respect to the policy2Published as a conference paper at ICLR 2017parameters q. The standard form, known as the REINFORCE algorithm (Williams, 1992), is shownbelow:ÑqJ(q) =Ep[¥åt=0Ñqlogpq(atjst)gtRt] =Ep[¥åt=0gtÑqlogpq(atjst)(Rtb(st))]; (1)where b(st)is known as the baseline. For convenience of later derivations, Eq. 1 can also be writtenas below, where rp(s) =å¥t=0gtp(st=s)is the unnormalized discounted state visitation frequency,ÑqJ(q) =Estrp();atp(jst)[Ñqlogpq(atjst)(Rtb(st))]: (2)Eq. 2 is an unbiased gradient of the RL objective. However, in practice, most policy gradient meth-ods effectively use undiscounted state visitation frequencies, i.e. g=1 in the equal for rp, andare therefore biased; in fact, making them unbiased often hurts performance (Thomas, 2014). Inthis paper, we mainly discuss bias due to function approximation, off-policy learning, and valueback-ups.The gradient is estimated using Monte Carlo samples in practice and has very high variance. Aproper choice of baseline is necessary to reduce the variance sufficiently such that learning becomesfeasible. A common choice is to estimate the value function of the state Vp(st)to use as the base-line, which provides an estimate of advantage function Ap(st;at), which is a centered action-valuefunction Qp(st;at), as defined below:Vp(st) =Ep[Rt] =Epq(atjst)[Qp(st;at)]Qp(st;at) =r(st;at)+gEp[Rt+1] =r(st;at)+gEp(st+1jst;at)[Vp(st+1)]Ap(st;at) =Qp(st;at)Vp(st):(3)Qp(st;at)summarizes the performance of each action from a given state, assuming it follows pthereafter, and Ap(st;at)provides a measure of how each action compares to the average perfor-mance at the state st, which is given by Vp(st). Using Ap(st;at)centers the learning signal andreduces variance significantly.Besides high variance, another problem with the policy gradient is that it requires on-policy samples.This makes policy gradient optimization very sample intensive. To achieve similar sample efficiencyas off-policy methods, we can attempt to include off-policy data. 
Prior attempts use importancesampling to include off-policy trajectories; however, these are known to be difficult scale to high-dimensional action spaces because of rapidly degenerating importance weights (Precup, 2000).2.2 P OLICY GRADIENT WITH FUNCTION APPROXIMATIONPolicy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods,include a policy evaluation step, which often uses temporal difference (TD) learning to fit a criticQwfor the current policy p(q), and a policy improvement step which greedily optimizes the policypagainst the critic estimate Qw. Significant gains in sample efficiency may be achievable using off-policy TD learning for the critic, as in Q-learning and deterministic policy gradient (Sutton, 1990;Silver et al., 2014), typically by means of experience replay for training deep Q networks (Mnihet al., 2015; Lillicrap et al., 2016; Gu et al., 2016b).One particularly relevant example of such a method is the deep deterministic policy gradient(DDPG) (Silver et al., 2014; Lillicrap et al., 2016). The updates for this method are given below,where pq(atjst) =d(at=q(st))is a deterministic policy, bis arbitrary exploration distribution,andrbcorresponds to sampling from a replay buffer. Q(;)is the target network that slowly tracksQw(Lillicrap et al., 2016).w=argminwEstrb();atb(jst)[(r(st;at)+gQ(st+1;q(st+1))Qw(st;at))2]q=argmaxqEstrb()[Qw(st;q(st))](4)When the critic and policy are parametrized with neural networks, full optimization is expensive,and instead stochastic gradient optimization is used. The gradient in the policy improvement phaseis given below, which is generally a biased gradient of J(q).ÑqJ(q)Estrb()[ÑaQw(st;a)ja=q(st)Ñqq(st)] (5)3Published as a conference paper at ICLR 2017The crucial benefits of DDPG are that it does not rely on high variance REINFORCE gradients and istrainable on off-policy data. These properties make DDPG and other analogous off-policy methodssignificantly more sample-efficient than policy gradient methods (Lillicrap et al., 2016; Gu et al.,2016b; Duan et al., 2016). However, the use of a biased policy gradient estimator makes analyzingits convergence and stability properties difficult.3 Q-P ROPIn this section, we derive the Q-Prop estimator for policy gradient. The key idea from this estimatorcomes from observing Equations 2 and 5 and noting that the former provides an almost unbiased(see Section 2.1), but high variance gradient, while the latter provides a deterministic, but biasedgradient. By using the deterministic biased estimator as a particular form of control variate (Ross,2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use bothtypes of gradient information to construct a new estimator that in practice exhibits improved sampleefficiency through the inclusion of off-policy samples while preserving the stability of on-policyMonte Carlo policy gradient.3.1 Q-P ROP ESTIMATORTo derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of anarbitrary function f(st;at), ̄f(st;at) =f(st; ̄at)+Ñaf(st;a)ja= ̄at(at ̄at)as the control vari-ate for the policy gradient estimator. We use ˆQ(st;at) =å¥t0=tgt0tr(st0;at0)to denote MonteCarlo return from state stand actionat, i.e. Ep[ˆQ(st;at)] = r(st;at) +gEp[Vp(st+1)], andq(st) =Epq(atjst)[at]to denote the expected action of a stochastic policy pq. 
Full derivation isin Appendix A.ÑqJ(q) =Erp;p[Ñqlogpq(atjst)(ˆQ(st;at) ̄f(st;at)]+Erp;p[Ñqlogpq(atjst) ̄f(st;at)]=Erp;p[Ñqlogpq(atjst)(ˆQ(st;at) ̄f(st;at)]+Erp[Ñaf(st;a)ja= ̄atÑqq(st)](6)Eq. 6 is general for arbitrary function f(st;at)that is differentiable with respect to atat an arbitraryvalue of ̄at; however, a sensible choice is to use the critic Qwforfandq(st)for ̄atto get,ÑqJ(q) =Erp;p[Ñqlogpq(atjst)(ˆQ(st;at) ̄Qw(st;at)]+Erp[ÑaQw(st;a)ja=q(st)Ñqq(st)]:(7)Finally, since in practice we estimate advantages ˆA(st;at), we write the Q-Prop estimator in termsof advantages to complete the basic derivation,ÑqJ(q) =Erp;p[Ñqlogpq(atjst)(ˆA(st;at) ̄Aw(st;at)]+Erp[ÑaQw(st;a)ja=q(st)Ñqq(st)] ̄A(st;at) = ̄Q(st;at)Epq[ ̄Q(st;at)] =ÑaQw(st;a)ja=q(st)(atq(st)):(8)Eq. 8 is composed of an analytic gradient through the critic as in Eq. 5 and a residual REINFORCEgradient in Eq. 2. From the above derivation, Q-Prop is simply a Monte Carlo policy gradientestimator with a special form of control variate. The important insight comes from the fact thatQwcan be trained using off-policy data as in Eq. 4. Under this setting, Q-Prop is no longer justa Monte Carlo policy gradient method, but more closely resembles an actor-critic method, wherethe critic can be updated off-policy but the actor is always updated on-policy with an additionalREINFORCE correction term so that it remains a Monte Carlo policy gradient method regardlessof the parametrization, training method, and performance of the critic. Therefore, Q-Prop can bedirectly combined with a number of prior techniques from both on-policy methods such as naturalpolicy gradient (Kakade, 2001), trust-region policy optimization (TRPO) (Schulman et al., 2015)and generalized advantage estimation (GAE) (Schulman et al., 2016), and off-policy methods suchas DDPG (Lillicrap et al., 2016) and Retrace( l) (Munos et al., 2016).Intuitively, if the critic Qwapproximates Qpwell, it provides a reliable gradient, reduces the estima-tor variance, and improves the convergence rate. Interestingly, control variate analysis in the nextsection shows that this is not the only circumstance where Q-Prop helps reduce variance.4Published as a conference paper at ICLR 20173.2 C ONTROL VARIATE ANALYSIS AND ADAPTIVE Q-P ROPFor Q-Prop to be applied reliably, it is crucial to analyze how the variance of the estimator changesbefore and after the application of control variate. Following the prior work on control vari-ates (Ross, 2006; Paisley et al., 2012), we first introduce h(st)to Eq. 8, a weighing variable thatmodulates the strength of control variate. This additional variable h(st)does not introduce bias tothe estimator.ÑqJ(q) =Erp;p[Ñqlogpq(atjst)(ˆA(st;at)h(st) ̄Aw(st;at)]+Erp[h(st)ÑaQw(st;a)ja=q(st)Ñqq(st)](9)The variance of this estimator is given below, where m=1:::Mindexes the dimension of q,Var=ErpåmVarat(Ñqmlogpq(atjst)(ˆA(st;at)h(st) ̄A(st;at))): (10)If we choose h(st)such that Var<Var, where Var =Erp[åmVarat(Ñqmlogpq(atjst)ˆA(st;at))]is the original estimator variance measure, then we have managed to reduce the variance. Directlyanalyzing the above variance measure is nontrivial, for the same reason that computing the optimalbaseline is difficult (Weaver & Tao, 2001). In addition, it is often impractical to get multiple actionsamples from the same state, which prohibits using na ̈ıve Monte Carlo to estimate the expectations.Instead, we propose a surrogate variance measure, Var =Erp[Varat(ˆA(st;at))]. 
A similar surrogateis also used by prior work on learning state-dependent baseline (Mnih & Gregor, 2014), and thebenefit is that the measure becomes more tractable,Var=Erp[Varat(ˆA(st;at)h(st) ̄A(st;at))]=Var+Erp[2h(st)Covat(ˆA(st;at); ̄A(st;at))+h(st)2Varat( ̄A(st;at))]:(11)SinceEp[ˆA(st;at)] =Ep[ ̄A(st;at)] = 0, the terms can be simplified as below,Covat(ˆA; ̄A) =Ep[ˆA(st;at) ̄A(st;at)]Varat( ̄A) =Ep[ ̄A(st;at)2] =ÑaQw(st;a)jTa=q(st)Sq(st)ÑaQw(st;a)ja=q(st);(12)where Sq(st)is the covariance matrix of the stochastic policy pq. The nice property of Eq. 11 isthat Varat( ̄A)is analytical and Cov at(ˆA; ̄A)can be estimated with single action sample. Using thisestimate, we propose adaptive variants of Q-Prop that regulate the variance of the gradient estimate.Adaptive Q-Prop. The optimal state-dependent factor h(st)can be computed per state, accord-ing to h(st) =Covat(ˆA; ̄A)=Varat( ̄A). This provides maximum reduction in variance accordingto Eq. 11. Substituting h(st)into Eq. 11, we get Var=Erp[(1rcorr(ˆA; ̄A)2)Varat(ˆA)], wherercorris the correlation coefficient, which achieves guaranteed variance reduction if at any state ̄Aiscorrelated with ˆA. We call this the fully adaptive Q-Prop method. An important conclusion fromthis analysis is that, in adaptive Q-Prop, the critic Qwdoes not necessarily need to be approximatingQpwell to produce good results. Its Taylor expansion merely needs to be correlated with ˆA, posi-tively or even negatively. This is in contrast with actor-critic methods, where performance is greatlydependent on the absolute accuracy of the critic’s approximation.Conservative and Aggressive Q-Prop. In practice, the single-sample estimate of Cov at(ˆA; ̄A)hashigh variance itself, and we propose the following two practical implementations of adaptive Q-Prop:(1)h(st) =1 if ˆCovat(ˆA; ̄A)>0 and h(st) =0 if otherwise, and (2) h(st) =sign(ˆCovat(ˆA; ̄A)). Thefirst implementation, which we call conservative Q-Prop, can be thought of as a more conservativeversion of Q-Prop, which effectively disables the control variate for some samples of the states. Thisis sensible as if ˆAand ̄Aare negatively correlated, it is likely that the critic is very poor. The secondvariant can correspondingly be termed aggressive Q-Prop, since it makes more liberal use of thecontrol variate.3.3 Q-P ROP ALGORITHMPseudo-code for the adaptive Q-Prop algorithm is provided in Algorithm 1. It is a mixture of policygradient and actor-critic. At each iteration, it first rolls out the stochastic policy to collect on-policy5Published as a conference paper at ICLR 2017Algorithm 1 Adaptive Q-Prop1: Initialize wfor critic Qw,qfor stochastic policy pq, and replay buffer R / 0.2:repeat3: fore=1;:::; Edo .Collect Eepisodes of on-policy experience using pq4:s0;ep(s0)5: fort=0;:::; T1do6:at;epq(jst;e),st+1;ep(jst;e;at;e),rt;e=r(st;e;at;e)7: Add batch data B=fs0:T;1:E;a0:T1;1:E;r0:T1;1:Egto replay buffer R8: Take ETgradient steps on QwusingRandpq9: Fit Vf(st)usingB10: Compute ˆAt;eusing GAE( l) and ̄At;eusing Eq. 711: Set ht;ebased on Section 3.212: Compute and center the learning signals lt;e=ˆAt;eht;e ̄At;e13: Compute ÑqJ(q)1ETåeåtÑqlogpq(at;ejst;e)lt;e+ht;eÑaQw(st;e;a)ja=q(st;e)Ñqq(st;e)14: Take a gradient step on pqusing ÑqJ(q), optionally with a trust-region constraint using B15:until pqconverges.samples, adds the batch to a replay buffer, takes a few gradient steps on the critic, computes ˆAand ̄A, and finally applies a gradient step on the policy pq. 
In our implementation, the critic Qwis fittedwith off-policy TD learning using the same techniques as in DDPG (Lillicrap et al., 2016):w=argminwEstrb();atb(jst)[(r(st;at)+gEp[Q0(st+1;at+1)]Qw(st;at))2]: (13)Vfis fitted with the same technique in (Schulman et al., 2016). Generalized advantage estimation(GAE) (Schulman et al., 2016) is used to estimate ˆA. The policy update can be done by any methodthat utilizes the first-order gradient and possibly the on-policy batch data, which includes trust regionpolicy optimization (TRPO) (Schulman et al., 2015). Importantly, this is just one possible imple-mentation of Q-Prop, and in Appendix C we show a more general form that can interpolate betweenpure policy gradient and off-policy actor-critic.3.4 L IMITATIONSA limitation with Q-Prop is that if data collection is very fast, e.g. using fast simulators, the computetime per episode is bound by the critic training at each iteration, and similar to that of DDPG andusually much more than that of TRPO. However, in applications where data collection speed isthe bottleneck, there is sufficient time between policy updates to fit Qwwell, which can be doneasynchronously from the data collection, and the compute time of Q-Prop will be about the same asthat of TRPO.Another limitation is the robustness to bad critics. We empirically show that our conservative Q-Propis more robust than standard Q-Prop and much more robust than pure off-policy actor-critic methodssuch as DDPG; however, estimating when an off-policy critic is reliable or not is still a fundamentalproblem that shall be further investigated. We can also alleviate this limitation by adopting morestable off-policy critic learning techniques such as Retrace( l) (Munos et al., 2016).4 R ELATED WORKVariance reduction in policy gradient methods is a long-standing problem with a large body of priorwork (Weaver & Tao, 2001; Greensmith et al., 2004; Schulman et al., 2016). However, explorationof action-dependent control variates is relatively recent, with most work focusing instead on simplerbaselining techniques (Ross, 2006). A subtle exception is compatible feature approximation (Suttonet al., 1999) which can be viewed as a control variate as explained in Appendix B. Another exceptionis doubly robust estimator in contextual bandits (Dud ́ık et al., 2011), which uses a different controlvariate whose bias cannot be tractably corrected. Control variates were explored recently not inRL but for approximate inference in stochastic models (Paisley et al., 2012), and the closest relatedwork in that domain is the MuProp algorithm (Gu et al., 2016a) which uses a mean-field networkas a surrogate for backpropagating a deterministic gradient through stochastic discrete variables.MuProp is not directly applicable to model-free RL because the dynamics are unknown; however, it6Published as a conference paper at ICLR 2017can be if the dynamics are learned as in model-based RL (Atkeson & Santamaria, 1997; Deisenroth& Rasmussen, 2011). This model-based Q-Prop is itself an interesting direction of research as iteffectively corrects bias in model-based learning.Part of the benefit of Q-Prop is the ability to use off-policy data to improve on-policy policy gra-dient methods. Prior methods that combine off-policy data with policy gradients either introducebias (Sutton et al., 1999; Silver et al., 2014) or use importance weighting, which is known to re-sult in degenerate importance weights in high dimensions, resulting in very high variance (Precup,2000; Levine & Koltun, 2013). 
Q-Prop provides a new approach for using off-policy data to reducevariance without introducing further bias.Lastly, since Q-Prop uses both on-policy policy updates and off-policy critic learning, it can takeadvantage of prior work along both lines of research. We chose to implement Q-Prop on top ofTRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but com-bining Q-Prop with other on-policy update schemes and off-policy critic training methods is aninteresting direction for future work. For example, Q-Prop can also be used with other on-policypolicy gradient methods such as A3C (Mnih et al., 2016) and off-policy advantage estimation meth-ods such as Retrace( l) (Munos et al., 2016), GTD2 (Sutton et al., 2009), emphatic TD (Sutton et al.,2015), and WIS-LSTD (Mahmood et al., 2014).5 E XPERIMENTS(a) (b) (c) (d) (e) (f) (g)Figure 1: Illustrations of OpenAI Gym MuJoCo domains (Brockman et al., 2016; Duan et al., 2016):(a) Ant, (b) HalfCheetah, (c) Hopper, (d) Humanoid, (e) Reacher, (f) Swimmer, (g) Walker.We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gymbenchmark (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012) asshown in Figure 1. Algorithms are identified by acronyms, followed by a number indicating batchsize, except for DDPG, which is a prior online actor-critic algorithm (Lillicrap et al., 2016). “c-” and“v-” denote conservative and aggressive Q-Prop variants as described in Section 3.2. “TR-” denotestrust-region policy optimization (Schulman et al., 2015), while “V-” denotes vanilla policy gradient.For example, “TR-c-Q-Prop-5000” means convervative Q-Prop with the trust-region policy update,and a batch size of 5000. “VPG” and “TRPO” are vanilla policy gradient and trust-region policy op-timization respectively (Schulman et al., 2016; Duan et al., 2016). Unless otherwise stated, all policygradient methods are implemented with GAE( l=0:97) (Schulman et al., 2016). Note that TRPO-GAE is currently the state-of-the-art method on most of the OpenAI Gym benchmark tasks, thoughour experiments show that a well-tuned DDPG implementation sometimes achieves better results.Our algorithm implementations are built on top of the rllab TRPO and DDPG codes from Duanet al. (2016) and available at https://github.com/shaneshixiang/rllabplusplus .Policy and value function architectures and other training details including hyperparameter valuesare provided in Appendix D.5.1 A DAPTIVE Q-P ROPFirst, it is useful to identify how reliable each variant of Q-Prop is. In this section, we analyzestandard Q-Prop and two adaptive variants, c-Q-Prop and a-Q-Prop, and demonstrate the stabilityof the method across different batch sizes. Figure 2a shows a comparison of Q-Prop variants withtrust-region updates on the HalfCheetah-v1 domain, along with the best performing TRPO hyper-parameters. The results are consistent with theory: conservative Q-Prop achieves much more stableperformance than the standard and aggressive variants, and all Q-Prop variants significantly outper-form TRPO in terms of sample efficiency, e.g. conservative Q-Prop reaches average reward of 4000using about 10 times less samples than TRPO.7Published as a conference paper at ICLR 2017(a) Standard Q-Prop vs adaptive variants. (b) Conservative Q-Prop vs TRPO across batch sizes.Figure 2: Average return over episodes in HalfCheetah-v1 during learning, exploring adaptive Q-Prop methods and different batch sizes. 
All variants of Q-Prop substantially outperform TRPO interms of sample efficiency. TR-c-QP, conservative Q-Prop with trust-region update performs moststably across different batch sizes.Figure 2b shows the performance of conservative Q-Prop against TRPO across different batchsizes. Due to high variance in gradient estimates, TRPO typically requires very large batch sizes,e.g. 25000 steps or 25 episodes per update, to perform well. We show that our Q-Prop methods canlearn even with just 1 episode per update, and achieves better sample efficiency with small batchsizes. This shows that Q-Prop significantly reduces the variance compared to the prior methods.As we discussed in Section 1, stability is a significant challenge with state-of-the-art deep RL meth-ods, and is very important for being able to reliably use deep RL for real world tasks. In the rest ofthe experiments, we will use conservative Q-Prop as the main Q-Prop implementation.5.2 E VALUATION ACROSS ALGORITHMS(a) Comparing algorithms on HalfCheetah-v1. (b) Comparing algorithms on Humanoid-v1.Figure 3: Average return over episodes in HalfCheetah-v1 and Humanoid-v1 during learning, com-paring Q-Prop against other model-free algorithms. Q-Prop with vanilla policy gradient outperformsTRPO on HalfCheetah. Q-Prop significantly outperforms TRPO in convergence time on Humanoid.In this section, we evaluate two versions of conservative Q-Prop, v-c-Q-Prop using vanilla pol-icy gradient and TR-c-Q-Prop using trust-region updates, against other model-free algorithms onthe HalfCheetah-v1 domain. Figure 3a shows that c-Q-Prop methods significantly outperform thebest TRPO and VPG methods. Even Q-Prop with vanilla policy gradient is comparable to TRPO,confirming the significant benefits from variance reduction. DDPG on the other hand exhibits incon-sistent performances. With proper reward scaling, i.e. “DDPG-r0.1”, it outperforms other methodsas well as the DDPG results reported in prior work (Duan et al., 2016; Amos et al., 2016). Thisillustrates the sensitivity of DDPG to hyperparameter settings, while Q-Prop exhibits more stable,monotonic learning behaviors when compared to DDPG. In the next section we show this improvedstability allows Q-Prop to outperform DDPG in more complex domains.8Published as a conference paper at ICLR 20175.3 E VALUATION ACROSS DOMAINSLastly, we evaluate Q-Prop against TRPO and DDPG across multiple domains. While the gymenvironments are biased toward locomotion, we expect we can achieve similar performance on ma-nipulation tasks such as those in Lillicrap et al. (2016). Table 1 summarizes the results, including thebest attained average rewards and the steps to convergence. Q-Prop consistently outperform TRPOin terms of sample complexity and sometimes achieves higher rewards than DDPG in more complexdomains. A particularly notable case is shown in Figure 3b, where Q-Prop substantially improvessample efficiency over TRPO on Humanoid-v1 domain, while DDPG cannot find a good solution.The better performance on the more complex domains highlights the importance of stable deep RLalgorithms: while costly hyperparameter sweeps may allow even less stable algorithms to performwell on simpler problems, more complex tasks might have such narrow regions of stable hyperpa-rameters that discovering them becomes impractical.TR-c-Q-Prop TRPO DDPGDomain Threshold MaxReturn. 
Domain        Threshold   TR-c-Q-Prop             TRPO                    DDPG
                          MaxReturn   Episodes    MaxReturn   Episodes    MaxReturn   Episodes
Ant           3500        3534        4975        4239        13825       957         N/A
HalfCheetah   4700        4811        20785       4734        26370       7490        600
Hopper        2000        2957        5945        2486        5715        2604        965
Humanoid      2500        >3492       14750       918         >30000      552         N/A
Reacher       -7          -6.0        2060        -6.7        2840        -6.6        1800
Swimmer       90          103         2045        110         3025        150         500
Walker        3000        4030        3685        3567        18875       3626        2125

Table 1: Q-Prop, TRPO and DDPG results showing the max average rewards attained in the first 30k episodes and the episodes to cross specific reward thresholds. Q-Prop often learns more sample-efficiently than TRPO and can solve difficult domains such as Humanoid better than DDPG.

6 DISCUSSION AND CONCLUSION

We presented Q-Prop, a policy gradient algorithm that combines reliable, consistent, and potentially unbiased on-policy gradient estimation with a sample-efficient off-policy critic that acts as a control variate. The method provides a large improvement in sample efficiency compared to state-of-the-art policy gradient methods such as TRPO, while outperforming state-of-the-art actor-critic methods on more challenging tasks such as humanoid locomotion. We hope that techniques like these, which combine on-policy Monte Carlo gradient estimation with sample-efficient variance reduction through off-policy critics, will eventually lead to deep reinforcement learning algorithms that are more stable and efficient, and therefore better suited for application to complex real-world learning tasks.

ACKNOWLEDGMENTS

We thank Rocky Duan for sharing and answering questions about rllab code, and Yutian Chen and Laurent Dinh for discussion on control variates. SG and RT were funded by NSERC, Google, and EPSRC grants EP/L000776/1 and EP/M026957/1. ZG was funded by EPSRC grant EP/J012300/1 and the Alan Turing Institute (EP/N510129/1).
SyJakD7Bl
A good idea, but not a research paper
7: Good paper, accept
**Edit: Based on the discussion below, my main problem (#2) was not correct. I have changed my overall rating from a 3 to a 7**

This paper makes a fascinating observation: one can introduce an action-dependent baseline (control variate) into REINFORCE, which introduces bias, and then include a correction term to remove the bias. The variance of the correction term is low relative to the REINFORCE update and the action-dependent baseline, and so this results in benefits. However, the paper is poorly executed. Below I list my concerns.

1. The paper tries to distinguish between "policy gradient" methods and "actor critic" methods by defining them in a non-standard way. Specifically, when this paper says "policy gradient" it means REINFORCE. Historically, the two have meant different things: some policy gradient algorithms are actor-critics (e.g., Degris et al's INAC algorithm) while others are not (e.g. REINFORCE).

2. The proposed Q-Prop algorithm includes many interesting design choices that make it unclear what the real source of improved performance is. Is the improved performance due to the use of the action-dependent control variate? Would the same setup but using a state-value baseline still perform just as well? Are the performance benefits due to the use of an off-policy advantage estimation algorithm, GAE(lambda)? Or, would performance have been similar with an on-policy advantage estimation algorithm? What about if a different off-policy advantage estimation algorithm was used, like Retrace(lambda), GTD2, ETD, or WIS-LSTD? Or, is the improved performance due to the use of a replay buffer? Comparisons are not performed between variants of Q-Prop that show the importance of these different components. Rather the authors opt to show better performance on a benchmark task. I find this to be non-scientific, and more of a paper showing a feat of engineering (by combining many different ideas) than a research paper that studies the details of which parts of Q-Prop make it work well. For example, after reading this paper, it is not clear whether having the action-dependent baseline (or using the first-order Taylor approximation for the baseline) is beneficial or not - it could be that the strong performance comes from GAE(lambda) or the use of a replay buffer. At the very least I would have expected comparisons to Q-Prop using a state-value baseline (which would then be a variant of REINFORCE using off-policy data and a replay buffer, and which would show whether the action-dependent baseline is important).

3. There is a fair amount of discussion about unbiased policy gradient algorithms, which is not accurate. Most policy gradient algorithms are biased, and making them unbiased tends to hurt performance. This is discussed in the paper "Bias in Natural Actor-Critic Algorithms", which applies to non-natural algorithms as well. Also, I suspect that the use of GAE(lambda) results in the exact sort of bias discussed in that paper, even when lambda=1. As a result, Q-Prop may act more like an average reward method than expected. This should be discussed.

4. The proposed algorithm can be applied to deep architectures, just as most linear-time policy gradient algorithms can. However, it does not have to be applied to deep architectures. The emphasis on "deep" therefore seems to detract from the core ideas of the paper.

5. The paper repeatedly says that importance sampling based methods result in high variance. This ignores weighted importance sampling methods that have very low variance.
A good example of this is Mahmood et al's WIS-LSTD algorithm. WIS-LSTD has high computational complexity, so it could only be compared on non-deep RL problems, of which there are plenty. Alternatively, algorithms like Retrace(lambda) have quite low variance since the likelihood ratios are never bigger than one. Others might argue that ETD algorithms are currently the most effective. The simple dismissal of these algorithms because the original importance sampling estimator proposed in 2000 has high variance is not sufficient.

6. The paper does not compare to natural actor-critic algorithms. Once the weights, w, have been computed, REINFORCE uses samples of states from the normalized discounted state distribution and samples of the corresponding returns to estimate the policy gradient. One of the main reasons Q-Prop should work better than REINFORCE is that it includes a control variate that reduces the variance of the policy gradient update after w has been computed. Now, compare this to natural policy gradient algorithms. Once the weights, w, have been computed (admittedly, using compatible features for the advantage estimation but any features for the state-value estimation) the resulting update is simply w. That is, it has zero variance and does not require additional sampling. It is as though a perfect control variate was used. Furthermore, natural gradient algorithms can be applied to deep architectures. Degris et al's INAC algorithm is linear time. Desjardin et al's "natural neural networks" paper also discusses efficient implementations of natural gradients for neural networks. Dabney's Natural Temporal Difference algorithms have linear-time variants that fit this paper's description of actor-critic algorithms. To summarize, given the weights w, REINFORCE has high variance, and Q-Prop claims to reduce the variance of REINFORCE. However, natural policy gradient methods have zero variance given the weights w. So, what is the benefit of Q-Prop over natural gradient algorithms using off-policy value function estimation methods to estimate Q (or A)? That is, why should we expect Q-Prop to perform better than NAC-LSTD using GAE(lambda) with experience replay in place of LSTD?

7. Equation (2) is false. The right side is proportional to the left side, not equal to it. There is a (1-gamma) term missing. There are also other typos throughout (e.g., Q and A sometimes are missing their action arguments).

Although I have listed my concerns, I would like to re-iterate that I do find the idea of an action-dependent baseline fascinating. My problem with this paper is with its execution, not with the novelty, impact, or quality of the core idea.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
SJ3rcZcxl
ICLR.cc/2017/conference
2017
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
["Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E. Turner", "Sergey Levine"]
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.
["Deep learning", "Reinforcement Learning"]
ABSTRACT

Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.

1 INTRODUCTION

Model-free reinforcement learning is a promising approach for solving arbitrary goal-directed sequential decision-making problems with only high-level reward signals and no supervision. It has recently been extended to utilize large neural network policies and value functions, and has been shown to be successful in solving a range of difficult problems (Mnih et al., 2015; Schulman et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Gu et al., 2016b; Mnih et al., 2016). Deep neural network parametrization minimizes the need for manual feature and policy engineering, and allows learning end-to-end policies mapping from high-dimensional inputs, such as images, directly to actions. However, such expressive parametrization also introduces a number of practical problems. Deep reinforcement learning algorithms tend to be sensitive to hyperparameter settings, often requiring extensive hyperparameter sweeps to find good values. Poor hyperparameter settings tend to produce unstable or non-convergent learning. Deep RL algorithms also tend to exhibit high sample complexity, often to the point of being impractical to run on real physical systems. Although a number of recent techniques have sought to alleviate some of these issues (Hasselt, 2010; Mnih et al., 2015; Schulman et al., 2015; 2016), these recent advances still provide only a partial solution to the instability and sample complexity challenges.

Model-free reinforcement learning consists of on- and off-policy methods. Monte Carlo policy gradient methods (Peters & Schaal, 2006; Schulman et al., 2015) are popular on-policy methods that directly maximize the cumulative future returns with respect to the policy. While these algorithms can offer unbiased (or nearly unbiased, as discussed in Section 2.1) estimates of the gradient, they rely on Monte Carlo estimation and often suffer from high variance.
To cope with high variance gradient estimates and difficult optimization landscapes, a number of techniques have been proposed, including constraining the change in the policy at each gradient step (Kakade, 2001; Peters et al., 2010) and mixing value-based back-ups to trade off bias and variance in Monte Carlo return estimates (Schulman et al., 2015). However, these methods all tend to require very large numbers of samples to deal with the high variance when estimating gradients of high-dimensional neural network policies. The crux of the problem with policy gradient methods is that they can only effectively use on-policy samples, which means that they require collecting large amounts of on-policy experience after each parameter update to the policy. This makes them very sample intensive. Off-policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015; Gu et al., 2016b) and off-policy actor-critic methods (Lever, 2014; Lillicrap et al., 2016), can instead use all samples, including off-policy samples, by adopting temporal difference learning with experience replay. Such methods are much more sample-efficient. However, convergence of these algorithms is in general not guaranteed with non-linear function approximators, and practical convergence and instability issues typically mean that extensive hyperparameter tuning is required to attain good results.

In order to make deep reinforcement learning practical as a tool for tackling real-world tasks, we must develop methods that are both data efficient and stable. In this paper, we propose Q-Prop, a step in this direction that combines the advantages of on-policy policy gradient methods with the efficiency of off-policy learning. Unlike prior approaches for off-policy learning, which either introduce bias (Sutton et al., 1999; Silver et al., 2014) or increase variance (Precup, 2000; Levine & Koltun, 2013; Munos et al., 2016), Q-Prop can reduce the variance of the gradient estimator without adding bias; unlike prior approaches for critic-based variance reduction (Schulman et al., 2016) which fit the value function on-policy, Q-Prop learns the action-value function off-policy. The core idea is to use the first-order Taylor expansion of the critic as a control variate, resulting in an analytical gradient term through the critic and a Monte Carlo policy gradient term consisting of the residuals in advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in policy gradient or using on-policy Monte Carlo returns to correct for bias in the critic gradient. We further provide theoretical analysis of the control variate, and derive two additional variants of Q-Prop. The method can be easily incorporated into any policy gradient algorithm. We show that Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE) (Schulman et al., 2015; 2016), and improved stability over deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) across a repertoire of continuous control tasks.

2 BACKGROUND

Reinforcement learning (RL) aims to learn a policy for an agent such that it behaves optimally according to a reward function.
At a time step t and state s_t, the agent chooses an action a_t according to its policy π(a_t|s_t), the state of the agent and the environment changes to a new state s_{t+1} according to dynamics p(s_{t+1}|s_t, a_t), the agent receives a reward r(s_t, a_t), and the process continues. Let R_t denote the γ-discounted cumulative return from t for an infinite horizon problem, i.e. R_t = Σ_{t'=t}^∞ γ^{t'-t} r(s_{t'}, a_{t'}). The goal of reinforcement learning is to maximize the expected return J(θ) = E_{π_θ}[R_0] with respect to the policy parameters θ. In this section, we review several standard techniques for performing this optimization, and in the next section, we will discuss our proposed Q-Prop algorithm that combines the strengths of these approaches to achieve efficient, stable RL.

Monte Carlo policy gradient refers to policy gradient methods that use full Monte Carlo returns, e.g. REINFORCE (Williams, 1992) and TRPO (Schulman et al., 2015), and policy gradient with function approximation refers to actor-critic methods (Sutton et al., 1999) which optimize the policy against a critic, e.g. deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016).

2.1 MONTE CARLO POLICY GRADIENT METHODS

Monte Carlo policy gradient methods apply direct gradient-based optimization to the reinforcement learning objective. This involves directly differentiating the J(θ) objective with respect to the policy parameters θ. The standard form, known as the REINFORCE algorithm (Williams, 1992), is shown below:

∇_θ J(θ) = E_π[ Σ_{t=0}^∞ ∇_θ log π_θ(a_t|s_t) γ^t R_t ] = E_π[ Σ_{t=0}^∞ γ^t ∇_θ log π_θ(a_t|s_t) (R_t − b(s_t)) ],   (1)

where b(s_t) is known as the baseline. For convenience of later derivations, Eq. 1 can also be written as below, where ρ_π(s) = Σ_{t=0}^∞ γ^t p(s_t = s) is the unnormalized discounted state visitation frequency,

∇_θ J(θ) = E_{s_t ~ ρ_π(·), a_t ~ π(·|s_t)}[ ∇_θ log π_θ(a_t|s_t) (R_t − b(s_t)) ].   (2)

Eq. 2 is an unbiased gradient of the RL objective. However, in practice, most policy gradient methods effectively use undiscounted state visitation frequencies, i.e. γ = 1 in the equation for ρ_π, and are therefore biased; in fact, making them unbiased often hurts performance (Thomas, 2014). In this paper, we mainly discuss bias due to function approximation, off-policy learning, and value back-ups.

The gradient is estimated using Monte Carlo samples in practice and has very high variance. A proper choice of baseline is necessary to reduce the variance sufficiently such that learning becomes feasible. A common choice is to estimate the value function of the state V^π(s_t) to use as the baseline, which provides an estimate of the advantage function A^π(s_t, a_t), which is a centered action-value function Q^π(s_t, a_t), as defined below:

V^π(s_t) = E_π[R_t] = E_{π_θ(a_t|s_t)}[Q^π(s_t, a_t)]
Q^π(s_t, a_t) = r(s_t, a_t) + γ E_π[R_{t+1}] = r(s_t, a_t) + γ E_{p(s_{t+1}|s_t, a_t)}[V^π(s_{t+1})]
A^π(s_t, a_t) = Q^π(s_t, a_t) − V^π(s_t).   (3)

Q^π(s_t, a_t) summarizes the performance of each action from a given state, assuming it follows π thereafter, and A^π(s_t, a_t) provides a measure of how each action compares to the average performance at the state s_t, which is given by V^π(s_t). Using A^π(s_t, a_t) centers the learning signal and reduces variance significantly.

Besides high variance, another problem with the policy gradient is that it requires on-policy samples. This makes policy gradient optimization very sample intensive. To achieve similar sample efficiency as off-policy methods, we can attempt to include off-policy data.
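To make Eqs. 1-3 concrete, the following is a minimal Python sketch (illustrative only, not the paper's released code) of computing the Monte Carlo return and the value-baselined advantage that form the REINFORCE learning signal from one rollout; the values array is assumed to come from a separately fitted state-value approximator V(s).

import numpy as np

def discounted_returns(rewards, gamma):
    # R_t = sum_{t' >= t} gamma^(t'-t) r(s_t', a_t'), via one backward pass.
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def mc_advantages(rewards, values, gamma):
    # A_t ~ R_t - V(s_t): Monte Carlo return minus a state-value baseline (Eq. 3),
    # which centers the learning signal in Eq. 1 without introducing bias.
    return discounted_returns(rewards, gamma) - values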
Prior attempts use importance sampling to include off-policy trajectories; however, these are known to be difficult to scale to high-dimensional action spaces because of rapidly degenerating importance weights (Precup, 2000).

2.2 POLICY GRADIENT WITH FUNCTION APPROXIMATION

Policy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods, include a policy evaluation step, which often uses temporal difference (TD) learning to fit a critic Q_w for the current policy π(θ), and a policy improvement step which greedily optimizes the policy π against the critic estimate Q_w. Significant gains in sample efficiency may be achievable using off-policy TD learning for the critic, as in Q-learning and deterministic policy gradient (Sutton, 1990; Silver et al., 2014), typically by means of experience replay for training deep Q networks (Mnih et al., 2015; Lillicrap et al., 2016; Gu et al., 2016b).

One particularly relevant example of such a method is the deep deterministic policy gradient (DDPG) (Silver et al., 2014; Lillicrap et al., 2016). The updates for this method are given below, where π_θ(a_t|s_t) = δ(a_t = μ_θ(s_t)) is a deterministic policy, β is an arbitrary exploration distribution, and ρ_β corresponds to sampling from a replay buffer. Q'(·,·) is the target network that slowly tracks Q_w (Lillicrap et al., 2016).

w = argmin_w E_{s_t ~ ρ_β(·), a_t ~ β(·|s_t)}[ (r(s_t, a_t) + γ Q'(s_{t+1}, μ_θ(s_{t+1})) − Q_w(s_t, a_t))² ]
θ = argmax_θ E_{s_t ~ ρ_β(·)}[ Q_w(s_t, μ_θ(s_t)) ]   (4)

When the critic and policy are parametrized with neural networks, full optimization is expensive, and instead stochastic gradient optimization is used. The gradient in the policy improvement phase is given below, which is generally a biased gradient of J(θ).

∇_θ J(θ) ≈ E_{s_t ~ ρ_β(·)}[ ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t) ]   (5)

The crucial benefits of DDPG are that it does not rely on high variance REINFORCE gradients and is trainable on off-policy data. These properties make DDPG and other analogous off-policy methods significantly more sample-efficient than policy gradient methods (Lillicrap et al., 2016; Gu et al., 2016b; Duan et al., 2016). However, the use of a biased policy gradient estimator makes analyzing its convergence and stability properties difficult.

3 Q-PROP

In this section, we derive the Q-Prop estimator for policy gradient. The key idea for this estimator comes from observing Equations 2 and 5 and noting that the former provides an almost unbiased (see Section 2.1), but high variance gradient, while the latter provides a deterministic, but biased gradient. By using the deterministic biased estimator as a particular form of control variate (Ross, 2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use both types of gradient information to construct a new estimator that in practice exhibits improved sample efficiency through the inclusion of off-policy samples while preserving the stability of on-policy Monte Carlo policy gradient.

3.1 Q-PROP ESTIMATOR

To derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of an arbitrary function f(s_t, a_t), f̄(s_t, a_t) = f(s_t, ā_t) + ∇_a f(s_t, a)|_{a=ā_t} (a_t − ā_t), as the control variate for the policy gradient estimator. We use Q̂(s_t, a_t) = Σ_{t'=t}^∞ γ^{t'-t} r(s_{t'}, a_{t'}) to denote the Monte Carlo return from state s_t and action a_t, i.e. E_π[Q̂(s_t, a_t)] = r(s_t, a_t) + γ E_π[V^π(s_{t+1})], and μ_θ(s_t) = E_{π_θ(a_t|s_t)}[a_t] to denote the expected action of a stochastic policy π_θ.
The full derivation is in Appendix A.

∇_θ J(θ) = E_{ρ_π, π}[ ∇_θ log π_θ(a_t|s_t) (Q̂(s_t, a_t) − f̄(s_t, a_t)) ] + E_{ρ_π, π}[ ∇_θ log π_θ(a_t|s_t) f̄(s_t, a_t) ]
         = E_{ρ_π, π}[ ∇_θ log π_θ(a_t|s_t) (Q̂(s_t, a_t) − f̄(s_t, a_t)) ] + E_{ρ_π}[ ∇_a f(s_t, a)|_{a=ā_t} ∇_θ μ_θ(s_t) ]   (6)

Eq. 6 is general for an arbitrary function f(s_t, a_t) that is differentiable with respect to a_t at an arbitrary value ā_t; however, a sensible choice is to use the critic Q_w for f and μ_θ(s_t) for ā_t to get,

∇_θ J(θ) = E_{ρ_π, π}[ ∇_θ log π_θ(a_t|s_t) (Q̂(s_t, a_t) − Q̄_w(s_t, a_t)) ] + E_{ρ_π}[ ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t) ].   (7)

Finally, since in practice we estimate advantages Â(s_t, a_t), we write the Q-Prop estimator in terms of advantages to complete the basic derivation,

∇_θ J(θ) = E_{ρ_π, π}[ ∇_θ log π_θ(a_t|s_t) (Â(s_t, a_t) − Ā_w(s_t, a_t)) ] + E_{ρ_π}[ ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t) ]
Ā(s_t, a_t) = Q̄(s_t, a_t) − E_{π_θ}[Q̄(s_t, a_t)] = ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} (a_t − μ_θ(s_t)).   (8)

Eq. 8 is composed of an analytic gradient through the critic as in Eq. 5 and a residual REINFORCE gradient in Eq. 2. From the above derivation, Q-Prop is simply a Monte Carlo policy gradient estimator with a special form of control variate. The important insight comes from the fact that Q_w can be trained using off-policy data as in Eq. 4. Under this setting, Q-Prop is no longer just a Monte Carlo policy gradient method, but more closely resembles an actor-critic method, where the critic can be updated off-policy but the actor is always updated on-policy with an additional REINFORCE correction term so that it remains a Monte Carlo policy gradient method regardless of the parametrization, training method, and performance of the critic. Therefore, Q-Prop can be directly combined with a number of prior techniques from both on-policy methods such as natural policy gradient (Kakade, 2001), trust-region policy optimization (TRPO) (Schulman et al., 2015) and generalized advantage estimation (GAE) (Schulman et al., 2016), and off-policy methods such as DDPG (Lillicrap et al., 2016) and Retrace(λ) (Munos et al., 2016).

Intuitively, if the critic Q_w approximates Q^π well, it provides a reliable gradient, reduces the estimator variance, and improves the convergence rate. Interestingly, control variate analysis in the next section shows that this is not the only circumstance where Q-Prop helps reduce variance.

3.2 CONTROL VARIATE ANALYSIS AND ADAPTIVE Q-PROP

For Q-Prop to be applied reliably, it is crucial to analyze how the variance of the estimator changes before and after the application of the control variate. Following the prior work on control variates (Ross, 2006; Paisley et al., 2012), we first introduce η(s_t) to Eq. 8, a weighting variable that modulates the strength of the control variate. This additional variable η(s_t) does not introduce bias to the estimator.

∇_θ J(θ) = E_{ρ_π, π}[ ∇_θ log π_θ(a_t|s_t) (Â(s_t, a_t) − η(s_t) Ā_w(s_t, a_t)) ] + E_{ρ_π}[ η(s_t) ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t) ]   (9)

The variance of this estimator is given below, where m = 1...M indexes the dimensions of θ,

Var* = E_{ρ_π}[ Σ_m Var_{a_t}( ∇_{θ_m} log π_θ(a_t|s_t) (Â(s_t, a_t) − η(s_t) Ā(s_t, a_t)) ) ].   (10)

If we choose η(s_t) such that Var* < Var, where Var = E_{ρ_π}[ Σ_m Var_{a_t}( ∇_{θ_m} log π_θ(a_t|s_t) Â(s_t, a_t) ) ] is the original estimator variance measure, then we have managed to reduce the variance. Directly analyzing the above variance measure is nontrivial, for the same reason that computing the optimal baseline is difficult (Weaver & Tao, 2001). In addition, it is often impractical to get multiple action samples from the same state, which prohibits using naïve Monte Carlo to estimate the expectations. Instead, we propose a surrogate variance measure, V̂ar = E_{ρ_π}[ Var_{a_t}(Â(s_t, a_t)) ].
A similar surrogate is also used by prior work on learning a state-dependent baseline (Mnih & Gregor, 2014), and the benefit is that the measure becomes more tractable,

Var* = E_{ρ_π}[ Var_{a_t}(Â(s_t, a_t) − η(s_t) Ā(s_t, a_t)) ]
     = V̂ar + E_{ρ_π}[ −2η(s_t) Cov_{a_t}(Â(s_t, a_t), Ā(s_t, a_t)) + η(s_t)² Var_{a_t}(Ā(s_t, a_t)) ].   (11)

Since E_π[Â(s_t, a_t)] = E_π[Ā(s_t, a_t)] = 0, the terms can be simplified as below,

Cov_{a_t}(Â, Ā) = E_π[ Â(s_t, a_t) Ā(s_t, a_t) ]
Var_{a_t}(Ā) = E_π[ Ā(s_t, a_t)² ] = ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)}^T Σ_θ(s_t) ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)},   (12)

where Σ_θ(s_t) is the covariance matrix of the stochastic policy π_θ. The nice property of Eq. 11 is that Var_{a_t}(Ā) is analytical and Cov_{a_t}(Â, Ā) can be estimated with a single action sample. Using this estimate, we propose adaptive variants of Q-Prop that regulate the variance of the gradient estimate.

Adaptive Q-Prop. The optimal state-dependent factor η(s_t) can be computed per state, according to η*(s_t) = Cov_{a_t}(Â, Ā) / Var_{a_t}(Ā). This provides maximum reduction in variance according to Eq. 11. Substituting η*(s_t) into Eq. 11, we get Var* = E_{ρ_π}[ (1 − ρ_corr(Â, Ā)²) Var_{a_t}(Â) ], where ρ_corr is the correlation coefficient, which achieves guaranteed variance reduction if at any state Ā is correlated with Â. We call this the fully adaptive Q-Prop method. An important conclusion from this analysis is that, in adaptive Q-Prop, the critic Q_w does not necessarily need to approximate Q^π well to produce good results. Its Taylor expansion merely needs to be correlated with Â, positively or even negatively. This is in contrast with actor-critic methods, where performance is greatly dependent on the absolute accuracy of the critic's approximation.

Conservative and Aggressive Q-Prop. In practice, the single-sample estimate of Cov_{a_t}(Â, Ā) has high variance itself, and we propose the following two practical implementations of adaptive Q-Prop: (1) η(s_t) = 1 if Ĉov_{a_t}(Â, Ā) > 0 and η(s_t) = 0 otherwise, and (2) η(s_t) = sign(Ĉov_{a_t}(Â, Ā)). The first implementation, which we call conservative Q-Prop, can be thought of as a more conservative version of Q-Prop, which effectively disables the control variate for some samples of the states. This is sensible as, if Â and Ā are negatively correlated, it is likely that the critic is very poor. The second variant can correspondingly be termed aggressive Q-Prop, since it makes more liberal use of the control variate.

3.3 Q-PROP ALGORITHM

Pseudo-code for the adaptive Q-Prop algorithm is provided in Algorithm 1. It is a mixture of policy gradient and actor-critic. At each iteration, it first rolls out the stochastic policy to collect on-policy samples, adds the batch to a replay buffer, takes a few gradient steps on the critic, computes Â and Ā, and finally applies a gradient step on the policy π_θ.

Algorithm 1 Adaptive Q-Prop
1: Initialize w for critic Q_w, θ for stochastic policy π_θ, and replay buffer R ← ∅.
2: repeat
3:   for e = 1, ..., E do   // Collect E episodes of on-policy experience using π_θ
4:     s_{0,e} ~ p(s_0)
5:     for t = 0, ..., T−1 do
6:       a_{t,e} ~ π_θ(·|s_{t,e}), s_{t+1,e} ~ p(·|s_{t,e}, a_{t,e}), r_{t,e} = r(s_{t,e}, a_{t,e})
7:   Add batch data B = {s_{0:T,1:E}, a_{0:T−1,1:E}, r_{0:T−1,1:E}} to replay buffer R
8:   Take E·T gradient steps on Q_w using R and π_θ
9:   Fit V_φ(s_t) using B
10:  Compute Â_{t,e} using GAE(λ) and Ā_{t,e} using Eq. 7
11:  Set η_{t,e} based on Section 3.2
12:  Compute and center the learning signals l_{t,e} = Â_{t,e} − η_{t,e} Ā_{t,e}
13:  Compute ∇_θ J(θ) ≈ (1/ET) Σ_e Σ_t [ ∇_θ log π_θ(a_{t,e}|s_{t,e}) l_{t,e} + η_{t,e} ∇_a Q_w(s_{t,e}, a)|_{a=μ_θ(s_{t,e})} ∇_θ μ_θ(s_{t,e}) ]
14:  Take a gradient step on π_θ using ∇_θ J(θ), optionally with a trust-region constraint using B
15: until π_θ converges.
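As an illustration of Algorithm 1's lines 10-12 and of the conservative/aggressive rules above, here is a minimal NumPy sketch (not the authors' implementation; names and array shapes are assumptions) that forms the control variate Ā_t = ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} (a_t − μ_θ(s_t)), picks η_t from the single-sample covariance estimate of Eq. 12, and returns the centered learning signal l_t. The per-state critic gradients grad_a_Q and the Gaussian policy covariances Sigmas are assumed to come from the user's autodiff framework.

import numpy as np

def qprop_eta_and_signal(A_hat, actions, mus, grad_a_Q, Sigmas, mode='conservative'):
    # A_hat:    (T,)     advantage estimates, e.g. from GAE(lambda)
    # actions:  (T, d)   sampled actions a_t
    # mus:      (T, d)   policy means mu_theta(s_t)
    # grad_a_Q: (T, d)   grad_a Q_w(s_t, a) evaluated at a = mu_theta(s_t)
    # Sigmas:   (T, d, d) policy covariance matrices Sigma_theta(s_t)
    A_bar = np.einsum('td,td->t', grad_a_Q, actions - mus)           # Eq. 8 control variate
    cov_hat = A_hat * A_bar                                          # single-sample Cov(A_hat, A_bar)
    var_bar = np.einsum('td,tde,te->t', grad_a_Q, Sigmas, grad_a_Q)  # analytic Var(A_bar), Eq. 12
    if mode == 'conservative':
        eta = (cov_hat > 0).astype(float)                            # eta = 1 if Cov > 0, else 0
    elif mode == 'aggressive':
        eta = np.sign(cov_hat)                                       # eta = sign(Cov)
    else:
        eta = cov_hat / (var_bar + 1e-8)                             # fully adaptive eta* = Cov/Var
    l = A_hat - eta * A_bar                                          # learning signal (line 12)
    return l - l.mean(), eta

The policy step then combines ∇_θ log π_θ(a_t|s_t) l_t with the analytic term η_t ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t), as in line 13 of Algorithm 1.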
In our implementation, the critic Q_w is fitted with off-policy TD learning using the same techniques as in DDPG (Lillicrap et al., 2016):

w = argmin_w E_{s_t ~ ρ_β(·), a_t ~ β(·|s_t)}[ (r(s_t, a_t) + γ E_π[Q'(s_{t+1}, a_{t+1})] − Q_w(s_t, a_t))² ].   (13)

V_φ is fitted with the same technique as in Schulman et al. (2016). Generalized advantage estimation (GAE) (Schulman et al., 2016) is used to estimate Â. The policy update can be done by any method that utilizes the first-order gradient and possibly the on-policy batch data, which includes trust region policy optimization (TRPO) (Schulman et al., 2015). Importantly, this is just one possible implementation of Q-Prop, and in Appendix C we show a more general form that can interpolate between pure policy gradient and off-policy actor-critic.

3.4 LIMITATIONS

A limitation of Q-Prop is that if data collection is very fast, e.g. using fast simulators, the compute time per episode is bound by the critic training at each iteration, and similar to that of DDPG and usually much more than that of TRPO. However, in applications where data collection speed is the bottleneck, there is sufficient time between policy updates to fit Q_w well, which can be done asynchronously from the data collection, and the compute time of Q-Prop will be about the same as that of TRPO.

Another limitation is the robustness to bad critics. We empirically show that our conservative Q-Prop is more robust than standard Q-Prop and much more robust than pure off-policy actor-critic methods such as DDPG; however, estimating when an off-policy critic is reliable or not is still a fundamental problem that shall be further investigated. We can also alleviate this limitation by adopting more stable off-policy critic learning techniques such as Retrace(λ) (Munos et al., 2016).

4 RELATED WORK

Variance reduction in policy gradient methods is a long-standing problem with a large body of prior work (Weaver & Tao, 2001; Greensmith et al., 2004; Schulman et al., 2016). However, exploration of action-dependent control variates is relatively recent, with most work focusing instead on simpler baselining techniques (Ross, 2006). A subtle exception is compatible feature approximation (Sutton et al., 1999), which can be viewed as a control variate as explained in Appendix B. Another exception is the doubly robust estimator in contextual bandits (Dudík et al., 2011), which uses a different control variate whose bias cannot be tractably corrected. Control variates were explored recently not in RL but for approximate inference in stochastic models (Paisley et al., 2012), and the closest related work in that domain is the MuProp algorithm (Gu et al., 2016a) which uses a mean-field network as a surrogate for backpropagating a deterministic gradient through stochastic discrete variables. MuProp is not directly applicable to model-free RL because the dynamics are unknown; however, it can be if the dynamics are learned, as in model-based RL (Atkeson & Santamaria, 1997; Deisenroth & Rasmussen, 2011). This model-based Q-Prop is itself an interesting direction of research as it effectively corrects bias in model-based learning.

Part of the benefit of Q-Prop is the ability to use off-policy data to improve on-policy policy gradient methods. Prior methods that combine off-policy data with policy gradients either introduce bias (Sutton et al., 1999; Silver et al., 2014) or use importance weighting, which is known to result in degenerate importance weights in high dimensions, resulting in very high variance (Precup, 2000; Levine & Koltun, 2013).
Q-Prop provides a new approach for using off-policy data to reduce variance without introducing further bias. Lastly, since Q-Prop uses both on-policy policy updates and off-policy critic learning, it can take advantage of prior work along both lines of research. We chose to implement Q-Prop on top of TRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but combining Q-Prop with other on-policy update schemes and off-policy critic training methods is an interesting direction for future work. For example, Q-Prop can also be used with other on-policy policy gradient methods such as A3C (Mnih et al., 2016) and off-policy advantage estimation methods such as Retrace(λ) (Munos et al., 2016), GTD2 (Sutton et al., 2009), emphatic TD (Sutton et al., 2015), and WIS-LSTD (Mahmood et al., 2014).

5 EXPERIMENTS

Figure 1: Illustrations of OpenAI Gym MuJoCo domains (Brockman et al., 2016; Duan et al., 2016): (a) Ant, (b) HalfCheetah, (c) Hopper, (d) Humanoid, (e) Reacher, (f) Swimmer, (g) Walker.

We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gym benchmark (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012) as shown in Figure 1. Algorithms are identified by acronyms, followed by a number indicating batch size, except for DDPG, which is a prior online actor-critic algorithm (Lillicrap et al., 2016). "c-" and "a-" denote conservative and aggressive Q-Prop variants as described in Section 3.2. "TR-" denotes trust-region policy optimization (Schulman et al., 2015), while "V-" denotes vanilla policy gradient. For example, "TR-c-Q-Prop-5000" means conservative Q-Prop with the trust-region policy update, and a batch size of 5000. "VPG" and "TRPO" are vanilla policy gradient and trust-region policy optimization respectively (Schulman et al., 2016; Duan et al., 2016). Unless otherwise stated, all policy gradient methods are implemented with GAE(λ=0.97) (Schulman et al., 2016). Note that TRPO-GAE is currently the state-of-the-art method on most of the OpenAI Gym benchmark tasks, though our experiments show that a well-tuned DDPG implementation sometimes achieves better results. Our algorithm implementations are built on top of the rllab TRPO and DDPG codes from Duan et al. (2016) and available at https://github.com/shaneshixiang/rllabplusplus. Policy and value function architectures and other training details including hyperparameter values are provided in Appendix D.

5.1 ADAPTIVE Q-PROP

First, it is useful to identify how reliable each variant of Q-Prop is. In this section, we analyze standard Q-Prop and two adaptive variants, c-Q-Prop and a-Q-Prop, and demonstrate the stability of the method across different batch sizes. Figure 2a shows a comparison of Q-Prop variants with trust-region updates on the HalfCheetah-v1 domain, along with the best performing TRPO hyperparameters. The results are consistent with theory: conservative Q-Prop achieves much more stable performance than the standard and aggressive variants, and all Q-Prop variants significantly outperform TRPO in terms of sample efficiency, e.g. conservative Q-Prop reaches an average reward of 4000 using about 10 times fewer samples than TRPO.

Figure 2: Average return over episodes in HalfCheetah-v1 during learning, exploring adaptive Q-Prop methods and different batch sizes. (a) Standard Q-Prop vs adaptive variants. (b) Conservative Q-Prop vs TRPO across batch sizes.
All variants of Q-Prop substantially outperform TRPO in terms of sample efficiency. TR-c-QP, conservative Q-Prop with the trust-region update, performs most stably across different batch sizes.

Figure 2b shows the performance of conservative Q-Prop against TRPO across different batch sizes. Due to high variance in gradient estimates, TRPO typically requires very large batch sizes, e.g. 25000 steps or 25 episodes per update, to perform well. We show that our Q-Prop methods can learn even with just 1 episode per update, and achieve better sample efficiency with small batch sizes. This shows that Q-Prop significantly reduces the variance compared to the prior methods.

As we discussed in Section 1, stability is a significant challenge with state-of-the-art deep RL methods, and is very important for being able to reliably use deep RL for real world tasks. In the rest of the experiments, we will use conservative Q-Prop as the main Q-Prop implementation.

5.2 EVALUATION ACROSS ALGORITHMS

Figure 3: Average return over episodes in HalfCheetah-v1 and Humanoid-v1 during learning, comparing Q-Prop against other model-free algorithms. (a) Comparing algorithms on HalfCheetah-v1. (b) Comparing algorithms on Humanoid-v1. Q-Prop with vanilla policy gradient outperforms TRPO on HalfCheetah. Q-Prop significantly outperforms TRPO in convergence time on Humanoid.

In this section, we evaluate two versions of conservative Q-Prop, v-c-Q-Prop using vanilla policy gradient and TR-c-Q-Prop using trust-region updates, against other model-free algorithms on the HalfCheetah-v1 domain. Figure 3a shows that c-Q-Prop methods significantly outperform the best TRPO and VPG methods. Even Q-Prop with vanilla policy gradient is comparable to TRPO, confirming the significant benefits from variance reduction. DDPG on the other hand exhibits inconsistent performance. With proper reward scaling, i.e. "DDPG-r0.1", it outperforms other methods as well as the DDPG results reported in prior work (Duan et al., 2016; Amos et al., 2016). This illustrates the sensitivity of DDPG to hyperparameter settings, while Q-Prop exhibits more stable, monotonic learning behavior when compared to DDPG. In the next section we show this improved stability allows Q-Prop to outperform DDPG in more complex domains.

5.3 EVALUATION ACROSS DOMAINS

Lastly, we evaluate Q-Prop against TRPO and DDPG across multiple domains. While the gym environments are biased toward locomotion, we expect we can achieve similar performance on manipulation tasks such as those in Lillicrap et al. (2016). Table 1 summarizes the results, including the best attained average rewards and the steps to convergence. Q-Prop consistently outperforms TRPO in terms of sample complexity and sometimes achieves higher rewards than DDPG in more complex domains. A particularly notable case is shown in Figure 3b, where Q-Prop substantially improves sample efficiency over TRPO on the Humanoid-v1 domain, while DDPG cannot find a good solution. The better performance on the more complex domains highlights the importance of stable deep RL algorithms: while costly hyperparameter sweeps may allow even less stable algorithms to perform well on simpler problems, more complex tasks might have such narrow regions of stable hyperparameters that discovering them becomes impractical.
Domain        Threshold   TR-c-Q-Prop             TRPO                    DDPG
                          MaxReturn   Episodes    MaxReturn   Episodes    MaxReturn   Episodes
Ant           3500        3534        4975        4239        13825       957         N/A
HalfCheetah   4700        4811        20785       4734        26370       7490        600
Hopper        2000        2957        5945        2486        5715        2604        965
Humanoid      2500        >3492       14750       918         >30000      552         N/A
Reacher       -7          -6.0        2060        -6.7        2840        -6.6        1800
Swimmer       90          103         2045        110         3025        150         500
Walker        3000        4030        3685        3567        18875       3626        2125

Table 1: Q-Prop, TRPO and DDPG results showing the max average rewards attained in the first 30k episodes and the episodes to cross specific reward thresholds. Q-Prop often learns more sample-efficiently than TRPO and can solve difficult domains such as Humanoid better than DDPG.

6 DISCUSSION AND CONCLUSION

We presented Q-Prop, a policy gradient algorithm that combines reliable, consistent, and potentially unbiased on-policy gradient estimation with a sample-efficient off-policy critic that acts as a control variate. The method provides a large improvement in sample efficiency compared to state-of-the-art policy gradient methods such as TRPO, while outperforming state-of-the-art actor-critic methods on more challenging tasks such as humanoid locomotion. We hope that techniques like these, which combine on-policy Monte Carlo gradient estimation with sample-efficient variance reduction through off-policy critics, will eventually lead to deep reinforcement learning algorithms that are more stable and efficient, and therefore better suited for application to complex real-world learning tasks.

ACKNOWLEDGMENTS

We thank Rocky Duan for sharing and answering questions about rllab code, and Yutian Chen and Laurent Dinh for discussion on control variates. SG and RT were funded by NSERC, Google, and EPSRC grants EP/L000776/1 and EP/M026957/1. ZG was funded by EPSRC grant EP/J012300/1 and the Alan Turing Institute (EP/N510129/1).
rJVKwJMSe
An interesting approach for using control variables to improve stability of deep RL control
7: Good paper, accept
The paper proposes using a first-order Taylor expansion as a control variate in policy gradient-style methods. Empirical results in dynamical control tasks suggest that this algorithm reduces the sample complexity, while the theoretical results presented suggest the algorithm is unbiased but of lower variance. The use of control variates is very important and the present paper is an interesting approach in this direction. I am not fully convinced of the approach, because it is one of many possible, and the theoretical analysis relies on an approximation of the variance rather than exact calculations, which makes it less compelling. However, this paper is a step in the right direction so it is worth accepting. In the experiments, a few things need to be discussed further:
- What is the running time of the proposed approach? The computation of the extra terms required looks like it could be expensive. A running time comparison in addition to the sample comparison should be included.
- The sensitivity to parameter settings of the proposed algorithm needs to be illustrated in separate graphs, since this is one of the main claims in the paper.
- It would be nice to have a toy example included in which one can actually compute exact values and plot learning curves to compare more directly bias and variance. It would especially be nice to do this with a task that includes rare states, which is the case in which the variance of other methods (e.g. importance sampling) really becomes significant.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJ3rcZcxl
ICLR.cc/2017/conference
2017
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
["Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E. Turner", "Sergey Levine"]
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.
["Deep learning", "Reinforcement Learning"]
ABSTRACT

Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.

1 INTRODUCTION

Model-free reinforcement learning is a promising approach for solving arbitrary goal-directed sequential decision-making problems with only high-level reward signals and no supervision. It has recently been extended to utilize large neural network policies and value functions, and has been shown to be successful in solving a range of difficult problems (Mnih et al., 2015; Schulman et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Gu et al., 2016b; Mnih et al., 2016). Deep neural network parametrization minimizes the need for manual feature and policy engineering, and allows learning end-to-end policies mapping from high-dimensional inputs, such as images, directly to actions. However, such expressive parametrization also introduces a number of practical problems. Deep reinforcement learning algorithms tend to be sensitive to hyperparameter settings, often requiring extensive hyperparameter sweeps to find good values. Poor hyperparameter settings tend to produce unstable or non-convergent learning. Deep RL algorithms also tend to exhibit high sample complexity, often to the point of being impractical to run on real physical systems. Although a number of recent techniques have sought to alleviate some of these issues (Hasselt, 2010; Mnih et al., 2015; Schulman et al., 2015; 2016), these recent advances still provide only a partial solution to the instability and sample complexity challenges.

Model-free reinforcement learning consists of on- and off-policy methods. Monte Carlo policy gradient methods (Peters & Schaal, 2006; Schulman et al., 2015) are popular on-policy methods that directly maximize the cumulative future returns with respect to the policy. While these algorithms can offer unbiased (or nearly unbiased, as discussed in Section 2.1) estimates of the gradient, they rely on Monte Carlo estimation and often suffer from high variance.
To cope with high variance gradient estimates and difficult optimization landscapes, a number of techniques have been proposed, including constraining the change in the policy at each gradient step (Kakade, 2001; Peters et al., 2010) and mixing value-based back-ups to trade off bias and variance in Monte Carlo return estimates (Schulman et al., 2015). However, these methods all tend to require very large numbers of samples to deal with the high variance when estimating gradients of high-dimensional neural network policies. The crux of the problem with policy gradient methods is that they can only effectively use on-policy samples, which means that they require collecting large amounts of on-policy experience after each parameter update to the policy. This makes them very sample intensive. Off-policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015; Gu et al., 2016b) and off-policy actor-critic methods (Lever, 2014; Lillicrap et al., 2016), can instead use all samples, including off-policy samples, by adopting temporal difference learning with experience replay. Such methods are much more sample-efficient. However, convergence of these algorithms is in general not guaranteed with non-linear function approximators, and practical convergence and instability issues typically mean that extensive hyperparameter tuning is required to attain good results.

In order to make deep reinforcement learning practical as a tool for tackling real-world tasks, we must develop methods that are both data efficient and stable. In this paper, we propose Q-Prop, a step in this direction that combines the advantages of on-policy policy gradient methods with the efficiency of off-policy learning. Unlike prior approaches for off-policy learning, which either introduce bias (Sutton et al., 1999; Silver et al., 2014) or increase variance (Precup, 2000; Levine & Koltun, 2013; Munos et al., 2016), Q-Prop can reduce the variance of the gradient estimator without adding bias; unlike prior approaches for critic-based variance reduction (Schulman et al., 2016) which fit the value function on-policy, Q-Prop learns the action-value function off-policy. The core idea is to use the first-order Taylor expansion of the critic as a control variate, resulting in an analytical gradient term through the critic and a Monte Carlo policy gradient term consisting of the residuals in advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in policy gradient or using on-policy Monte Carlo returns to correct for bias in the critic gradient. We further provide theoretical analysis of the control variate, and derive two additional variants of Q-Prop. The method can be easily incorporated into any policy gradient algorithm. We show that Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE) (Schulman et al., 2015; 2016), and improved stability over deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) across a repertoire of continuous control tasks.

2 BACKGROUND

Reinforcement learning (RL) aims to learn a policy for an agent such that it behaves optimally according to a reward function.
At a time step t and state s_t, the agent chooses an action a_t according to its policy π(a_t|s_t), the state of the agent and the environment changes to a new state s_{t+1} according to dynamics p(s_{t+1}|s_t, a_t), the agent receives a reward r(s_t, a_t), and the process continues. Let R_t denote the γ-discounted cumulative return from t for an infinite horizon problem, i.e. R_t = Σ_{t'=t}^∞ γ^{t'-t} r(s_{t'}, a_{t'}). The goal of reinforcement learning is to maximize the expected return J(θ) = E_{π_θ}[R_0] with respect to the policy parameters θ. In this section, we review several standard techniques for performing this optimization, and in the next section, we will discuss our proposed Q-Prop algorithm that combines the strengths of these approaches to achieve efficient, stable RL.

Monte Carlo policy gradient refers to policy gradient methods that use full Monte Carlo returns, e.g. REINFORCE (Williams, 1992) and TRPO (Schulman et al., 2015), and policy gradient with function approximation refers to actor-critic methods (Sutton et al., 1999) which optimize the policy against a critic, e.g. deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016).

2.1 MONTE CARLO POLICY GRADIENT METHODS

Monte Carlo policy gradient methods apply direct gradient-based optimization to the reinforcement learning objective. This involves directly differentiating the J(θ) objective with respect to the policy parameters θ. The standard form, known as the REINFORCE algorithm (Williams, 1992), is shown below:

∇_θ J(θ) = E_π[ Σ_{t=0}^∞ ∇_θ log π_θ(a_t|s_t) γ^t R_t ] = E_π[ Σ_{t=0}^∞ γ^t ∇_θ log π_θ(a_t|s_t) (R_t − b(s_t)) ],   (1)

where b(s_t) is known as the baseline. For convenience of later derivations, Eq. 1 can also be written as below, where ρ_π(s) = Σ_{t=0}^∞ γ^t p(s_t = s) is the unnormalized discounted state visitation frequency,

∇_θ J(θ) = E_{s_t ~ ρ_π(·), a_t ~ π(·|s_t)}[ ∇_θ log π_θ(a_t|s_t) (R_t − b(s_t)) ].   (2)

Eq. 2 is an unbiased gradient of the RL objective. However, in practice, most policy gradient methods effectively use undiscounted state visitation frequencies, i.e. γ = 1 in the equation for ρ_π, and are therefore biased; in fact, making them unbiased often hurts performance (Thomas, 2014). In this paper, we mainly discuss bias due to function approximation, off-policy learning, and value back-ups.

The gradient is estimated using Monte Carlo samples in practice and has very high variance. A proper choice of baseline is necessary to reduce the variance sufficiently such that learning becomes feasible. A common choice is to estimate the value function of the state V^π(s_t) to use as the baseline, which provides an estimate of the advantage function A^π(s_t, a_t), which is a centered action-value function Q^π(s_t, a_t), as defined below:

V^π(s_t) = E_π[R_t] = E_{π_θ(a_t|s_t)}[Q^π(s_t, a_t)]
Q^π(s_t, a_t) = r(s_t, a_t) + γ E_π[R_{t+1}] = r(s_t, a_t) + γ E_{p(s_{t+1}|s_t, a_t)}[V^π(s_{t+1})]
A^π(s_t, a_t) = Q^π(s_t, a_t) − V^π(s_t).   (3)

Q^π(s_t, a_t) summarizes the performance of each action from a given state, assuming it follows π thereafter, and A^π(s_t, a_t) provides a measure of how each action compares to the average performance at the state s_t, which is given by V^π(s_t). Using A^π(s_t, a_t) centers the learning signal and reduces variance significantly.

Besides high variance, another problem with the policy gradient is that it requires on-policy samples. This makes policy gradient optimization very sample intensive. To achieve similar sample efficiency as off-policy methods, we can attempt to include off-policy data.
Prior attempts use importance sampling to include off-policy trajectories; however, these are known to be difficult to scale to high-dimensional action spaces because of rapidly degenerating importance weights (Precup, 2000).

2.2 POLICY GRADIENT WITH FUNCTION APPROXIMATION

Policy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods, include a policy evaluation step, which often uses temporal difference (TD) learning to fit a critic Q_w for the current policy π(θ), and a policy improvement step which greedily optimizes the policy π against the critic estimate Q_w. Significant gains in sample efficiency may be achievable using off-policy TD learning for the critic, as in Q-learning and deterministic policy gradient (Sutton, 1990; Silver et al., 2014), typically by means of experience replay for training deep Q networks (Mnih et al., 2015; Lillicrap et al., 2016; Gu et al., 2016b).

One particularly relevant example of such a method is the deep deterministic policy gradient (DDPG) (Silver et al., 2014; Lillicrap et al., 2016). The updates for this method are given below, where π_θ(a_t|s_t) = δ(a_t = μ_θ(s_t)) is a deterministic policy, β is an arbitrary exploration distribution, and ρ_β corresponds to sampling from a replay buffer. Q'(·,·) is the target network that slowly tracks Q_w (Lillicrap et al., 2016).

w = argmin_w E_{s_t ~ ρ_β(·), a_t ~ β(·|s_t)}[ (r(s_t, a_t) + γ Q'(s_{t+1}, μ_θ(s_{t+1})) − Q_w(s_t, a_t))² ]
θ = argmax_θ E_{s_t ~ ρ_β(·)}[ Q_w(s_t, μ_θ(s_t)) ]   (4)

When the critic and policy are parametrized with neural networks, full optimization is expensive, and instead stochastic gradient optimization is used. The gradient in the policy improvement phase is given below, which is generally a biased gradient of J(θ).

∇_θ J(θ) ≈ E_{s_t ~ ρ_β(·)}[ ∇_a Q_w(s_t, a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t) ]   (5)

The crucial benefits of DDPG are that it does not rely on high variance REINFORCE gradients and is trainable on off-policy data. These properties make DDPG and other analogous off-policy methods significantly more sample-efficient than policy gradient methods (Lillicrap et al., 2016; Gu et al., 2016b; Duan et al., 2016). However, the use of a biased policy gradient estimator makes analyzing its convergence and stability properties difficult.

3 Q-PROP

In this section, we derive the Q-Prop estimator for policy gradient. The key idea for this estimator comes from observing Equations 2 and 5 and noting that the former provides an almost unbiased (see Section 2.1), but high variance gradient, while the latter provides a deterministic, but biased gradient. By using the deterministic biased estimator as a particular form of control variate (Ross, 2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use both types of gradient information to construct a new estimator that in practice exhibits improved sample efficiency through the inclusion of off-policy samples while preserving the stability of on-policy Monte Carlo policy gradient.

3.1 Q-PROP ESTIMATOR

To derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of an arbitrary function f(s_t, a_t), f̄(s_t, a_t) = f(s_t, ā_t) + ∇_a f(s_t, a)|_{a=ā_t} (a_t − ā_t), as the control variate for the policy gradient estimator. We use Q̂(s_t, a_t) = Σ_{t'=t}^∞ γ^{t'-t} r(s_{t'}, a_{t'}) to denote the Monte Carlo return from state s_t and action a_t, i.e. E_π[Q̂(s_t, a_t)] = r(s_t, a_t) + γ E_π[V^π(s_{t+1})], and μ_θ(s_t) = E_{π_θ(a_t|s_t)}[a_t] to denote the expected action of a stochastic policy π_θ.
3 Q-PROP

In this section, we derive the Q-Prop estimator for policy gradient. The key idea behind this estimator comes from observing Equations 2 and 5 and noting that the former provides an almost unbiased (see Section 2.1), but high-variance gradient, while the latter provides a deterministic, but biased gradient. By using the deterministic biased estimator as a particular form of control variate (Ross, 2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use both types of gradient information to construct a new estimator that in practice exhibits improved sample efficiency through the inclusion of off-policy samples while preserving the stability of the on-policy Monte Carlo policy gradient.

3.1 Q-PROP ESTIMATOR

To derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of an arbitrary function $f(s_t,a_t)$, $\bar{f}(s_t,a_t)=f(s_t,\bar{a}_t)+\nabla_a f(s_t,a)|_{a=\bar{a}_t}(a_t-\bar{a}_t)$, as the control variate for the policy gradient estimator. We use $\hat{Q}(s_t,a_t)=\sum_{t'=t}^{\infty}\gamma^{t'-t}r(s_{t'},a_{t'})$ to denote the Monte Carlo return from state $s_t$ and action $a_t$, i.e. $\mathbb{E}_\pi[\hat{Q}(s_t,a_t)]=r(s_t,a_t)+\gamma\,\mathbb{E}_\pi[V^\pi(s_{t+1})]$, and $\mu_\theta(s_t)=\mathbb{E}_{\pi_\theta(a_t|s_t)}[a_t]$ to denote the expected action of a stochastic policy $\pi_\theta$. The full derivation is in Appendix A.

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)\bar{f}(s_t,a_t)]$
$\qquad\quad=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\nabla_\theta\mu_\theta(s_t)]$  (6)

Eq. 6 is general for an arbitrary function $f(s_t,a_t)$ that is differentiable with respect to $a_t$ at an arbitrary value $\bar{a}_t$; however, a sensible choice is to use the critic $Q_w$ for $f$ and $\mu_\theta(s_t)$ for $\bar{a}_t$ to get,

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{Q}_w(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$.  (7)

Finally, since in practice we estimate advantages $\hat{A}(s_t,a_t)$, we write the Q-Prop estimator in terms of advantages to complete the basic derivation,

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\bar{A}_w(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$
$\bar{A}(s_t,a_t)=\bar{Q}(s_t,a_t)-\mathbb{E}_{\pi_\theta}[\bar{Q}(s_t,a_t)]=\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}(a_t-\mu_\theta(s_t))$.  (8)

Eq. 8 is composed of an analytic gradient through the critic, as in Eq. 5, and a residual REINFORCE gradient, as in Eq. 2. From the above derivation, Q-Prop is simply a Monte Carlo policy gradient estimator with a special form of control variate. The important insight comes from the fact that $Q_w$ can be trained using off-policy data as in Eq. 4. Under this setting, Q-Prop is no longer just a Monte Carlo policy gradient method, but more closely resembles an actor-critic method, where the critic can be updated off-policy but the actor is always updated on-policy with an additional REINFORCE correction term, so that it remains a Monte Carlo policy gradient method regardless of the parametrization, training method, and performance of the critic. Therefore, Q-Prop can be directly combined with a number of prior techniques from both on-policy methods such as natural policy gradient (Kakade, 2001), trust-region policy optimization (TRPO) (Schulman et al., 2015) and generalized advantage estimation (GAE) (Schulman et al., 2016), and off-policy methods such as DDPG (Lillicrap et al., 2016) and Retrace($\lambda$) (Munos et al., 2016).

Intuitively, if the critic $Q_w$ approximates $Q^\pi$ well, it provides a reliable gradient, reduces the estimator variance, and improves the convergence rate. Interestingly, the control variate analysis in the next section shows that this is not the only circumstance where Q-Prop helps reduce variance.
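A sketch of the two terms of Eq. 8: the linearized advantage $\bar{A}(s_t,a_t)$ subtracted as a control variate from the Monte Carlo advantage, and the analytic term added back. It reuses the illustrative bilinear critic from the previous sketch, so every name here is an assumption rather than the paper's implementation.

```python
# Sketch of the Q-Prop control variate of Eq. 8: the first-order Taylor
# expansion of the critic about the mean action,
# A_bar(s,a) = grad_a Q(s,a)|_{a=mu(s)} . (a - mu(s)).
import numpy as np

def qprop_signals(states, actions, mc_advantages, W, A, b):
    """Return the residual REINFORCE signals and the analytic-gradient term."""
    residuals, analytic = [], np.zeros_like(W)
    for s, a, A_hat in zip(states, actions, mc_advantages):
        mu = W @ s
        dQ_da = A.T @ s + b                    # grad_a Q at a = mu(s)
        A_bar = dQ_da @ (a - mu)               # linearized advantage
        residuals.append(A_hat - A_bar)        # centered learning signal
        analytic += np.outer(dQ_da, s)         # E[grad_a Q . grad_theta mu]
    return np.array(residuals), analytic / len(states)
```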
3.2 CONTROL VARIATE ANALYSIS AND ADAPTIVE Q-PROP

For Q-Prop to be applied reliably, it is crucial to analyze how the variance of the estimator changes before and after the application of the control variate. Following prior work on control variates (Ross, 2006; Paisley et al., 2012), we first introduce $\eta(s_t)$ to Eq. 8, a weighing variable that modulates the strength of the control variate. This additional variable $\eta(s_t)$ does not introduce bias to the estimator.

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\eta(s_t)\bar{A}_w(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\eta(s_t)\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$  (9)

The variance of this estimator is given below, where $m=1\ldots M$ indexes the dimensions of $\theta$,

$\mathrm{Var}^*=\mathbb{E}_{\rho_\pi}\left[\sum_m \mathrm{Var}_{a_t}\!\left(\nabla_{\theta_m}\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\eta(s_t)\bar{A}(s_t,a_t))\right)\right]$.  (10)

If we choose $\eta(s_t)$ such that $\mathrm{Var}^*<\mathrm{Var}$, where $\mathrm{Var}=\mathbb{E}_{\rho_\pi}[\sum_m \mathrm{Var}_{a_t}(\nabla_{\theta_m}\log\pi_\theta(a_t|s_t)\hat{A}(s_t,a_t))]$ is the original estimator variance measure, then we have managed to reduce the variance. Directly analyzing the above variance measure is nontrivial, for the same reason that computing the optimal baseline is difficult (Weaver & Tao, 2001). In addition, it is often impractical to get multiple action samples from the same state, which prohibits using naive Monte Carlo to estimate the expectations. Instead, we propose a surrogate variance measure, $\mathrm{Var}=\mathbb{E}_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t,a_t))]$. A similar surrogate is also used by prior work on learning state-dependent baselines (Mnih & Gregor, 2014), and the benefit is that the measure becomes more tractable,

$\mathrm{Var}^*=\mathbb{E}_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t,a_t)-\eta(s_t)\bar{A}(s_t,a_t))]$
$\quad\;\;=\mathrm{Var}+\mathbb{E}_{\rho_\pi}[-2\eta(s_t)\,\mathrm{Cov}_{a_t}(\hat{A}(s_t,a_t),\bar{A}(s_t,a_t))+\eta(s_t)^2\,\mathrm{Var}_{a_t}(\bar{A}(s_t,a_t))]$.  (11)

Since $\mathbb{E}_\pi[\hat{A}(s_t,a_t)]=\mathbb{E}_\pi[\bar{A}(s_t,a_t)]=0$, the terms can be simplified as below,

$\mathrm{Cov}_{a_t}(\hat{A},\bar{A})=\mathbb{E}_\pi[\hat{A}(s_t,a_t)\bar{A}(s_t,a_t)]$
$\mathrm{Var}_{a_t}(\bar{A})=\mathbb{E}_\pi[\bar{A}(s_t,a_t)^2]=\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}^{\top}\,\Sigma_\theta(s_t)\,\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}$,  (12)

where $\Sigma_\theta(s_t)$ is the covariance matrix of the stochastic policy $\pi_\theta$. The nice property of Eq. 11 is that $\mathrm{Var}_{a_t}(\bar{A})$ is analytic and $\mathrm{Cov}_{a_t}(\hat{A},\bar{A})$ can be estimated with a single action sample. Using this estimate, we propose adaptive variants of Q-Prop that regulate the variance of the gradient estimate.

Adaptive Q-Prop. The optimal state-dependent factor $\eta(s_t)$ can be computed per state, according to $\eta^*(s_t)=\mathrm{Cov}_{a_t}(\hat{A},\bar{A})/\mathrm{Var}_{a_t}(\bar{A})$. This provides the maximum reduction in variance according to Eq. 11. Substituting $\eta^*(s_t)$ into Eq. 11, we get $\mathrm{Var}^*=\mathbb{E}_{\rho_\pi}[(1-\rho_{\mathrm{corr}}(\hat{A},\bar{A})^2)\,\mathrm{Var}_{a_t}(\hat{A})]$, where $\rho_{\mathrm{corr}}$ is the correlation coefficient, which achieves guaranteed variance reduction if at any state $\bar{A}$ is correlated with $\hat{A}$. We call this the fully adaptive Q-Prop method. An important conclusion from this analysis is that, in adaptive Q-Prop, the critic $Q_w$ does not necessarily need to approximate $Q^\pi$ well to produce good results. Its Taylor expansion merely needs to be correlated with $\hat{A}$, positively or even negatively. This is in contrast with actor-critic methods, where performance is greatly dependent on the absolute accuracy of the critic's approximation.

Conservative and Aggressive Q-Prop. In practice, the single-sample estimate of $\mathrm{Cov}_{a_t}(\hat{A},\bar{A})$ has high variance itself, and we propose the following two practical implementations of adaptive Q-Prop: (1) $\eta(s_t)=1$ if $\hat{\mathrm{Cov}}_{a_t}(\hat{A},\bar{A})>0$ and $\eta(s_t)=0$ otherwise, and (2) $\eta(s_t)=\mathrm{sign}(\hat{\mathrm{Cov}}_{a_t}(\hat{A},\bar{A}))$. The first implementation, which we call conservative Q-Prop, can be thought of as a more conservative version of Q-Prop, which effectively disables the control variate for some samples of the states. This is sensible, as if $\hat{A}$ and $\bar{A}$ are negatively correlated, it is likely that the critic is very poor. The second variant can correspondingly be termed aggressive Q-Prop, since it makes more liberal use of the control variate.
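The conservative and aggressive weights reduce to a sign test on the single-sample covariance estimate $\hat{A}\cdot\bar{A}$; a minimal sketch, assuming $\hat{A}$ and $\bar{A}$ have already been computed per state:

```python
# Sketch of the adaptive weights from Section 3.2. Cov(A_hat, A_bar) is
# estimated from a single sample as A_hat * A_bar (both are approximately
# centered), so eta is a per-state heuristic rather than an exact optimum.
import numpy as np

def eta_conservative(A_hat, A_bar):
    # disable the control variate wherever the critic looks uncorrelated
    return (A_hat * A_bar > 0).astype(float)

def eta_aggressive(A_hat, A_bar):
    # also exploit negative correlation by flipping the sign
    return np.sign(A_hat * A_bar)
```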
3.3 Q-PROP ALGORITHM

Pseudo-code for the adaptive Q-Prop algorithm is provided in Algorithm 1. It is a mixture of policy gradient and actor-critic. At each iteration, it first rolls out the stochastic policy to collect on-policy samples, adds the batch to a replay buffer, takes a few gradient steps on the critic, computes $\hat{A}$ and $\bar{A}$, and finally applies a gradient step on the policy $\pi_\theta$.

Algorithm 1 Adaptive Q-Prop
1: Initialize $w$ for critic $Q_w$, $\theta$ for stochastic policy $\pi_\theta$, and replay buffer $R\leftarrow\emptyset$.
2: repeat
3:   for $e=1,\ldots,E$ do  (collect $E$ episodes of on-policy experience using $\pi_\theta$)
4:     $s_{0,e}\sim p(s_0)$
5:     for $t=0,\ldots,T-1$ do
6:       $a_{t,e}\sim\pi_\theta(\cdot|s_{t,e})$, $s_{t+1,e}\sim p(\cdot|s_{t,e},a_{t,e})$, $r_{t,e}=r(s_{t,e},a_{t,e})$
7:   Add batch data $B=\{s_{0:T,1:E},\,a_{0:T-1,1:E},\,r_{0:T-1,1:E}\}$ to replay buffer $R$
8:   Take $ET$ gradient steps on $Q_w$ using $R$ and $\pi_\theta$
9:   Fit $V_\phi(s_t)$ using $B$
10:  Compute $\hat{A}_{t,e}$ using GAE($\lambda$) and $\bar{A}_{t,e}$ using Eq. 7
11:  Set $\eta_{t,e}$ based on Section 3.2
12:  Compute and center the learning signals $l_{t,e}=\hat{A}_{t,e}-\eta_{t,e}\bar{A}_{t,e}$
13:  Compute $\nabla_\theta J(\theta)\approx\frac{1}{ET}\sum_e\sum_t[\nabla_\theta\log\pi_\theta(a_{t,e}|s_{t,e})\,l_{t,e}+\eta_{t,e}\nabla_a Q_w(s_{t,e},a)|_{a=\mu_\theta(s_{t,e})}\nabla_\theta\mu_\theta(s_{t,e})]$
14:  Take a gradient step on $\pi_\theta$ using $\nabla_\theta J(\theta)$, optionally with a trust-region constraint using $B$
15: until $\pi_\theta$ converges.

In our implementation, the critic $Q_w$ is fitted with off-policy TD learning, using the same techniques as in DDPG (Lillicrap et al., 2016):

$w=\arg\min_w \mathbb{E}_{s_t\sim\rho_\beta(\cdot),\,a_t\sim\beta(\cdot|s_t)}[(r(s_t,a_t)+\gamma\,\mathbb{E}_\pi[Q'(s_{t+1},a_{t+1})]-Q_w(s_t,a_t))^2]$.  (13)

$V_\phi$ is fitted with the same technique as in (Schulman et al., 2016). Generalized advantage estimation (GAE) (Schulman et al., 2016) is used to estimate $\hat{A}$. The policy update can be done by any method that utilizes the first-order gradient and possibly the on-policy batch data, which includes trust-region policy optimization (TRPO) (Schulman et al., 2015). Importantly, this is just one possible implementation of Q-Prop, and in Appendix C we show a more general form that can interpolate between pure policy gradient and off-policy actor-critic.
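A sketch of the critic regression of Eq. 13 for a linear critic $Q_w(s,a)=w^\top\phi(s,a)$ with a slowly tracked target copy, as in DDPG. For simplicity it replaces the inner expectation $\mathbb{E}_\pi[Q'(s_{t+1},a_{t+1})]$ with the policy's mean action; `phi`, `mu`, `tau`, and the replay tuples are illustrative assumptions, not the paper's architecture.

```python
# Sketch of off-policy TD fitting of the critic (Eq. 13) with a target copy.
import numpy as np

def td_critic_step(batch, w, w_target, phi, mu, gamma=0.99, lr=1e-3, tau=0.005):
    """One pass over replay tuples (s, a, r, s_next); returns updated weights."""
    for (s, a, r, s_next) in batch:
        target = r + gamma * (w_target @ phi(s_next, mu(s_next)))
        td_err = w @ phi(s, a) - target
        w -= lr * td_err * phi(s, a)           # gradient of 0.5 * td_err^2
    w_target += tau * (w - w_target)           # slow target-network tracking
    return w, w_target
```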
3.4 LIMITATIONS

A limitation of Q-Prop is that if data collection is very fast, e.g. using fast simulators, the compute time per episode is bounded by the critic training at each iteration, which is similar to that of DDPG and usually much more than that of TRPO. However, in applications where data collection speed is the bottleneck, there is sufficient time between policy updates to fit $Q_w$ well, which can be done asynchronously from the data collection, and the compute time of Q-Prop will be about the same as that of TRPO.

Another limitation is robustness to bad critics. We empirically show that our conservative Q-Prop is more robust than standard Q-Prop and much more robust than pure off-policy actor-critic methods such as DDPG; however, estimating when an off-policy critic is reliable or not is still a fundamental problem that shall be further investigated. We can also alleviate this limitation by adopting more stable off-policy critic learning techniques such as Retrace($\lambda$) (Munos et al., 2016).

4 RELATED WORK

Variance reduction in policy gradient methods is a long-standing problem with a large body of prior work (Weaver & Tao, 2001; Greensmith et al., 2004; Schulman et al., 2016). However, exploration of action-dependent control variates is relatively recent, with most work focusing instead on simpler baselining techniques (Ross, 2006). A subtle exception is compatible feature approximation (Sutton et al., 1999), which can be viewed as a control variate, as explained in Appendix B. Another exception is the doubly robust estimator in contextual bandits (Dudik et al., 2011), which uses a different control variate whose bias cannot be tractably corrected. Control variates were explored recently not in RL but for approximate inference in stochastic models (Paisley et al., 2012), and the closest related work in that domain is the MuProp algorithm (Gu et al., 2016a), which uses a mean-field network as a surrogate for backpropagating a deterministic gradient through stochastic discrete variables. MuProp is not directly applicable to model-free RL because the dynamics are unknown; however, it can be if the dynamics are learned, as in model-based RL (Atkeson & Santamaria, 1997; Deisenroth & Rasmussen, 2011). This model-based Q-Prop is itself an interesting direction of research, as it effectively corrects bias in model-based learning.

Part of the benefit of Q-Prop is the ability to use off-policy data to improve on-policy policy gradient methods. Prior methods that combine off-policy data with policy gradients either introduce bias (Sutton et al., 1999; Silver et al., 2014) or use importance weighting, which is known to result in degenerate importance weights in high dimensions, resulting in very high variance (Precup, 2000; Levine & Koltun, 2013). Q-Prop provides a new approach for using off-policy data to reduce variance without introducing further bias.

Lastly, since Q-Prop uses both on-policy policy updates and off-policy critic learning, it can take advantage of prior work along both lines of research. We chose to implement Q-Prop on top of TRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but combining Q-Prop with other on-policy update schemes and off-policy critic training methods is an interesting direction for future work. For example, Q-Prop can also be used with other on-policy policy gradient methods such as A3C (Mnih et al., 2016) and off-policy advantage estimation methods such as Retrace($\lambda$) (Munos et al., 2016), GTD2 (Sutton et al., 2009), emphatic TD (Sutton et al., 2015), and WIS-LSTD (Mahmood et al., 2014).

5 EXPERIMENTS

[Figure 1: Illustrations of OpenAI Gym MuJoCo domains (Brockman et al., 2016; Duan et al., 2016): (a) Ant, (b) HalfCheetah, (c) Hopper, (d) Humanoid, (e) Reacher, (f) Swimmer, (g) Walker.]

We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gym benchmark (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012), as shown in Figure 1. Algorithms are identified by acronyms, followed by a number indicating batch size, except for DDPG, which is a prior online actor-critic algorithm (Lillicrap et al., 2016). "c-" and "a-" denote conservative and aggressive Q-Prop variants as described in Section 3.2. "TR-" denotes trust-region policy optimization (Schulman et al., 2015), while "V-" denotes vanilla policy gradient. For example, "TR-c-Q-Prop-5000" means conservative Q-Prop with the trust-region policy update and a batch size of 5000. "VPG" and "TRPO" are vanilla policy gradient and trust-region policy optimization respectively (Schulman et al., 2016; Duan et al., 2016). Unless otherwise stated, all policy gradient methods are implemented with GAE($\lambda=0.97$) (Schulman et al., 2016). Note that TRPO-GAE is currently the state-of-the-art method on most of the OpenAI Gym benchmark tasks, though our experiments show that a well-tuned DDPG implementation sometimes achieves better results. Our algorithm implementations are built on top of the rllab TRPO and DDPG codes from Duan et al. (2016) and are available at https://github.com/shaneshixiang/rllabplusplus. Policy and value function architectures and other training details, including hyperparameter values, are provided in Appendix D.

5.1 ADAPTIVE Q-PROP

First, it is useful to identify how reliable each variant of Q-Prop is. In this section, we analyze standard Q-Prop and two adaptive variants, c-Q-Prop and a-Q-Prop, and demonstrate the stability of the method across different batch sizes. Figure 2a shows a comparison of Q-Prop variants with trust-region updates on the HalfCheetah-v1 domain, along with the best performing TRPO hyperparameters. The results are consistent with theory: conservative Q-Prop achieves much more stable performance than the standard and aggressive variants, and all Q-Prop variants significantly outperform TRPO in terms of sample efficiency, e.g. conservative Q-Prop reaches an average reward of 4000 using about 10 times fewer samples than TRPO.

[Figure 2: Average return over episodes in HalfCheetah-v1 during learning, exploring adaptive Q-Prop methods and different batch sizes. (a) Standard Q-Prop vs adaptive variants. (b) Conservative Q-Prop vs TRPO across batch sizes. All variants of Q-Prop substantially outperform TRPO in terms of sample efficiency. TR-c-QP, conservative Q-Prop with the trust-region update, performs most stably across different batch sizes.]
Figure 2b shows the performance of conservative Q-Prop against TRPO across different batch sizes. Due to high variance in gradient estimates, TRPO typically requires very large batch sizes, e.g. 25000 steps or 25 episodes per update, to perform well. We show that our Q-Prop methods can learn even with just 1 episode per update, and achieve better sample efficiency with small batch sizes. This shows that Q-Prop significantly reduces the variance compared to the prior methods.

As we discussed in Section 1, stability is a significant challenge with state-of-the-art deep RL methods, and is very important for being able to reliably use deep RL for real-world tasks. In the rest of the experiments, we will use conservative Q-Prop as the main Q-Prop implementation.

5.2 EVALUATION ACROSS ALGORITHMS

[Figure 3: Average return over episodes in HalfCheetah-v1 and Humanoid-v1 during learning, comparing Q-Prop against other model-free algorithms. (a) Comparing algorithms on HalfCheetah-v1. (b) Comparing algorithms on Humanoid-v1. Q-Prop with vanilla policy gradient outperforms TRPO on HalfCheetah. Q-Prop significantly outperforms TRPO in convergence time on Humanoid.]

In this section, we evaluate two versions of conservative Q-Prop, v-c-Q-Prop using vanilla policy gradient and TR-c-Q-Prop using trust-region updates, against other model-free algorithms on the HalfCheetah-v1 domain. Figure 3a shows that c-Q-Prop methods significantly outperform the best TRPO and VPG methods. Even Q-Prop with vanilla policy gradient is comparable to TRPO, confirming the significant benefits from variance reduction. DDPG, on the other hand, exhibits inconsistent performance. With proper reward scaling, i.e. "DDPG-r0.1", it outperforms other methods as well as the DDPG results reported in prior work (Duan et al., 2016; Amos et al., 2016). This illustrates the sensitivity of DDPG to hyperparameter settings, while Q-Prop exhibits more stable, monotonic learning behavior when compared to DDPG. In the next section we show this improved stability allows Q-Prop to outperform DDPG in more complex domains.

5.3 EVALUATION ACROSS DOMAINS

Lastly, we evaluate Q-Prop against TRPO and DDPG across multiple domains. While the gym environments are biased toward locomotion, we expect we can achieve similar performance on manipulation tasks such as those in Lillicrap et al. (2016). Table 1 summarizes the results, including the best attained average rewards and the steps to convergence. Q-Prop consistently outperforms TRPO in terms of sample complexity and sometimes achieves higher rewards than DDPG in more complex domains. A particularly notable case is shown in Figure 3b, where Q-Prop substantially improves sample efficiency over TRPO on the Humanoid-v1 domain, while DDPG cannot find a good solution. The better performance on the more complex domains highlights the importance of stable deep RL algorithms: while costly hyperparameter sweeps may allow even less stable algorithms to perform well on simpler problems, more complex tasks might have such narrow regions of stable hyperparameters that discovering them becomes impractical.

            |           |     TR-c-Q-Prop      |         TRPO         |         DDPG
Domain      | Threshold | MaxReturn | Episodes | MaxReturn | Episodes | MaxReturn | Episodes
Ant         |      3500 |      3534 |     4975 |      4239 |    13825 |       957 |      N/A
HalfCheetah |      4700 |      4811 |    20785 |      4734 |    26370 |      7490 |      600
Hopper      |      2000 |      2957 |     5945 |      2486 |     5715 |      2604 |      965
Humanoid    |      2500 |     >3492 |    14750 |       918 |   >30000 |       552 |      N/A
Reacher     |        -7 |      -6.0 |     2060 |      -6.7 |     2840 |      -6.6 |     1800
Swimmer     |        90 |       103 |     2045 |       110 |     3025 |       150 |      500
Walker      |      3000 |      4030 |     3685 |      3567 |    18875 |      3626 |     2125

Table 1: Q-Prop, TRPO and DDPG results showing the max average rewards attained in the first 30k episodes and the episodes needed to cross specific reward thresholds. Q-Prop often learns more sample-efficiently than TRPO and can solve difficult domains such as Humanoid better than DDPG.

6 DISCUSSION AND CONCLUSION

We presented Q-Prop, a policy gradient algorithm that combines reliable, consistent, and potentially unbiased on-policy gradient estimation with a sample-efficient off-policy critic that acts as a control variate. The method provides a large improvement in sample efficiency compared to state-of-the-art policy gradient methods such as TRPO, while outperforming state-of-the-art actor-critic methods on more challenging tasks such as humanoid locomotion. We hope that techniques like these, which combine on-policy Monte Carlo gradient estimation with sample-efficient variance reduction through off-policy critics, will eventually lead to deep reinforcement learning algorithms that are more stable and efficient, and therefore better suited for application to complex real-world learning tasks.

ACKNOWLEDGMENTS

We thank Rocky Duan for sharing and answering questions about rllab code, and Yutian Chen and Laurent Dinh for discussion on control variates. SG and RT were funded by NSERC, Google, and EPSRC grants EP/L000776/1 and EP/M026957/1. ZG was funded by EPSRC grant EP/J012300/1 and the Alan Turing Institute (EP/N510129/1).
r11-EhWBx
Efficient Policy Gradient using a Critic
8: Top 50% of accepted papers, clear accept
This paper presents a model-free policy gradient approach for reinforcement learning that combines on-policy updates with an off-policy critic. The hope is to learn continuous control in a sample-efficient fashion. The approach is validated on a number of low-dimensional continuous control tasks in a simulated environment. The paper is very well written, easy to follow, and provides an adequate context with which to appreciate the contributions it brings. Although this reviewer is not an expert in this literature, the proposed approach appears novel. The Q-Prop estimator appears to be a general and useful method for policy learning, and the experimental validations provide adequate support for the claims of improved sample efficiency. The detailed derivations given in the Supplementary Materials are very useful. I like the paper and I don't have much to comment on. Perhaps a discussion of the following aspects would add to the depth: 1) a comparison of the methods at a given computational cost, instead of by the number of episodes seen; 2) a discussion of the limitations of the technique: are there situations where convergence is difficult? Possible typo: in equation (4), should we read $... + \gamma Q_w( ...$ instead of $... + \gamma Q( ...$? If not, then what is $Q()$ without subscript $w$?
3: The reviewer is fairly confident that the evaluation is correct
S1VaB4cex
ICLR.cc/2017/conference
2017
FractalNet: Ultra-Deep Neural Networks without Residuals
["Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich"]
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.
["neural networks", "fractal networks", "fractalnet", "residuals fractalnet", "residuals", "design strategy", "neural network", "application", "simple expansion rule", "deep networks"]
ABSTRACT

We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.

1 INTRODUCTION

Residual networks (He et al., 2016a), or ResNets, lead a recent and dramatic increase in both depth and accuracy of convolutional neural networks, facilitated by constraining the network to learn residuals. ResNet variants (He et al., 2016a;b; Huang et al., 2016b) and related architectures (Srivastava et al., 2015) employ the common technique of initializing and anchoring, via a pass-through channel, a network to the identity function. Training now differs in two respects. First, the objective changes to learning residual outputs, rather than unreferenced absolute mappings. Second, these networks exhibit a type of deep supervision (Lee et al., 2014), as near-identity layers effectively reduce distance to the loss. He et al. (2016a) speculate that the former, the residual formulation itself, is crucial.

We show otherwise, by constructing a competitive extremely deep architecture that does not rely on residuals. Our design principle is pure enough to communicate in a single word, fractal, and a simple diagram (Figure 1). Yet, fractal networks implicitly recapitulate many properties hard-wired into previous successful architectures. Deep supervision not only arises automatically, but also drives a type of student-teacher learning (Ba & Caruana, 2014; Urban et al., 2017) internal to the network. Modular building blocks of other designs (Szegedy et al., 2015; Liao & Carneiro, 2015) resemble special cases of a fractal network's nested substructure.

For fractal networks, simplicity of training mirrors simplicity of design. A single loss, attached to the final layer, suffices to drive internal behavior mimicking deep supervision. Parameters are randomly initialized. As they contain subnetworks of many depths, fractal networks are robust to choice of overall depth; make them deep enough and training will carve out a useful assembly of subnetworks.

The entirety of emergent behavior resulting from a fractal design may erode the need for recent engineering tricks intended to achieve similar effects. These tricks include residual functional forms with identity initialization, manual deep supervision, hand-crafted architectural modules, and student-teacher training regimes. Section 2 reviews this large body of related techniques.
Hybrid designs could certainly integrate any of them with a fractal architecture; we leave open the question of the degree to which such hybrids are synergistic.

[Figure 1: Fractal architecture. Left: A simple expansion rule generates a fractal architecture with $C$ intertwined columns. The base case, $f_1(z)$, has a single layer of the chosen type (e.g. convolutional) between input and output. Join layers compute the element-wise mean. Right: Deep convolutional networks periodically reduce spatial resolution via pooling. A fractal version uses $f_C$ as a building block between pooling layers. Stacking $B$ such blocks yields a network whose total depth, measured in terms of convolution layers, is $B \cdot 2^{C-1}$. This example has depth 40 ($B=5$, $C=4$). Diagram labels: fractal expansion rule; Blocks 1-5; layer key: convolution, join, pool, prediction.]

Our main contribution is twofold:

- We introduce FractalNet, the first simple alternative to ResNet. FractalNet shows that explicit residual learning is not a requirement for building ultra-deep neural networks.
- Through analysis and experiments, we elucidate connections between FractalNet and an array of phenomena engineered into previous deep network designs.

As an additional contribution, we develop drop-path, a novel regularization protocol for ultra-deep fractal networks. Without data augmentation, fractal networks, trained with drop-path and dropout (Hinton et al., 2012), exceed the performance of residual networks regularized via stochastic depth (Huang et al., 2016b). Though, like stochastic depth, it randomly removes macro-scale components, drop-path further exploits our fractal structure in choosing which components to disable.

Drop-path constitutes not only a regularization strategy, but also provides a means of optionally imparting fractal networks with anytime behavior. A particular schedule of dropped paths during learning prevents subnetworks of different depths from co-adapting. As a consequence, both shallow and deep subnetworks must individually produce correct output. Querying a shallow subnetwork thus yields a quick and moderately accurate result in advance of completion of the full network.

Section 3 elaborates the technical details of fractal networks and drop-path. Section 4 provides experimental comparisons to residual networks across the CIFAR-10, CIFAR-100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011), and ImageNet (Deng et al., 2009) datasets. We also evaluate regularization and data augmentation strategies, investigate subnetwork student-teacher behavior during training, and benchmark anytime networks obtained using drop-path. Section 5 provides synthesis. By virtue of encapsulating many known, yet seemingly distinct, design principles, self-similar structure may materialize as a fundamental component of neural architectures.

2 RELATED WORK

Deepening feed-forward neural networks has generally returned dividends in performance. A striking example within the computer vision community is the improvement on the ImageNet (Deng et al., 2009) classification task when transitioning from AlexNet (Krizhevsky et al., 2012) to VGG (Simonyan & Zisserman, 2015) to GoogLeNet (Szegedy et al., 2015) to ResNet (He et al., 2016a). Unfortunately, greater depth also makes training more challenging, at least when employing a first-order optimization method with randomly initialized layers.
As the network grows deeper and more non-linear, the linear approximation of a gradient step becomes increasingly inappropriate. Desire to overcome these difficulties drives research on both optimization techniques and network architectures.

On the optimization side, much recent work yields improvements. To prevent vanishing gradients, ReLU activation functions now widely replace sigmoid and tanh units (Nair & Hinton, 2010). This subject remains an area of active inquiry, with various tweaks on ReLUs, e.g. PReLUs (He et al., 2015) and ELUs (Clevert et al., 2016). Even with ReLUs, employing batch normalization (Ioffe & Szegedy, 2015) speeds training by reducing internal covariate shift. Good initialization can also ameliorate this problem (Glorot & Bengio, 2010; Mishkin & Matas, 2016). Path-SGD (Neyshabur et al., 2015) offers an alternative normalization scheme. Progress in optimization is somewhat orthogonal to our architectural focus, with the expectation that advances in either are ripe for combination.

Notable ideas in architecture reach back to skip connections, the earliest example of a nontrivial routing pattern within a neural network. Recent work further elaborates upon them (Maire et al., 2014; Hariharan et al., 2015). Highway networks (Srivastava et al., 2015) and ResNet (He et al., 2016a;b) offer additional twists in the form of parameterized pass-through and gating. In work subsequent to our own, Huang et al. (2016a) investigate a ResNet variant with explicit skip connections. These methods share distinction as the only other designs demonstrated to scale to hundreds of layers and beyond. ResNet's building block uses the identity map as an anchor point and explicitly parameterizes an additive correction term (the residual). Identity initialization also appears in the context of recurrent networks (Le et al., 2015). A tendency of ResNet and highway networks to fall back to the identity map may make their effective depth much smaller than their nominal depth.

Some prior results hint at what we experimentally demonstrate in Section 4. Namely, reduction of effective depth is key to training extremely deep networks; residuals are incidental. Huang et al. (2016b) provide one clue in their work on stochastic depth: randomly dropping layers from ResNet during training, thereby shrinking network depth by a constant factor, provides additional performance benefit. We build upon this intuition through drop-path, which shrinks depth much more drastically.

The success of deep supervision (Lee et al., 2014) provides another clue that effective depth is crucial. Here, an auxiliary loss, forked off mid-level layers, introduces a shorter path during backpropagation. The layer at the fork receives two gradients, originating from the main loss and the auxiliary loss, that are added together. Deep supervision is now common, being adopted, for example, by GoogLeNet (Szegedy et al., 2015). However, irrelevance of the auxiliary loss at test time introduces the drawback of having a discrepancy between the actual objective and that used for training.

Exploration of the student-teacher paradigm (Ba & Caruana, 2014) illuminates the potential for interplay between networks of different depth. In the model compression scenario, a deeper network (previously trained) guides and improves the learning of a shallower and faster student network (Ba & Caruana, 2014; Urban et al., 2017). This is accomplished by feeding unlabeled data through the teacher and having the student mimic the teacher's soft output predictions.
FitNets (Romero et al., 2015) explicitly couple students and teachers, forcing mimic behavior across several intermediate points in the network. Our fractal networks capture yet another alternative, in the form of implicit coupling, with the potential for bidirectional information flow between shallow and deep subnetworks.

Widening networks, by using larger modules in place of individual layers, has also produced performance gains. For example, an Inception module (Szegedy et al., 2015) concatenates results of convolutional layers of different receptive field size. Stacking these modules forms the GoogLeNet architecture. Liao & Carneiro (2015) employ a variant with maxout in place of concatenation. Figure 1 makes apparent our connection with such work. As a fractal network deepens, it also widens. Moreover, note that stacking two 2D convolutional layers with the same spatial receptive field (e.g. $3\times 3$) achieves a larger ($5\times 5$) receptive field. A horizontal cross-section of a fractal network is reminiscent of an Inception module, except with additional joins due to recursive structure.

3 FRACTAL NETWORKS

We begin with a more formal presentation of the ideas sketched in Figure 1. Convolutional neural networks serve as our running example and, in the subsequent section, our experimental platform. However, it is worth emphasizing that our framework is more general. In principle, convolutional layers in Figure 1 could be replaced by a different layer type, or even a custom-designed module or subnetwork, in order to generate other fractal architectures.

Let $C$ denote the index of the truncated fractal $f_C(\cdot)$. Our network's structure, connections and layer types, is defined by $f_C(\cdot)$. A network consisting of a single convolutional layer is the base case:

$f_1(z) = \mathrm{conv}(z)$  (1)

We define successive fractals recursively:

$f_{C+1}(z) = [(f_C \circ f_C)(z)] \oplus [\mathrm{conv}(z)]$  (2)

where $\circ$ denotes composition and $\oplus$ a join operation. When drawn in the style of Figure 1, $C$ corresponds to the number of columns, or width, of network $f_C(\cdot)$. Depth, defined to be the number of conv layers on the longest path between input and output, scales as $2^{C-1}$. Convolutional networks for classification typically intersperse pooling layers. We achieve the same by using $f_C(\cdot)$ as a building block and stacking it with subsequent pooling layers $B$ times, yielding total depth $B \cdot 2^{C-1}$.
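A minimal PyTorch sketch of the expansion rule in Eqs. 1 and 2 (an assumption for illustration; the paper's implementation uses Caffe). It keeps the nested pairwise joins of the recursion rather than the collapsed multi-column joins shown in Figure 1, computes joins as element-wise means, and omits batch normalization and drop-path:

```python
# Sketch of the fractal expansion rule: f_1 = conv; f_{C+1} = join(f_C o f_C, conv).
import torch.nn as nn

def conv_unit(channels):
    return nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

class Fractal(nn.Module):
    """Recursive fractal block with C columns and constant channel count."""
    def __init__(self, C, channels):
        super().__init__()
        self.conv = conv_unit(channels)
        self.sub = (nn.Sequential(Fractal(C - 1, channels),
                                  Fractal(C - 1, channels)) if C > 1 else None)

    def forward(self, z):
        if self.sub is None:
            return self.conv(z)                      # base case f_1(z)
        return 0.5 * (self.sub(z) + self.conv(z))    # join = element-wise mean
```

The longest path through `Fractal(C, ...)` doubles with each expansion, matching the $2^{C-1}$ depth scaling noted above.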
The join operation $\oplus$ merges two feature blobs into one. Here, a blob is the result of a conv layer: a tensor holding activations for a fixed number of channels over a spatial domain. The channel count corresponds to the size of the filter set in the preceding conv layer. As the fractal is expanded, we collapse neighboring joins into a single join layer which spans multiple columns, as shown on the right side of Figure 1. The join layer merges all of its input feature blobs into a single output blob.

Several choices seem reasonable for the action of a join layer, including concatenation and addition. We instantiate each join to compute the element-wise mean of its inputs. This is appropriate for convolutional networks in which channel count is set the same for all conv layers within a fractal block. Averaging might appear similar to ResNet's addition operation, but there are critical differences:

- ResNet makes a clear distinction between pass-through and residual signals. In FractalNet, no signal is privileged. Every input to a join layer is the output of an immediately preceding conv layer. The network structure alone cannot identify any as being primary.
- Drop-path regularization, as described next in Section 3.1, forces each input to a join to be individually reliable. This reduces the reward for even implicitly learning to allocate part of one signal to act as a residual for another.
- Experiments show that we can extract high-performance subnetworks consisting of a single column (Section 4.2). Such a subnetwork is effectively devoid of joins, as only a single path is active throughout. They produce no signal to which a residual could be added.

Together, these properties ensure that join layers are not an alternative method of residual learning.

3.1 REGULARIZATION VIA DROP-PATH

Dropout (Hinton et al., 2012) and drop-connect (Wan et al., 2013) modify interactions between sequential network layers in order to discourage co-adaptation. Since fractal networks contain additional macro-scale structure, we propose to complement these techniques with an analogous coarse-scale regularization scheme.

Figure 2 illustrates drop-path. Just as dropout prevents co-adaptation of activations, drop-path prevents co-adaptation of parallel paths by randomly dropping operands of the join layers. This discourages the network from using one input path as an anchor and another as a corrective term (a configuration that, if not prevented, is prone to overfitting). We consider two sampling strategies:

- Local: a join drops each input with fixed probability, but we make sure at least one survives.
- Global: a single path is selected for the entire network. We restrict this path to be a single column, thereby promoting individual columns as independently strong predictors.

[Figure 2: Drop-path. A fractal network block functions with some connections between layers disabled, provided some path from input to output is still available. Drop-path guarantees at least one such path, while sampling a subnetwork with many other paths disabled. During training, presenting a different active subnetwork to each mini-batch prevents co-adaptation of parallel paths. A global sampling strategy returns a single column as a subnetwork. Alternating it with local sampling encourages the development of individual columns as performant stand-alone subnetworks. Panels: Iteration #1 (Local), Iteration #2 (Global), Iteration #3 (Local), Iteration #4 (Global).]

As with dropout, signals may need appropriate rescaling. With element-wise means, this is trivial; each join computes the mean of only its active inputs.

In experiments, we train with dropout and a mixture model of 50% local and 50% global sampling for drop-path. We sample a new subnetwork each mini-batch. With sufficient memory, we can simultaneously evaluate one local sample and all global samples for each mini-batch by keeping separate networks and tying them together via weight sharing.

While fractal connectivity permits the use of paths of any length, global drop-path forces the use of many paths whose lengths differ by orders of magnitude (powers of 2). The subnetworks sampled by drop-path thus exhibit large structural diversity. This property stands in contrast to stochastic depth regularization of ResNet, which, by virtue of using a fixed drop probability for each layer in a chain, samples subnetworks with a concentrated depth distribution (Huang et al., 2016b).

Global drop-path serves not only as a regularizer, but also as a diagnostic tool. Monitoring performance of individual columns provides insight into both the network and training mechanisms, as Section 4.3 discusses in more detail. Individually strong columns of various depths also give users choices in the trade-off between speed (shallow) and accuracy (deep).
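Both sampling strategies admit a short sketch; `inputs` is the list of tensors arriving at one join, and the 15% local drop rate matches the setting reported in Section 4.1. Function names and the column-indexing convention are illustrative:

```python
# Sketch of local and global drop-path sampling (Section 3.1).
import random

def local_drop_path_join(inputs, drop_prob=0.15):
    kept = [x for x in inputs if random.random() > drop_prob]
    if not kept:                      # guarantee at least one survivor
        kept = [random.choice(inputs)]
    return sum(kept) / len(kept)      # rescale: mean of active inputs only

def sample_global_column(num_columns):
    # one column is chosen for the whole network for this mini-batch
    return random.randrange(num_columns)
```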
3.2 DATA AUGMENTATION

Data augmentation can reduce the need for regularization. ResNet demonstrates this, achieving a 27.22% error rate on CIFAR-100 with augmentation compared to 44.76% without (Huang et al., 2016b). While augmentation benefits fractal networks, we show that drop-path provides highly effective regularization, allowing them to achieve competitive results even without data augmentation.

3.3 IMPLEMENTATION DETAILS

We implement FractalNet using Caffe (Jia et al., 2014). Purely for convenience, we flip the order of pool and join layers at the end of a block in Figure 1. We pool individual columns immediately before the joins spanning all columns, rather than pooling once immediately after them.

We train fractal networks using stochastic gradient descent with momentum. As is now standard, we employ batch normalization together with each conv layer (convolution, batch norm, then ReLU).

Method                                   | C100  | C100+ | C100++ | C10   | C10+ | C10++ | SVHN
Network in Network (Lin et al., 2013)    | 35.68 |   -   |   -    | 10.41 | 8.81 |   -   | 2.35
Generalized Pooling (Lee et al., 2016)   | 32.37 |   -   |   -    |  7.62 | 6.05 |   -   | 1.69
Recurrent CNN (Liang & Hu, 2015)         | 31.75 |   -   |   -    |  8.69 | 7.09 |   -   | 1.77
Multi-scale (Liao & Carneiro, 2015)      | 27.56 |   -   |   -    |  6.87 |  -   |   -   | 1.76
FitNet (Romero et al., 2015)             |   -   | 35.04 |   -    |   -   | 8.39 |   -   | 2.42
Deeply Supervised (Lee et al., 2014)     |   -   | 34.57 |   -    |  9.69 | 7.97 |   -   | 1.92
All-CNN (Springenberg et al., 2014)      |   -   | 33.71 |   -    |  9.08 | 7.25 | 4.41  |  -
Highway Net (Srivastava et al., 2015)    |   -   | 32.39 |   -    |   -   | 7.72 |   -   |  -
ELU (Clevert et al., 2016)               |   -   | 24.28 |   -    |   -   | 6.55 |   -   |  -
Scalable BO (Snoek et al., 2015)         |   -   |   -   | 27.04  |   -   |  -   | 6.37  | 1.77
Fractional Max-Pool (Graham, 2014)       |   -   |   -   | 26.32  |   -   |  -   | 3.47  |  -
FitResNet (Mishkin & Matas, 2016)        |   -   | 27.66 |   -    |   -   | 5.84 |   -   |  -
ResNet (He et al., 2016a)                |   -   |   -   |   -    |   -   | 6.61 |   -   |  -
ResNet by (Huang et al., 2016b)          | 44.76 | 27.22 |   -    | 13.63 | 6.41 |   -   | 2.01
Stochastic Depth (Huang et al., 2016b)   | 37.80 | 24.58 |   -    | 11.66 | 5.23 |   -   | 1.75
Identity Mapping (He et al., 2016b)      |   -   | 22.68 |   -    |   -   | 4.69 |   -   |  -
ResNet in ResNet (Targ et al., 2016)     |   -   | 22.90 |   -    |   -   | 5.01 |   -   |  -
Wide (Zagoruyko & Komodakis, 2016)       |   -   | 20.50 |   -    |   -   | 4.17 |   -   |  -
DenseNet-BC (Huang et al., 2016a) [1]    | 19.64 | 17.60 |   -    |  5.19 | 3.62 |   -   | 1.74
FractalNet (20 layers, 38.6M params)     | 35.34 | 23.30 | 22.85  | 10.18 | 5.22 | 5.11  | 2.01
  + drop-path + dropout                  | 28.20 | 23.73 | 23.36  |  7.33 | 4.60 | 4.59  | 1.87
    deepest column alone                 | 29.05 | 24.32 | 23.60  |  7.27 | 4.68 | 4.63  | 1.89
FractalNet (40 layers, 22.9M params) [2] |   -   | 22.49 | 21.49  |   -   | 5.24 | 5.21  |  -

Table 1: CIFAR-100/CIFAR-10/SVHN. We compare test error (%) with other leading methods, trained with either no data augmentation, translation/mirroring (+), or more substantial augmentation (++). Our main point of comparison is ResNet. We closely match its benchmark results using data augmentation, and outperform it by large margins without data augmentation. Training with drop-path, we can extract from FractalNet single-column (plain) networks that are highly competitive.

4 EXPERIMENTS

The CIFAR, SVHN, and ImageNet datasets serve as testbeds for comparison to prior work and analysis of FractalNet's internal behavior. We evaluate performance on the standard classification task associated with each dataset. For CIFAR and SVHN, which consist of $32\times 32$ images, we set our fractal network to have 5 blocks ($B=5$) with $2\times 2$ non-overlapping max-pooling and subsampling applied after each. This reduces the input $32\times 32$ spatial resolution to $1\times 1$ over the course of the entire network. A softmax prediction layer attaches at the end of the network. Unless otherwise noted, we set the number of filter channels within blocks 1 through 5 as $(64, 128, 256, 512, 512)$, mostly matching the convention of doubling the number of channels after halving spatial resolution.
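Building on the Fractal module sketched above, this block structure might be assembled as follows. The channel-widening convolution at each block entry is a simplification introduced here (the paper changes channel counts inside the blocks and pools columns before the final join), so treat this builder as illustrative:

```python
# Sketch of the CIFAR/SVHN macro-architecture: B = 5 fractal blocks,
# 2x2 max-pooling after each, and the stated channel progression.
import torch.nn as nn

def fractal_net(C=3, channels=(64, 128, 256, 512, 512), num_classes=100):
    layers, in_ch = [], 3
    for out_ch in channels:
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),  # widen channels
                   Fractal(C, out_ch),                      # fractal block
                   nn.MaxPool2d(2)]                         # halve resolution
        in_ch = out_ch
    # five poolings reduce 32x32 inputs to 1x1, so the head is a single linear
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(channels[-1], num_classes))
```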
For ImageNet, we choose a fractal architecture to facilitate direct comparison with the 34-layer ResNet of He et al. (2016a). We use the same first and last layer as ResNet-34, but change the middle of the network to consist of 4 blocks ($B=4$), each of 8 layers ($C=4$ columns). We use a filter channel progression of $(128, 256, 512, 1024)$ in blocks 1 through 4.

4.1 TRAINING

For experiments using dropout, we fix the drop rate per block at $(0\%, 10\%, 20\%, 30\%, 40\%)$, similar to Clevert et al. (2016). Local drop-path uses a 15% drop rate across the entire network.

[Footnote 1: Densely connected networks (DenseNets) are concurrent work, appearing subsequent to our original arXiv paper on FractalNet. A variant of residual networks, they swap addition for concatenation in the residual functional form. We report performance of their 250-layer DenseNet-BC network with growth rate $k=24$.]
[Footnote 2: This deeper (4 column) FractalNet has fewer parameters. We vary column width: $(128, 64, 32, 16)$ channels across columns initially, doubling each block except the last. A linear projection temporarily widens thinner columns before joins. As in Iandola et al. (2016), we switch to a mix of $1\times 1$ and $3\times 3$ convolutional filters.]

Method        | Top-1 (%) | Top-5 (%)
VGG-16        |   28.07   |   9.33
ResNet-34 C   |   24.19   |   7.40
FractalNet-34 |   24.12   |   7.39

Table 2: ImageNet (validation set, 10-crop).

Cols. | Depth | Params. | Error (%)
  1   |   5   |  0.3M   |  37.32
  2   |  10   |  0.8M   |  30.71
  3   |  20   |  2.1M   |  27.69
  4   |  40   |  4.8M   |  27.38
  5   |  80   | 10.2M   |  26.46
  6   | 160   | 21.1M   |  27.38

Table 3: Ultra-deep fractal networks (CIFAR-100++). Increasing depth greatly improves accuracy until eventual diminishing returns. Contrast with plain networks, which are not trainable if made too deep (Table 4).

Model          | Depth | Train Loss | Error (%)
Plain          |   5   |   0.786    |  36.62
Plain          |  10   |   0.159    |  32.47
Plain          |  20   |   0.037    |  31.31
Plain          |  40   |   0.580    |  38.84
Fractal Col #1 |   5   |   0.677    |  37.23
Fractal Col #2 |  10   |   0.141    |  32.85
Fractal Col #3 |  20   |   0.029    |  31.31
Fractal Col #4 |  40   |   0.016    |  31.75
Fractal Full   |  40   |   0.015    |  27.40

Table 4: Fractal structure as a training apparatus (CIFAR-100++). Plain networks perform well if moderately deep, but exhibit worse convergence during training if instantiated with great depth. However, as a column trained within, and then extracted from, a fractal network with mixed drop-path, we recover a plain network that overcomes such depth limitation (possibly due to a student-teacher effect).

We run for 400 epochs on CIFAR, 20 epochs on SVHN, and 70 epochs on ImageNet. Our learning rate starts at 0.02 (for ImageNet, 0.001) and we train using stochastic gradient descent with batch size 100 (for ImageNet, 32) and momentum 0.9. For CIFAR/SVHN, we drop the learning rate by a factor of 10 whenever the number of remaining epochs halves. For ImageNet, we drop by a factor of 10 at epochs 50 and 65. We use Xavier initialization (Glorot & Bengio, 2010).

A widely employed (Lin et al., 2013; Clevert et al., 2016; Srivastava et al., 2015; He et al., 2016a;b; Huang et al., 2016b; Targ et al., 2016) scheme for data augmentation on CIFAR consists of only horizontal mirroring and translation (uniform offsets in $[-4, 4]$), with images zero-padded where needed after mean subtraction.
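A sketch of exactly this "+" augmentation scheme, assuming `img` is a mean-subtracted (C, H, W) array; names and the channel-first convention are illustrative:

```python
# Sketch of the "+" augmentation: mirror plus translation in [-4, 4],
# zero-padded after mean subtraction.
import numpy as np

def augment(img, max_shift=4, rng=np.random):
    if rng.rand() < 0.5:
        img = img[:, :, ::-1]                          # horizontal mirror
    dy, dx = rng.randint(-max_shift, max_shift + 1, size=2)
    c, h, w = img.shape
    padded = np.zeros((c, h + 2 * max_shift, w + 2 * max_shift), img.dtype)
    padded[:, max_shift:max_shift + h, max_shift:max_shift + w] = img
    y, x = max_shift + dy, max_shift + dx
    return padded[:, y:y + h, x:x + w]                 # shifted, zero-padded crop
```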
We denote results achieved using no more than this degree of augmentation by appending a "+" to the dataset name (e.g. CIFAR-100+). A "++" marks results reliant on more data augmentation; here exact schemes may vary. Our entry in this category is modest and simply changes the zero-padding to reflect-padding.

4.2 RESULTS

Table 1 compares the performance of FractalNet on CIFAR and SVHN with competing methods. FractalNet (depth 20) outperforms the original ResNet across the board. With data augmentation, our CIFAR-100 accuracy is close to that of the best ResNet variants. With neither augmentation nor regularization, FractalNet's performance on CIFAR is superior to both ResNet and ResNet with stochastic depth, suggesting that FractalNet may be less prone to overfitting. Most methods perform similarly on SVHN. Increasing depth to 40, while borrowing some parameter reduction tricks (Iandola et al., 2016), reveals FractalNet's performance to be consistent across a range of configuration choices.

Experiments without data augmentation highlight the power of drop-path regularization. On CIFAR-100, drop-path reduces FractalNet's error rate from 35.34% to 28.20%. Unregularized ResNet is far behind (44.76%) and ResNet with stochastic depth (37.80%) does not catch up to our unregularized starting point of 35.34%. CIFAR-10 mirrors this story. With data augmentation, drop-path provides a boost (CIFAR-10), or does not significantly influence FractalNet's performance (CIFAR-100).

Note that the performance of the deepest column of the fractal network is close to that of the full network (statistically equivalent on CIFAR-10). This suggests that the fractal structure may be more important as a learning framework than as a final model architecture.

Table 2 shows that FractalNet scales to ImageNet, matching ResNet (He et al., 2016a) at equal depth. Note that, concurrent with our work, refinements to the residual network paradigm further improve the state-of-the-art on ImageNet. Wide residual networks (Zagoruyko & Komodakis, 2016) of 34 layers reduce single-crop Top-1 and Top-5 validation error by approximately 2% and 1%, respectively, over ResNet-34 by doubling feature channels in each layer. DenseNets (Huang et al., 2016a) substantially improve performance by building residual blocks that concatenate rather than add feature channels.

[Figure 3: Implicit deep supervision. Left: Evolution of loss for plain networks of depth 5, 10, 20 and 40 trained on CIFAR-100. Training becomes increasingly difficult for deeper networks. At 40 layers, we are unable to train the network satisfactorily. Right: We train a 4-column fractal network with mixed drop-path, monitoring its loss as well as the losses of its four subnetworks corresponding to individual columns of the same depth as the plain networks. As the 20-layer subnetwork starts to stabilize, drop-path puts pressure on the 40-layer column to adapt, with the rest of the network as its teacher. This explains the elbow-shaped learning curve for Col #4 that occurs around 25 epochs.]

Table 3 demonstrates that FractalNet resists performance degradation as we increase $C$ to obtain extremely deep networks (160 layers for $C=6$). Scores in this table are not comparable to those in Table 1.
For time and memory efficiency, we reduced block-wise feature channels to $(16, 32, 64, 128, 128)$ and the batch size to 50 for the supporting experiments in Tables 3 and 4.

Table 4 provides a baseline showing that training of plain deep networks begins to degrade by the time their depth reaches 40 layers. In our experience, a plain 160-layer network completely fails to converge. This table also highlights the ability to use FractalNet and drop-path as an engine for extracting trained networks (columns) with the same topology as plain networks, but much higher test performance.

4.3 INTROSPECTION

With Figure 3, we examine the evolution of a 40-layer FractalNet during training. Tracking columns individually (recording their losses when run as stand-alone networks), we observe that the 40-layer column initially improves slowly, but picks up once the loss of the rest of the network begins to stabilize. Contrast with a plain 40-layer network trained alone (dashed blue line), which never makes fast progress. The column has the same initial plateau, but subsequently improves after 25 epochs, producing a loss curve uncharacteristic of plain networks.

We hypothesize that the fractal structure triggers effects akin to deep supervision and lateral student-teacher information flow. Column #4 joins with column #3 every other layer, and in every fourth layer this join involves no other columns. Once the fractal network partially relies on the signal going through column #3, drop-path puts pressure on column #4 to produce a replacement signal when column #3 is dropped. This task has constrained scope. A particular drop only requires two consecutive layers in column #4 to substitute for one in column #3 (a mini student-teacher problem).

This explanation of FractalNet dynamics parallels what, in concurrent work, Greff et al. (2017) claim for ResNet. Specifically, Greff et al. (2017) suggest residual networks learn unrolled iterative estimation, with each layer performing a gradual refinement on its input representation. The deepest FractalNet column could behave in the same manner, with the remainder of the network acting as a scaffold for building smaller refinement steps by doubling layers from one column to the next.

These interpretations appear not to mesh with the conclusions of Veit et al. (2016), who claim that ensemble-like behavior underlies the success of ResNet. This is certainly untrue of some very deep networks, as FractalNet provides a counterexample: we can extract a single column (plain network topology) and it alone (no ensembling) performs nearly as well as the entire network. Moreover, the gradual refinement view may offer an alternative explanation for the experiments of Veit et al. (2016). If each layer makes only a small modification, removing one may look, to the subsequent portion of the network, like injecting a small amount of input noise. Perhaps noise tolerance explains the gradual performance degradation that Veit et al. (2016) observe when removing ResNet layers.

5 CONCLUSION

Our experiments with fractal networks provide strong evidence that path length is fundamental for training ultra-deep neural networks; residuals are incidental. Key is the shared characteristic of FractalNet and ResNet: large nominal network depth, but effectively shorter paths for gradient propagation during training. Fractal architectures are arguably the simplest means of satisfying this requirement, and match residual networks in experimental performance.
Fractal networks are resistant to being too deep; extra depth may slow training, but does not impair accuracy.

With drop-path, regularization of extremely deep fractal networks is intuitive and effective. Drop-path doubles as a method of enforcing speed (latency) vs. accuracy tradeoffs. For applications where fast responses have utility, we can obtain fractal networks whose partial evaluation yields good answers.

Our analysis connects the internal behavior of fractal networks with phenomena engineered into other networks. Their substructure resembles hand-crafted modules used as components in prior work. Their training evolution may emulate deep supervision and student-teacher learning.

ACKNOWLEDGMENTS

We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.
r1tPX1rVg
Unsatisfactory experiments and restrictively large number of parameters
5: Marginally below acceptance threshold
This paper proposes a new architecture that does not explicitly use residuals but constructs an architecture composed of networks with fractal structure by using expand and join operations. Using the fractal architecture, the authors argue and try to demonstrate that large nominal network depth with many short paths is the key to training "ultra-deep" networks, while residuals are incidental. The main bottleneck of this paper is that the number of parameters needed for FractalNet is significantly higher than the baselines, which makes it hard to scale to "ultra-deep" networks. The authors replied that Wide ResNets also require many parameters, but this is not the case for ResNet and other ResNet variants. ResNet and ResNet with stochastic depth scale to a depth of 110 with 1.7M parameters and to a depth of 1202 with 10.2M parameters, which is much less than the number of parameters for depths of 20 and 40 in Table 1 (Huang et al., 2016a). It is not clear whether FractalNet can perform better at these depths with a reasonable computation. The authors report fewer parameters for 40 layers, but this scaling trick is not validated for other depths, including depth 20 in Table 1. On the other hand, the number of parameters for 40 layers with the scaling trick is clearly still large compared to most of the baselines. The unsatisfactory comparison to these baselines makes the claims of the authors unconvincing. The authors also claim that drop-path provides an improvement compared to the layer-dropping procedure in Huang et al., 2016b; however, the results show that the empirical gain of this specific regularization disappears when well-known data augmentation techniques are applied. Therefore the empirical effectiveness of drop-path is not convincing either. DenseNets (Huang et al., 2016a) should also be included in the comparison, since they outperform most of the state-of-the-art ResNets on both CIFAR-10 and ImageNet and, more importantly, outperform the proposed FractalNet significantly while requiring significantly less computation. Table 1 has ResNet variants as baselines; however, Table 2 has only ResNet. Therefore the ImageNet comparison only shows that one can run FractalNet on ImageNet and perform comparably well to ResNet, which is not a satisfactory result given the improvements of other baselines over ResNet. In addition, there is no improvement in the SVHN dataset results, and this is not discussed in the empirical analysis. Also, the authors give a list of some improvements over Inception (Szegedy et al., 2015), but again these intuitive claims about the effectiveness of these changes are not supported with any empirical analysis. Although the paper attempts to explore many interesting intuitive directions using the proposed architecture, the empirical results do not support the given claims, and the large number of parameters makes the model restrictive in practice; hence the contribution does not seem to be significant. Pros: - Provides an interesting architecture compared to ResNet and its variants and investigates the differences to residual networks, which can stimulate other promising analysis. Cons: - The number of parameters is very large compared to baselines that can reach even much higher depths with smaller numbers of parameters. - The claims are intuitive but not supported well with empirical evidence. - Path regularization does not yield improvement when data augmentation is used. - The empirical results do not show whether the method is promising for "ultra-deep" networks.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S1VaB4cex
ICLR.cc/2017/conference
2017
FractalNet: Ultra-Deep Neural Networks without Residuals
["Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich"]
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.
["neural networks", "fractal networks", "fractalnet", "residuals fractalnet", "residuals", "design strategy", "neural network", "application", "simple expansion rule", "deep networks"]
ABSTRACT
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.

1 INTRODUCTION
Residual networks (He et al., 2016a), or ResNets, lead a recent and dramatic increase in both depth and accuracy of convolutional neural networks, facilitated by constraining the network to learn residuals. ResNet variants (He et al., 2016a;b; Huang et al., 2016b) and related architectures (Srivastava et al., 2015) employ the common technique of initializing and anchoring, via a pass-through channel, a network to the identity function. Training now differs in two respects. First, the objective changes to learning residual outputs, rather than unreferenced absolute mappings. Second, these networks exhibit a type of deep supervision (Lee et al., 2014), as near-identity layers effectively reduce distance to the loss. He et al. (2016a) speculate that the former, the residual formulation itself, is crucial.

We show otherwise, by constructing a competitive extremely deep architecture that does not rely on residuals. Our design principle is pure enough to communicate in a single word, fractal, and a simple diagram (Figure 1). Yet, fractal networks implicitly recapitulate many properties hard-wired into previous successful architectures. Deep supervision not only arises automatically, but also drives a type of student-teacher learning (Ba & Caruana, 2014; Urban et al., 2017) internal to the network. Modular building blocks of other designs (Szegedy et al., 2015; Liao & Carneiro, 2015) resemble special cases of a fractal network's nested substructure.

For fractal networks, simplicity of training mirrors simplicity of design. A single loss, attached to the final layer, suffices to drive internal behavior mimicking deep supervision. Parameters are randomly initialized. As they contain subnetworks of many depths, fractal networks are robust to choice of overall depth; make them deep enough and training will carve out a useful assembly of subnetworks.

The entirety of emergent behavior resulting from a fractal design may erode the need for recent engineering tricks intended to achieve similar effects. These tricks include residual functional forms with identity initialization, manual deep supervision, hand-crafted architectural modules, and student-teacher training regimes. Section 2 reviews this large body of related techniques.
Hybrid designs could certainly integrate any of them with a fractal architecture; we leave open the question of the degree to which such hybrids are synergistic.

[Figure 1 diagram: the fractal expansion rule z -> f_C(z) and a five-block network; layer key: convolution, join, pool, prediction.]
Figure 1: Fractal architecture. Left: A simple expansion rule generates a fractal architecture with C intertwined columns. The base case, f_1(z), has a single layer of the chosen type (e.g. convolutional) between input and output. Join layers compute element-wise mean. Right: Deep convolutional networks periodically reduce spatial resolution via pooling. A fractal version uses f_C as a building block between pooling layers. Stacking B such blocks yields a network whose total depth, measured in terms of convolution layers, is B · 2^(C−1). This example has depth 40 (B = 5, C = 4).

Our main contribution is twofold:
- We introduce FractalNet, the first simple alternative to ResNet. FractalNet shows that explicit residual learning is not a requirement for building ultra-deep neural networks.
- Through analysis and experiments, we elucidate connections between FractalNet and an array of phenomena engineered into previous deep network designs.

As an additional contribution, we develop drop-path, a novel regularization protocol for ultra-deep fractal networks. Without data augmentation, fractal networks, trained with drop-path and dropout (Hinton et al., 2012), exceed the performance of residual networks regularized via stochastic depth (Huang et al., 2016b). Though, like stochastic depth, it randomly removes macro-scale components, drop-path further exploits our fractal structure in choosing which components to disable.

Drop-path constitutes not only a regularization strategy, but also provides means of optionally imparting fractal networks with anytime behavior. A particular schedule of dropped paths during learning prevents subnetworks of different depths from co-adapting. As a consequence, both shallow and deep subnetworks must individually produce correct output. Querying a shallow subnetwork thus yields a quick and moderately accurate result in advance of completion of the full network.

Section 3 elaborates the technical details of fractal networks and drop-path. Section 4 provides experimental comparisons to residual networks across the CIFAR-10, CIFAR-100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011), and ImageNet (Deng et al., 2009) datasets. We also evaluate regularization and data augmentation strategies, investigate subnetwork student-teacher behavior during training, and benchmark anytime networks obtained using drop-path. Section 5 provides synthesis. By virtue of encapsulating many known, yet seemingly distinct, design principles, self-similar structure may materialize as a fundamental component of neural architectures.

2 RELATED WORK
Deepening feed-forward neural networks has generally returned dividends in performance. A striking example within the computer vision community is the improvement on the ImageNet (Deng et al., 2009) classification task when transitioning from AlexNet (Krizhevsky et al., 2012) to VGG (Simonyan & Zisserman, 2015) to GoogLeNet (Szegedy et al., 2015) to ResNet (He et al., 2016a). Unfortunately, greater depth also makes training more challenging, at least when employing a first-order optimization method with randomly initialized layers.
As the network grows deeper and more non-linear, the linear approximation of a gradient step becomes increasingly inappropriate. Desire to overcome these difficulties drives research on both optimization techniques and network architectures.

On the optimization side, much recent work yields improvements. To prevent vanishing gradients, ReLU activation functions now widely replace sigmoid and tanh units (Nair & Hinton, 2010). This subject remains an area of active inquiry, with various tweaks on ReLUs, e.g. PReLUs (He et al., 2015) and ELUs (Clevert et al., 2016). Even with ReLUs, employing batch normalization (Ioffe & Szegedy, 2015) speeds training by reducing internal covariate shift. Good initialization can also ameliorate this problem (Glorot & Bengio, 2010; Mishkin & Matas, 2016). Path-SGD (Neyshabur et al., 2015) offers an alternative normalization scheme. Progress in optimization is somewhat orthogonal to our architectural focus, with the expectation that advances in either are ripe for combination.

Notable ideas in architecture reach back to skip connections, the earliest example of a nontrivial routing pattern within a neural network. Recent work further elaborates upon them (Maire et al., 2014; Hariharan et al., 2015). Highway networks (Srivastava et al., 2015) and ResNet (He et al., 2016a;b) offer additional twists in the form of parameterized pass-through and gating. In work subsequent to our own, Huang et al. (2016a) investigate a ResNet variant with explicit skip connections. These methods share distinction as the only other designs demonstrated to scale to hundreds of layers and beyond. ResNet's building block uses the identity map as an anchor point and explicitly parameterizes an additive correction term (the residual). Identity initialization also appears in the context of recurrent networks (Le et al., 2015). A tendency of ResNet and highway networks to fall back to the identity map may make their effective depth much smaller than their nominal depth.

Some prior results hint at what we experimentally demonstrate in Section 4. Namely, reduction of effective depth is key to training extremely deep networks; residuals are incidental. Huang et al. (2016b) provide one clue in their work on stochastic depth: randomly dropping layers from ResNet during training, thereby shrinking network depth by a constant factor, provides additional performance benefit. We build upon this intuition through drop-path, which shrinks depth much more drastically.

The success of deep supervision (Lee et al., 2014) provides another clue that effective depth is crucial. Here, an auxiliary loss, forked off mid-level layers, introduces a shorter path during backpropagation. The layer at the fork receives two gradients, originating from the main loss and the auxiliary loss, that are added together. Deep supervision is now common, being adopted, for example, by GoogLeNet (Szegedy et al., 2015). However, irrelevance of the auxiliary loss at test time introduces the drawback of having a discrepancy between the actual objective and that used for training.

Exploration of the student-teacher paradigm (Ba & Caruana, 2014) illuminates the potential for interplay between networks of different depth. In the model compression scenario, a deeper network (previously trained) guides and improves the learning of a shallower and faster student network (Ba & Caruana, 2014; Urban et al., 2017). This is accomplished by feeding unlabeled data through the teacher and having the student mimic the teacher's soft output predictions.
FitNets (Romero et al., 2015) explicitly couple students and teachers, forcing mimic behavior across several intermediate points in the network. Our fractal networks capture yet another alternative, in the form of implicit coupling, with the potential for bidirectional information flow between shallow and deep subnetworks.

Widening networks, by using larger modules in place of individual layers, has also produced performance gains. For example, an Inception module (Szegedy et al., 2015) concatenates results of convolutional layers of different receptive field size. Stacking these modules forms the GoogLeNet architecture. Liao & Carneiro (2015) employ a variant with maxout in place of concatenation. Figure 1 makes apparent our connection with such work. As a fractal network deepens, it also widens. Moreover, note that stacking two 2D convolutional layers with the same spatial receptive field (e.g. 3×3) achieves a larger (5×5) receptive field. A horizontal cross-section of a fractal network is reminiscent of an Inception module, except with additional joins due to recursive structure.

3 FRACTAL NETWORKS
We begin with a more formal presentation of the ideas sketched in Figure 1. Convolutional neural networks serve as our running example and, in the subsequent section, our experimental platform. However, it is worth emphasizing that our framework is more general. In principle, convolutional layers in Figure 1 could be replaced by a different layer type, or even a custom-designed module or subnetwork, in order to generate other fractal architectures.

Let C denote the index of the truncated fractal f_C(·). Our network's structure, connections and layer types, is defined by f_C(·). A network consisting of a single convolutional layer is the base case:

    f_1(z) = conv(z)    (1)

We define successive fractals recursively:

    f_{C+1}(z) = [(f_C ∘ f_C)(z)] ⊕ [conv(z)]    (2)

where ∘ denotes composition and ⊕ a join operation. When drawn in the style of Figure 1, C corresponds to the number of columns, or width, of network f_C(·). Depth, defined to be the number of conv layers on the longest path between input and output, scales as 2^(C−1). Convolutional networks for classification typically intersperse pooling layers. We achieve the same by using f_C(·) as a building block and stacking it with subsequent pooling layers B times, yielding total depth B · 2^(C−1).

The join operation ⊕ merges two feature blobs into one. Here, a blob is the result of a conv layer: a tensor holding activations for a fixed number of channels over a spatial domain. The channel count corresponds to the size of the filter set in the preceding conv layer. As the fractal is expanded, we collapse neighboring joins into a single join layer which spans multiple columns, as shown on the right side of Figure 1. The join layer merges all of its input feature blobs into a single output blob.

Several choices seem reasonable for the action of a join layer, including concatenation and addition. We instantiate each join to compute the element-wise mean of its inputs. This is appropriate for convolutional networks in which channel count is set the same for all conv layers within a fractal block. Averaging might appear similar to ResNet's addition operation, but there are critical differences:
- ResNet makes clear distinction between pass-through and residual signals. In FractalNet, no signal is privileged. Every input to a join layer is the output of an immediately preceding conv layer. The network structure alone cannot identify any as being primary.
- Drop-path regularization, as described next in Section 3.1, forces each input to a join to be individually reliable. This reduces the reward for even implicitly learning to allocate part of one signal to act as a residual for another.
- Experiments show that we can extract high-performance subnetworks consisting of a single column (Section 4.2). Such a subnetwork is effectively devoid of joins, as only a single path is active throughout. They produce no signal to which a residual could be added.

Together, these properties ensure that join layers are not an alternative method of residual learning.
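To make the recursion in Equations 1 and 2 concrete, the sketch below is a minimal, hypothetical PyTorch re-implementation of the expansion rule (the paper's actual implementation is in Caffe; the class and argument names here are ours). Note one simplification: this naive recursion nests pairwise means, whereas the paper collapses neighboring joins into a single layer that averages all spanned columns at once.

```python
import torch.nn as nn

class FractalBlock(nn.Module):
    """f_C from Equations 1 and 2: the base case f_1(z) = conv(z);
    f_{C+1}(z) joins (f_C o f_C)(z) with a parallel conv(z)."""

    def __init__(self, in_channels, out_channels, num_cols):
        super().__init__()
        # Shallow path: one conv-BN-ReLU unit, as in Section 3.3.
        self.shallow = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        # Deep path: two copies of f_{C-1} composed in sequence.
        self.deep = None
        if num_cols > 1:
            self.deep = nn.Sequential(
                FractalBlock(in_channels, out_channels, num_cols - 1),
                FractalBlock(out_channels, out_channels, num_cols - 1),
            )

    def forward(self, z):
        if self.deep is None:
            return self.shallow(z)                 # f_1(z) = conv(z)
        # Join: element-wise mean of the two paths.
        return 0.5 * (self.shallow(z) + self.deep(z))
```

The longest path through such a block contains 2^(C−1) conv layers, so a block with num_cols = 4 is 8 convs deep, and stacking B = 5 of them yields the depth-40 network of Figure 1.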
3.1 REGULARIZATION VIA DROP-PATH
Dropout (Hinton et al., 2012) and drop-connect (Wan et al., 2013) modify interactions between sequential network layers in order to discourage co-adaptation. Since fractal networks contain additional macro-scale structure, we propose to complement these techniques with an analogous coarse-scale regularization scheme.

Figure 2 illustrates drop-path. Just as dropout prevents co-adaptation of activations, drop-path prevents co-adaptation of parallel paths by randomly dropping operands of the join layers. This discourages the network from using one input path as an anchor and another as a corrective term (a configuration that, if not prevented, is prone to overfitting). We consider two sampling strategies:
- Local: a join drops each input with fixed probability, but we make sure at least one survives.
- Global: a single path is selected for the entire network. We restrict this path to be a single column, thereby promoting individual columns as independently strong predictors.

Figure 2: Drop-path. A fractal network block functions with some connections between layers disabled, provided some path from input to output is still available. Drop-path guarantees at least one such path, while sampling a subnetwork with many other paths disabled. During training, presenting a different active subnetwork to each mini-batch prevents co-adaptation of parallel paths. A global sampling strategy returns a single column as a subnetwork. Alternating it with local sampling encourages the development of individual columns as performant stand-alone subnetworks.

As with dropout, signals may need appropriate rescaling. With element-wise means, this is trivial; each join computes the mean of only its active inputs.

In experiments, we train with dropout and a mixture model of 50% local and 50% global sampling for drop-path. We sample a new subnetwork each mini-batch. With sufficient memory, we can simultaneously evaluate one local sample and all global samples for each mini-batch by keeping separate networks and tying them together via weight sharing.

While fractal connectivity permits the use of paths of any length, global drop-path forces the use of many paths whose lengths differ by orders of magnitude (powers of 2). The subnetworks sampled by drop-path thus exhibit large structural diversity. This property stands in contrast to stochastic depth regularization of ResNet, which, by virtue of using a fixed drop probability for each layer in a chain, samples subnetworks with a concentrated depth distribution (Huang et al., 2016b).

Global drop-path serves not only as a regularizer, but also as a diagnostic tool. Monitoring performance of individual columns provides insight into both the network and training mechanisms, as Section 4.3 discusses in more detail. Individually strong columns of various depths also give users choices in the trade-off between speed (shallow) and accuracy (deep).
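A drop-path join might be sketched as follows (again a hypothetical re-implementation rather than the authors' Caffe code; the function and argument names are ours). Each join averages only its surviving inputs, which handles the rescaling noted above for free; mapping the network-wide column choice of global sampling onto a particular join's input index is bookkeeping omitted here.

```python
import random
import torch

def drop_path_join(inputs, local_drop_prob=0.15, global_keep=None,
                   training=True):
    """Element-wise mean over the active inputs of a join layer.

    inputs:          list of tensors, one per incoming path.
    local_drop_prob: per-input drop rate for local sampling (paper: 15%).
    global_keep:     if not None, index of the single input kept alive
                     (global sampling selects one column network-wide).
    """
    if not training:
        return torch.stack(inputs).mean(dim=0)   # all paths active at test time
    if global_keep is not None:
        return inputs[global_keep]               # global: one column survives
    # Local: drop each input independently, but keep at least one.
    kept = [x for x in inputs if random.random() >= local_drop_prob]
    if not kept:
        kept = [random.choice(inputs)]
    return torch.stack(kept).mean(dim=0)         # mean of active inputs only
```

Per the mixture described above, one would draw a fresh subnetwork for every mini-batch, choosing global sampling (with a random column) half of the time and local sampling otherwise.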
3.2 DATA AUGMENTATION
Data augmentation can reduce the need for regularization. ResNet demonstrates this, achieving 27.22% error rate on CIFAR-100 with augmentation compared to 44.76% without (Huang et al., 2016b). While augmentation benefits fractal networks, we show that drop-path provides highly effective regularization, allowing them to achieve competitive results even without data augmentation.

3.3 IMPLEMENTATION DETAILS
We implement FractalNet using Caffe (Jia et al., 2014). Purely for convenience, we flip the order of pool and join layers at the end of a block in Figure 1. We pool individual columns immediately before the joins spanning all columns, rather than pooling once immediately after them.

We train fractal networks using stochastic gradient descent with momentum. As now standard, we employ batch normalization together with each conv layer (convolution, batch norm, then ReLU).

Method | C100 | C100+ | C100++ | C10 | C10+ | C10++ | SVHN
Network in Network (Lin et al., 2013) | 35.68 | - | - | 10.41 | 8.81 | - | 2.35
Generalized Pooling (Lee et al., 2016) | 32.37 | - | - | 7.62 | 6.05 | - | 1.69
Recurrent CNN (Liang & Hu, 2015) | 31.75 | - | - | 8.69 | 7.09 | - | 1.77
Multi-scale (Liao & Carneiro, 2015) | 27.56 | - | - | 6.87 | - | - | 1.76
FitNet (Romero et al., 2015) | - | 35.04 | - | - | 8.39 | - | 2.42
Deeply Supervised (Lee et al., 2014) | - | 34.57 | - | 9.69 | 7.97 | - | 1.92
All-CNN (Springenberg et al., 2014) | - | 33.71 | - | 9.08 | 7.25 | 4.41 | -
Highway Net (Srivastava et al., 2015) | - | 32.39 | - | - | 7.72 | - | -
ELU (Clevert et al., 2016) | - | 24.28 | - | - | 6.55 | - | -
Scalable BO (Snoek et al., 2015) | - | - | 27.04 | - | - | 6.37 | 1.77
Fractional Max-Pool (Graham, 2014) | - | - | 26.32 | - | - | 3.47 | -
FitResNet (Mishkin & Matas, 2016) | - | 27.66 | - | - | 5.84 | - | -
ResNet (He et al., 2016a) | - | - | - | - | 6.61 | - | -
ResNet by (Huang et al., 2016b) | 44.76 | 27.22 | - | 13.63 | 6.41 | - | 2.01
Stochastic Depth (Huang et al., 2016b) | 37.80 | 24.58 | - | 11.66 | 5.23 | - | 1.75
Identity Mapping (He et al., 2016b) | - | 22.68 | - | - | 4.69 | - | -
ResNet in ResNet (Targ et al., 2016) | - | 22.90 | - | - | 5.01 | - | -
Wide (Zagoruyko & Komodakis, 2016) | - | 20.50 | - | - | 4.17 | - | -
DenseNet-BC (Huang et al., 2016a) [1] | 19.64 | 17.60 | - | 5.19 | 3.62 | - | 1.74
FractalNet (20 layers, 38.6M params) | 35.34 | 23.30 | 22.85 | 10.18 | 5.22 | 5.11 | 2.01
+ drop-path + dropout | 28.20 | 23.73 | 23.36 | 7.33 | 4.60 | 4.59 | 1.87
↳ deepest column alone | 29.05 | 24.32 | 23.60 | 7.27 | 4.68 | 4.63 | 1.89
FractalNet (40 layers, 22.9M params) [2] | - | 22.49 | 21.49 | - | 5.24 | 5.21 | -

Table 1: CIFAR-100/CIFAR-10/SVHN. We compare test error (%) with other leading methods, trained with either no data augmentation, translation/mirroring (+), or more substantial augmentation (++). Our main point of comparison is ResNet. We closely match its benchmark results using data augmentation, and outperform it by large margins without data augmentation. Training with drop-path, we can extract from FractalNet single-column (plain) networks that are highly competitive.

4 EXPERIMENTS
The CIFAR, SVHN, and ImageNet datasets serve as testbeds for comparison to prior work and analysis of FractalNet's internal behavior. We evaluate performance on the standard classification task associated with each dataset. For CIFAR and SVHN, which consist of 32×32 images, we set our fractal network to have 5 blocks (B = 5) with 2×2 non-overlapping max-pooling and subsampling applied after each. This reduces the input 32×32 spatial resolution to 1×1 over the course of the entire network. A softmax prediction layer attaches at the end of the network. Unless otherwise noted, we set the number of filter channels within blocks 1 through 5 as (64, 128, 256, 512, 512), mostly matching the convention of doubling the number of channels after halving spatial resolution.
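Continuing the hypothetical PyTorch sketch from Section 3 (and reusing its FractalBlock class), the CIFAR/SVHN configuration described above might be assembled as follows; the helper name and defaults are ours:

```python
import torch.nn as nn

def build_fractalnet_cifar(num_cols=3, num_classes=100):
    """Five fractal blocks with channels (64, 128, 256, 512, 512), each
    followed by 2x2 non-overlapping max-pooling: 32x32 input -> 1x1."""
    widths = (64, 128, 256, 512, 512)
    layers, in_channels = [], 3
    for out_channels in widths:
        layers.append(FractalBlock(in_channels, out_channels, num_cols))
        layers.append(nn.MaxPool2d(kernel_size=2))   # halves resolution
        in_channels = out_channels
    layers += [nn.Flatten(), nn.Linear(widths[-1], num_classes)]
    return nn.Sequential(*layers)                    # softmax via the loss
```

With num_cols = 3 this gives the depth-20 network of Table 1 (5 · 2^2 conv layers); num_cols = 4 gives depth 40.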
For ImageNet, we choose a fractal architecture to facilitate direct comparison with the 34-layer ResNet of He et al. (2016a). We use the same first and last layer as ResNet-34, but change the middle of the network to consist of 4 blocks (B = 4), each of 8 layers (C = 4 columns). We use a filter channel progression of (128, 256, 512, 1024) in blocks 1 through 4.

4.1 TRAINING
For experiments using dropout, we fix drop rate per block at (0%, 10%, 20%, 30%, 40%), similar to Clevert et al. (2016). Local drop-path uses 15% drop rate across the entire network.

[1] Densely connected networks (DenseNets) are concurrent work, appearing subsequent to our original arXiv paper on FractalNet. A variant of residual networks, they swap addition for concatenation in the residual functional form. We report performance of their 250-layer DenseNet-BC network with growth rate k = 24.
[2] This deeper (4 column) FractalNet has fewer parameters. We vary column width: (128, 64, 32, 16) channels across columns initially, doubling each block except the last. A linear projection temporarily widens thinner columns before joins. As in Iandola et al. (2016), we switch to a mix of 1×1 and 3×3 convolutional filters.

Method | Top-1 (%) | Top-5 (%)
VGG-16 | 28.07 | 9.33
ResNet-34 C | 24.19 | 7.40
FractalNet-34 | 24.12 | 7.39

Table 2: ImageNet (validation set, 10-crop).

Cols. | Depth | Params. | Error (%)
1 | 5 | 0.3M | 37.32
2 | 10 | 0.8M | 30.71
3 | 20 | 2.1M | 27.69
4 | 40 | 4.8M | 27.38
5 | 80 | 10.2M | 26.46
6 | 160 | 21.1M | 27.38

Table 3: Ultra-deep fractal networks (CIFAR-100++). Increasing depth greatly improves accuracy until eventual diminishing returns. Contrast with plain networks, which are not trainable if made too deep (Table 4).

Model | Depth | Train Loss | Error (%)
Plain | 5 | 0.786 | 36.62
Plain | 10 | 0.159 | 32.47
Plain | 20 | 0.037 | 31.31
Plain | 40 | 0.580 | 38.84
Fractal Col #1 | 5 | 0.677 | 37.23
Fractal Col #2 | 10 | 0.141 | 32.85
Fractal Col #3 | 20 | 0.029 | 31.31
Fractal Col #4 | 40 | 0.016 | 31.75
Fractal Full | 40 | 0.015 | 27.40

Table 4: Fractal structure as a training apparatus (CIFAR-100++). Plain networks perform well if moderately deep, but exhibit worse convergence during training if instantiated with great depth. However, as a column trained within, and then extracted from, a fractal network with mixed drop-path, we recover a plain network that overcomes such depth limitation (possibly due to a student-teacher effect).

We run for 400 epochs on CIFAR, 20 epochs on SVHN, and 70 epochs on ImageNet. Our learning rate starts at 0.02 (for ImageNet, 0.001) and we train using stochastic gradient descent with batch size 100 (for ImageNet, 32) and momentum 0.9. For CIFAR/SVHN, we drop the learning rate by a factor of 10 whenever the number of remaining epochs halves. For ImageNet, we drop by a factor of 10 at epochs 50 and 65. We use Xavier initialization (Glorot & Bengio, 2010).
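Read literally, the CIFAR/SVHN schedule drops the learning rate at epochs 200, 300, 350, and so on for a 400-epoch run. A small helper expressing this reading (a hypothetical sketch, not released training code):

```python
def learning_rate(epoch, base_lr=0.02, total_epochs=400):
    """Drop the LR by 10x each time the remaining epoch budget halves."""
    lr = base_lr
    boundary = total_epochs // 2          # first drop at the halfway point
    while epoch >= boundary:
        lr /= 10.0
        remaining = total_epochs - boundary
        if remaining // 2 == 0:           # the budget can no longer halve
            break
        boundary += remaining // 2        # next halving point
    return lr

assert learning_rate(0) == 0.02
assert abs(learning_rate(250) - 0.002) < 1e-9   # one drop, at epoch 200
```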
A widely employed (Lin et al., 2013; Clevert et al., 2016; Srivastava et al., 2015; He et al., 2016a;b; Huang et al., 2016b; Targ et al., 2016) scheme for data augmentation on CIFAR consists of only horizontal mirroring and translation (uniform offsets in [−4, 4]), with images zero-padded where needed after mean subtraction. We denote results achieved using no more than this degree of augmentation by appending a "+" to the dataset name (e.g. CIFAR-100+). A "++" marks results reliant on more data augmentation; here exact schemes may vary. Our entry in this category is modest and simply changes the zero-padding to reflect-padding.

4.2 RESULTS
Table 1 compares performance of FractalNet on CIFAR and SVHN with competing methods. FractalNet (depth 20) outperforms the original ResNet across the board. With data augmentation, our CIFAR-100 accuracy is close to that of the best ResNet variants. With neither augmentation nor regularization, FractalNet's performance on CIFAR is superior to both ResNet and ResNet with stochastic depth, suggesting that FractalNet may be less prone to overfitting. Most methods perform similarly on SVHN. Increasing depth to 40, while borrowing some parameter reduction tricks (Iandola et al., 2016), reveals FractalNet's performance to be consistent across a range of configuration choices.

Experiments without data augmentation highlight the power of drop-path regularization. On CIFAR-100, drop-path reduces FractalNet's error rate from 35.34% to 28.20%. Unregularized ResNet is far behind (44.76%) and ResNet with stochastic depth (37.80%) does not catch up to our unregularized starting point of 35.34%. CIFAR-10 mirrors this story. With data augmentation, drop-path provides a boost (CIFAR-10), or does not significantly influence FractalNet's performance (CIFAR-100).

Note that the performance of the deepest column of the fractal network is close to that of the full network (statistically equivalent on CIFAR-10). This suggests that the fractal structure may be more important as a learning framework than as a final model architecture.

Table 2 shows that FractalNet scales to ImageNet, matching ResNet (He et al., 2016a) at equal depth. Note that, concurrent with our work, refinements to the residual network paradigm further improve the state-of-the-art on ImageNet. Wide residual networks (Zagoruyko & Komodakis, 2016) of 34 layers reduce single-crop Top-1 and Top-5 validation error by approximately 2% and 1%, respectively, over ResNet-34 by doubling feature channels in each layer. DenseNets (Huang et al., 2016a) substantially improve performance by building residual blocks that concatenate rather than add feature channels.

Table 3 demonstrates that FractalNet resists performance degradation as we increase C to obtain extremely deep networks (160 layers for C = 6). Scores in this table are not comparable to those in Table 1.
For time and memory efficiency, we reduced block-wise feature channels to (16, 32, 64, 128, 128) and the batch size to 50 for the supporting experiments in Tables 3 and 4.

Table 4 provides a baseline showing that training of plain deep networks begins to degrade by the time their depth reaches 40 layers. In our experience, a plain 160-layer network completely fails to converge. This table also highlights the ability to use FractalNet and drop-path as an engine for extracting trained networks (columns) with the same topology as plain networks, but much higher test performance.

Figure 3: Implicit deep supervision. Left: Evolution of loss for plain networks of depth 5, 10, 20 and 40 trained on CIFAR-100. Training becomes increasingly difficult for deeper networks. At 40 layers, we are unable to train the network satisfactorily. Right: We train a 4-column fractal network with mixed drop-path, monitoring its loss as well as the losses of its four subnetworks corresponding to individual columns of the same depth as the plain networks. As the 20-layer subnetwork starts to stabilize, drop-path puts pressure on the 40-layer column to adapt, with the rest of the network as its teacher. This explains the elbow-shaped learning curve for Col #4 that occurs around 25 epochs.

4.3 INTROSPECTION
With Figure 3, we examine the evolution of a 40-layer FractalNet during training. Tracking columns individually (recording their losses when run as stand-alone networks), we observe that the 40-layer column initially improves slowly, but picks up once the loss of the rest of the network begins to stabilize. Contrast with a plain 40-layer network trained alone (dashed blue line), which never makes fast progress. The column has the same initial plateau, but subsequently improves after 25 epochs, producing a loss curve uncharacteristic of plain networks.

We hypothesize that the fractal structure triggers effects akin to deep supervision and lateral student-teacher information flow. Column #4 joins with column #3 every other layer, and in every fourth layer this join involves no other columns. Once the fractal network partially relies on the signal going through column #3, drop-path puts pressure on column #4 to produce a replacement signal when column #3 is dropped. This task has constrained scope. A particular drop only requires two consecutive layers in column #4 to substitute for one in column #3 (a mini student-teacher problem).

This explanation of FractalNet dynamics parallels what, in concurrent work, Greff et al. (2017) claim for ResNet. Specifically, Greff et al. (2017) suggest residual networks learn unrolled iterative estimation, with each layer performing a gradual refinement on its input representation. The deepest FractalNet column could behave in the same manner, with the remainder of the network acting as a scaffold for building smaller refinement steps by doubling layers from one column to the next.

These interpretations appear not to mesh with the conclusions of Veit et al. (2016), who claim that ensemble-like behavior underlies the success of ResNet. This is certainly untrue of some very deep networks, as FractalNet provides a counterexample: we can extract a single column (plain network topology) and it alone (no ensembling) performs nearly as well as the entire network. Moreover, the gradual refinement view may offer an alternative explanation for the experiments of Veit et al. (2016). If each layer makes only a small modification, removing one may look, to the subsequent portion of the network, like injecting a small amount of input noise. Perhaps noise tolerance explains the gradual performance degradation that Veit et al. (2016) observe when removing ResNet layers.

5 CONCLUSION
Our experiments with fractal networks provide strong evidence that path length is fundamental for training ultra-deep neural networks; residuals are incidental. Key is the shared characteristic of FractalNet and ResNet: large nominal network depth, but effectively shorter paths for gradient propagation during training. Fractal architectures are arguably the simplest means of satisfying this requirement, and match residual networks in experimental performance.
Fractal networks are resistant to being too deep; extra depth may slow training, but does not impair accuracy.

With drop-path, regularization of extremely deep fractal networks is intuitive and effective. Drop-path doubles as a method of enforcing speed (latency) vs. accuracy tradeoffs. For applications where fast responses have utility, we can obtain fractal networks whose partial evaluation yields good answers.

Our analysis connects the internal behavior of fractal networks with phenomena engineered into other networks. Their substructure resembles hand-crafted modules used as components in prior work. Their training evolution may emulate deep supervision and student-teacher learning.

ACKNOWLEDGMENTS
We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.
rkUFJWfNl
Unconvincing experimental comparisons
6: Marginally above acceptance threshold
This paper proposes a design principle for computation blocks in convolutional networks based on repeated application of expand and join operations, resulting in a fractal-like structure. This paper is primarily about experimental evaluation, since the objective is to show that a residual formulation is not necessary to obtain good performance, at least on some tasks. However, in my opinion the evaluations in the paper are not convincing. The primary issue is the lack of a proper baseline against which the improvements can be clearly demonstrated by making isolated changes. I understand that for this paper such a baseline is hard to construct, since it is about a novel architecture principle. This is why more effort should be put into this, so that core insights from this paper can be useful even after better-performing architectures are discovered. The number of parameters and amount of computation should be used to indicate how fair the comparisons between architectures are.
Some detailed comments:
- In the Table 1 comparisons to ResNets, the ResNets from He et al. (2016b) and Wide ResNets should be compared to FractalNet (in lieu of a proper baseline). The first outperforms FractalNet on CIFAR-100, while the second outperforms it on both datasets. The authors compare to other results without augmentation, but did not perform additional experiments without augmentation for these architectures.
- The 40-layer FractalNet should not be compared to other models unless the parameter reduction tricks are utilized for the other models as well.
- A proper comparison to Inception networks should also be performed. My guess is that the reason behind the seemingly "ad-hoc" design of Inception modules is to reduce the computational footprint of the model (which is not a central motivation of fractal nets). Since this model is directly related to the Inception module, due to its use of shorter and longer paths without shortcuts, one can easily simplify the Inception design to build a strong baseline, e.g. by converting the concatenation operation to a mean operation among equally sized convolution outputs. As an aside, note that Inception networks have already shown that residual networks are not necessary to obtain the best performance [1].
- It should be noted that residual/highway architectures do have a type of anytime property, as shown by the lesioning experiments in Srivastava et al. and Veit et al.
- The architecture-specific drop-path regularization is interesting, but it is used along with other regularizers such as dropout, batch norm and weight decay, and its benefit on its own is not clear.
Overall, it is not clear to me that the experiments clearly demonstrate the utility of the proposed architecture.
[1] Szegedy, Christian, Sergey Ioffe, and Vincent Vanhoucke. "Inception-v4, Inception-ResNet and the impact of residual connections on learning." arXiv preprint arXiv:1602.07261 (2016).
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S1VaB4cex
ICLR.cc/2017/conference
2017
FractalNet: Ultra-Deep Neural Networks without Residuals
["Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich"]
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.
["neural networks", "fractal networks", "fractalnet", "residuals fractalnet", "residuals", "design strategy", "neural network", "application", "simple expansion rule", "deep networks"]
ABSTRACTWe introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networkswhose structural layouts are precisely truncated fractals. These networks containinteracting subpaths of different lengths, but do not include any pass-through orresidual connections; every internal signal is transformed by a filter and nonlinearitybefore being seen by subsequent layers. In experiments, fractal networks matchthe excellent performance of standard residual networks on both CIFAR andImageNet classification tasks, thereby demonstrating that residual representationsmay not be fundamental to the success of extremely deep convolutional neuralnetworks. Rather, the key may be the ability to transition, during training, fromeffectively shallow to deep. We note similarities with student-teacher behavior anddevelop drop-path, a natural extension of dropout, to regularize co-adaptation ofsubpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit ananytime property: shallow subnetworks provide a quick answer, while deepersubnetworks, with higher latency, provide a more accurate answer.1 I NTRODUCTIONResidual networks (He et al., 2016a), or ResNets, lead a recent and dramatic increase in both depth andaccuracy of convolutional neural networks, facilitated by constraining the network to learn residuals.ResNet variants (He et al., 2016a;b; Huang et al., 2016b) and related architectures (Srivastava et al.,2015) employ the common technique of initializing and anchoring, via a pass-through channel, anetwork to the identity function. Training now differs in two respects. First, the objective changesto learning residual outputs, rather than unreferenced absolute mappings. Second, these networksexhibit a type of deep supervision (Lee et al., 2014), as near-identity layers effectively reduce distanceto the loss. He et al. (2016a) speculate that the former, the residual formulation itself, is crucial.We show otherwise, by constructing a competitive extremely deep architecture that does not rely onresiduals. Our design principle is pure enough to communicate in a single word, fractal, and a simplediagram (Figure 1). Yet, fractal networks implicitly recapitulate many properties hard-wired intoprevious successful architectures. Deep supervision not only arises automatically, but also drives atype of student-teacher learning (Ba & Caruana, 2014; Urban et al., 2017) internal to the network.Modular building blocks of other designs (Szegedy et al., 2015; Liao & Carneiro, 2015) resemblespecial cases of a fractal network’s nested substructure.For fractal networks, simplicity of training mirrors simplicity of design. A single loss, attached to thefinal layer, suffices to drive internal behavior mimicking deep supervision. Parameters are randomlyinitialized. As they contain subnetworks of many depths, fractal networks are robust to choice ofoverall depth; make them deep enough and training will carve out a useful assembly of subnetworks.The entirety of emergent behavior resulting from a fractal design may erode the need for recentengineering tricks intended to achieve similar effects. These tricks include residual functional formswith identity initialization, manual deep supervision, hand-crafted architectural modules, and student-teacher training regimes. Section 2 reviews this large body of related techniques. 
Hybrid designscould certainly integrate any of them with a fractal architecture; we leave open the question of thedegree to which such hybrids are synergistic.1Published as a conference paper at ICLR 2017zf4pzqzf4pzqBlock 1Block 2Block 3Block 4Block 5xyFractal Expansion RuleLayer KeyConvolutionJoinPoolPredictionzfCfCpzqzfCfCfC1pzqFigure 1: Fractal architecture. Left: A simple expansion rule generates a fractal architecture withCintertwined columns. The base case, f1pzq, has a single layer of the chosen type ( e.g.convolutional)between input and output. Join layers compute element-wise mean. Right: Deep convolutionalnetworks periodically reduce spatial resolution via pooling. A fractal version uses fCas a buildingblock between pooling layers. Stacking Bsuch blocks yields a network whose total depth, measuredin terms of convolution layers, is B2C1. This example has depth 40(B5,C4).Our main contribution is twofold:We introduce FractalNet, the first simple alternative to ResNet. FractalNet shows thatexplicit residual learning is not a requirement for building ultra-deep neural networks.Through analysis and experiments, we elucidate connections between FractalNet and anarray of phenomena engineered into previous deep network designs.As an additional contribution, we develop drop-path, a novel regularization protocol for ultra-deep fractal networks. Without data augmentation, fractal networks, trained with drop-path anddropout (Hinton et al., 2012), exceed the performance of residual networks regularized via stochasticdepth (Huang et al., 2016b). Though, like stochastic depth, it randomly removes macro-scalecomponents, drop-path further exploits our fractal structure in choosing which components to disable.Drop-path constitutes not only a regularization strategy, but also provides means of optionallyimparting fractal networks with anytime behavior. A particular schedule of dropped paths duringlearning prevents subnetworks of different depths from co-adapting. As a consequence, both shallowand deep subnetworks must individually produce correct output. Querying a shallow subnetwork thusyields a quick and moderately accurate result in advance of completion of the full network.Section 3 elaborates the technical details of fractal networks and drop-path. Section 4 providesexperimental comparisons to residual networks across the CIFAR-10, CIFAR-100 (Krizhevsky,2009), SVHN (Netzer et al., 2011), and ImageNet (Deng et al., 2009) datasets. We also evaluateregularization and data augmentation strategies, investigate subnetwork student-teacher behaviorduring training, and benchmark anytime networks obtained using drop-path. Section 5 providessynthesis. By virtue of encapsulating many known, yet seemingly distinct, design principles, self-similar structure may materialize as a fundamental component of neural architectures.2Published as a conference paper at ICLR 20172 R ELATED WORKDeepening feed-forward neural networks has generally returned dividends in performance. A strikingexample within the computer vision community is the improvement on the ImageNet (Deng et al.,2009) classification task when transitioning from AlexNet (Krizhevsky et al., 2012) to VGG (Si-monyan & Zisserman, 2015) to GoogLeNet (Szegedy et al., 2015) to ResNet (He et al., 2016a).Unfortunately, greater depth also makes training more challenging, at least when employing a first-order optimization method with randomly initialized layers. 
As the network grows deeper and morenon-linear, the linear approximation of a gradient step becomes increasingly inappropriate. Desire toovercome these difficulties drives research on both optimization techniques and network architectures.On the optimization side, much recent work yields improvements. To prevent vanishing gradients,ReLU activation functions now widely replace sigmoid and tanh units (Nair & Hinton, 2010). Thissubject remains an area of active inquiry, with various tweaks on ReLUs, e.g.PReLUs (He et al., 2015),and ELUs (Clevert et al., 2016). Even with ReLUs, employing batch normalization (Ioffe & Szegedy,2015) speeds training by reducing internal covariate shift. Good initialization can also amelioratethis problem (Glorot & Bengio, 2010; Mishkin & Matas, 2016). Path-SGD (Neyshabur et al., 2015)offers an alternative normalization scheme. Progress in optimization is somewhat orthogonal to ourarchitectural focus, with the expectation that advances in either are ripe for combination.Notable ideas in architecture reach back to skip connections, the earliest example of a nontrivialrouting pattern within a neural network. Recent work further elaborates upon them (Maire et al., 2014;Hariharan et al., 2015). Highway networks (Srivastava et al., 2015) and ResNet (He et al., 2016a;b)offer additional twists in the form of parameterized pass-through and gating. In work subsequentto our own, Huang et al. (2016a) investigate a ResNet variant with explicit skip connections. Thesemethods share distinction as the only other designs demonstrated to scale to hundreds of layers andbeyond. ResNet’s building block uses the identity map as an anchor point and explicitly parameterizesan additive correction term (the residual). Identity initialization also appears in the context of recurrentnetworks (Le et al., 2015). A tendency of ResNet and highway networks to fall-back to the identitymap may make their effective depth much smaller than their nominal depth.Some prior results hint at what we experimentally demonstrate in Section 4. Namely, reduction ofeffective depth is key to training extremely deep networks; residuals are incidental. Huang et al.(2016b) provide one clue in their work on stochastic depth: randomly dropping layers from ResNetduring training, thereby shrinking network depth by a constant factor, provides additional performancebenefit. We build upon this intuition through drop-path, which shrinks depth much more drastically.The success of deep supervision (Lee et al., 2014) provides another clue that effective depth is crucial.Here, an auxiliary loss, forked off mid-level layers, introduces a shorter path during backpropagation.The layer at the fork receives two gradients, originating from the main loss and the auxiliaryloss, that are added together. Deep supervision is now common, being adopted, for example, byGoogLeNet (Szegedy et al., 2015). However, irrelevance of the auxiliary loss at test time introducesthe drawback of having a discrepancy between the actual objective and that used for training.Exploration of the student-teacher paradigm (Ba & Caruana, 2014) illuminates the potential forinterplay between networks of different depth. In the model compression scenario, a deeper network(previously trained) guides and improves the learning of a shallower and faster student network (Ba& Caruana, 2014; Urban et al., 2017). This is accomplished by feeding unlabeled data through theteacher and having the student mimic the teacher’s soft output predictions. 
FitNets (Romero et al.,2015) explicitly couple students and teachers, forcing mimic behavior across several intermediatepoints in the network. Our fractal networks capture yet another alternative, in the form of implicitcoupling, with the potential for bidirectional information flow between shallow and deep subnetworks.Widening networks, by using larger modules in place of individual layers, has also produced per-formance gains. For example, an Inception module (Szegedy et al., 2015) concatenates results ofconvolutional layers of different receptive field size. Stacking these modules forms the GoogLeNet ar-chitecture. Liao & Carneiro (2015) employ a variant with maxout in place of concatenation. Figure 1makes apparent our connection with such work. As a fractal network deepens, it also widens. More-over, note that stacking two 2D convolutional layers with the same spatial receptive field ( e.g.33)achieves a larger ( 55) receptive field. A horizontal cross-section of a fractal network is reminiscentof an Inception module, except with additional joins due to recursive structure.3Published as a conference paper at ICLR 20173 F RACTAL NETWORKSWe begin with a more formal presentation of the ideas sketched in Figure 1. Convolutional neuralnetworks serve as our running example and, in the subsequent section, our experimental platform.However, it is worth emphasizing that our framework is more general. In principle, convolutionallayers in Figure 1 could be replaced by a different layer type, or even a custom-designed module orsubnetwork, in order to generate other fractal architectures.LetCdenote the index of the truncated fractal fCpq. Our network’s structure, connections and layertypes, is defined by fCpq. A network consisting of a single convolutional layer is the base case:f1pzqconvpzq (1)We define successive fractals recursively:fC1pzqrp fCfCqpzqs`r convpzqs (2)wheredenotes composition and `a join operation. When drawn in the style of Figure 1, Ccorresponds to the number of columns, or width, of network fCpq. Depth, defined to be the numberofconv layers on the longest path between input and output, scales as 2C1. Convolutional networksfor classification typically intersperse pooling layers. We achieve the same by using fCpqas abuilding block and stacking it with subsequent pooling layers Btimes, yielding total depth B2C1.The join operation `merges two feature blobs into one. Here, a blob is the result of a conv layer: atensor holding activations for a fixed number of channels over a spatial domain. The channel countcorresponds to the size of the filter set in the preceding conv layer. As the fractal is expanded, wecollapse neighboring joins into a single joinlayer which spans multiple columns, as shown on theright side of Figure 1. The join layer merges all of its input feature blobs into a single output blob.Several choices seem reasonable for the action of a join layer, including concatenation and addition.We instantiate each join to compute the element-wise mean of its inputs. This is appropriate forconvolutional networks in which channel count is set the same for all conv layers within a fractal block.Averaging might appear similar to ResNet’s addition operation, but there are critical differences:ResNet makes clear distinction between pass-through and residual signals. In FractalNet, nosignal is privileged. Every input to a joinlayer is the output of an immediately precedingconv layer. 
The network structure alone cannot identify any as being primary.Drop-path regularization, as described next in Section 3.1, forces each input to a join to beindividually reliable. This reduces the reward for even implicitly learning to allocate part ofone signal to act as a residual for another.Experiments show that we can extract high-performance subnetworks consisting of a singlecolumn (Section 4.2). Such a subnetwork is effectively devoid of joins, as only a single pathis active throughout. They produce no signal to which a residual could be added.Together, these properties ensure that join layers are not an alternative method of residual learning.3.1 R EGULARIZATION VIA DROP-PATHDropout (Hinton et al., 2012) and drop-connect (Wan et al., 2013) modify interactions betweensequential network layers in order to discourage co-adaptation. Since fractal networks containadditional macro-scale structure, we propose to complement these techniques with an analogouscoarse-scale regularization scheme.Figure 2 illustrates drop-path. Just as dropout prevents co-adaptation of activations, drop-pathprevents co-adaptation of parallel paths by randomly dropping operands of the join layers. Thisdiscourages the network from using one input path as an anchor and another as a corrective term (aconfiguration that, if not prevented, is prone to overfitting). We consider two sampling strategies:Local : ajoin drops each input with fixed probability, but we make sure at least one survives.Global : a single path is selected for the entire network. We restrict this path to be a singlecolumn, thereby promoting individual columns as independently strong predictors.4Published as a conference paper at ICLR 2017Iteration #1(Local)Iteration #2(Global)Iteration #3(Local)Iteration #4(Global)Figure 2: Drop-path. A fractal network block functions with some connections between layersdisabled, provided some path from input to output is still available. Drop-path guarantees at least onesuch path, while sampling a subnetwork with many other paths disabled. During training, presentinga different active subnetwork to each mini-batch prevents co-adaptation of parallel paths. A globalsampling strategy returns a single column as a subnetwork. Alternating it with local samplingencourages the development of individual columns as performant stand-alone subnetworks.As with dropout, signals may need appropriate rescaling. With element-wise means, this is trivial;each join computes the mean of only its active inputs.In experiments, we train with dropout and a mixture model of 50% local and 50% global samplingfor drop-path. We sample a new subnetwork each mini-batch. With sufficient memory, we cansimultaneously evaluate one local sample and all global samples for each mini-batch by keepingseparate networks and tying them together via weight sharing.While fractal connectivity permits the use of paths of any length, global drop-path forces the use ofmany paths whose lengths differ by orders of magnitude (powers of 2). The subnetworks sampled bydrop-path thus exhibit large structural diversity. This property stands in contrast to stochastic depthregularization of ResNet, which, by virtue of using a fixed drop probability for each layer in a chain,samples subnetworks with a concentrated depth distribution (Huang et al., 2016b).Global drop-path serves not only as a regularizer, but also as a diagnostic tool. 
Monitoring perfor-mance of individual columns provides insight into both the network and training mechanisms, asSection 4.3 discusses in more detail. Individually strong columns of various depths also give userschoices in the trade-off between speed (shallow) and accuracy (deep).3.2 D ATA AUGMENTATIONData augmentation can reduce the need for regularization. ResNet demonstrates this, achieving27.22% error rate on CIFAR-100 with augmentation compared to 44.76% without (Huang et al.,2016b). While augmentation benefits fractal networks, we show that drop-path provides highlyeffective regularization, allowing them to achieve competitive results even without data augmentation.3.3 I MPLEMENTATION DETAILSWe implement FractalNet using Caffe (Jia et al., 2014). Purely for convenience, we flip the orderof pool and join layers at the end of a block in Figure 1. We pool individual columns immediatelybefore the joins spanning all columns, rather than pooling once immediately after them.We train fractal networks using stochastic gradient descent with momentum. As now standard, weemploy batch normalization together with each conv layer (convolution, batch norm, then ReLU).5Published as a conference paper at ICLR 2017Method C100 C100+ C100++ C10 C10+ C10++ SVHNNetwork in Network (Lin et al., 2013) 35.68 - - 10.41 8.81 - 2.35Generalized Pooling (Lee et al., 2016) 32.37 - - 7.62 6.05 - 1.69Recurrent CNN (Liang & Hu, 2015) 31.75 - - 8.69 7.09 - 1.77Multi-scale (Liao & Carneiro, 2015) 27.56 - - 6.87 - - 1.76FitNet Romero et al. (2015) - 35.04 - - 8.39 - 2.42Deeply Supervised (Lee et al., 2014) - 34.57 - 9.69 7.97 - 1.92All-CNN (Springenberg et al., 2014) - 33.71 - 9.08 7.25 4.41 -Highway Net (Srivastava et al., 2015) - 32.39 - - 7.72 - -ELU (Clevert et al., 2016) - 24.28 - - 6.55 - -Scalable BO (Snoek et al., 2015) - - 27.04 - - 6.37 1.77Fractional Max-Pool (Graham, 2014) - - 26.32 - - 3.47 -FitResNet (Mishkin & Matas, 2016) - 27.66 - - 5.84 - -ResNet (He et al., 2016a) - - - - 6.61 - -ResNet by (Huang et al., 2016b) 44.76 27.22 - 13.63 6.41 - 2.01Stochastic Depth (Huang et al., 2016b) 37.80 24.58 - 11.66 5.23 - 1.75Identity Mapping (He et al., 2016b) - 22.68 - - 4.69 - -ResNet in ResNet (Targ et al., 2016) - 22.90 - - 5.01 - -Wide (Zagoruyko & Komodakis, 2016) - 20.50 - - 4.17 - -DenseNet-BC (Huang et al., 2016a)119.64 17.60 - 5.19 3.62 - 1.74FractalNet (20 layers, 38.6M params) 35.34 23.30 22.85 10.18 5.22 5.11 2.01+ drop-path + dropout 28.20 23.73 23.36 7.33 4.60 4.59 1.87ëdeepest column alone 29.05 24.32 23.60 7.27 4.68 4.63 1.89FractalNet (40 layers, 22.9M params)2- 22.49 21.49 - 5.24 5.21 -Table 1: CIFAR-100/CIFAR-10/SVHN. We compare test error (%) with other leading methods,trained with either no data augmentation, translation/mirroring (+), or more substantial augmentation(++). Our main point of comparison is ResNet. We closely match its benchmark results usingdata augmentation, and outperform it by large margins without data augmentation. Training withdrop-path, we can extract from FractalNet single-column (plain) networks that are highly competitive.4 E XPERIMENTSThe CIFAR, SVHN, and ImageNet datasets serve as testbeds for comparison to prior work andanalysis of FractalNet’s internal behavior. We evaluate performance on the standard classification taskassociated with each dataset. For CIFAR and SVHN, which consist of 3232images, we set ourfractal network to have 5blocks ( B5) with 22non-overlapping max-pooling and subsamplingapplied after each. 
This reduces the input 32×32 spatial resolution to 1×1 over the course of the entire network. A softmax prediction layer attaches at the end of the network. Unless otherwise noted, we set the number of filter channels within blocks 1 through 5 as (64, 128, 256, 512, 512), mostly matching the convention of doubling the number of channels after halving spatial resolution.

For ImageNet, we choose a fractal architecture to facilitate direct comparison with the 34-layer ResNet of He et al. (2016a). We use the same first and last layer as ResNet-34, but change the middle of the network to consist of 4 blocks (B = 4), each of 8 layers (C = 4 columns). We use a filter channel progression of (128, 256, 512, 1024) in blocks 1 through 4.

4.1 TRAINING

For experiments using dropout, we fix drop rate per block at (0%, 10%, 20%, 30%, 40%), similar to Clevert et al. (2016). Local drop-path uses 15% drop rate across the entire network.

[1] Densely connected networks (DenseNets) are concurrent work, appearing subsequent to our original arXiv paper on FractalNet. A variant of residual networks, they swap addition for concatenation in the residual functional form. We report performance of their 250-layer DenseNet-BC network with growth rate k = 24.
[2] This deeper (4 column) FractalNet has fewer parameters. We vary column width: (128, 64, 32, 16) channels across columns initially, doubling each block except the last. A linear projection temporarily widens thinner columns before joins. As in Iandola et al. (2016), we switch to a mix of 1×1 and 3×3 convolutional filters.

Table 2: ImageNet (validation set, 10-crop).
Method | Top-1 (%) | Top-5 (%)
VGG-16 | 28.07 | 9.33
ResNet-34 C | 24.19 | 7.40
FractalNet-34 | 24.12 | 7.39

Table 3: Ultra-deep fractal networks (CIFAR-100++). Increasing depth greatly improves accuracy until eventual diminishing returns. Contrast with plain networks, which are not trainable if made too deep (Table 4).
Cols. | Depth | Params. | Error (%)
1 | 5 | 0.3M | 37.32
2 | 10 | 0.8M | 30.71
3 | 20 | 2.1M | 27.69
4 | 40 | 4.8M | 27.38
5 | 80 | 10.2M | 26.46
6 | 160 | 21.1M | 27.38

Table 4: Fractal structure as a training apparatus (CIFAR-100++). Plain networks perform well if moderately deep, but exhibit worse convergence during training if instantiated with great depth. However, as a column trained within, and then extracted from, a fractal network with mixed drop-path, we recover a plain network that overcomes such depth limitation (possibly due to a student-teacher effect).
Model | Depth | Train Loss | Error (%)
Plain | 5 | 0.786 | 36.62
Plain | 10 | 0.159 | 32.47
Plain | 20 | 0.037 | 31.31
Plain | 40 | 0.580 | 38.84
Fractal Col #1 | 5 | 0.677 | 37.23
Fractal Col #2 | 10 | 0.141 | 32.85
Fractal Col #3 | 20 | 0.029 | 31.31
Fractal Col #4 | 40 | 0.016 | 31.75
Fractal Full | 40 | 0.015 | 27.40

We run for 400 epochs on CIFAR, 20 epochs on SVHN, and 70 epochs on ImageNet. Our learning rate starts at 0.02 (for ImageNet, 0.001) and we train using stochastic gradient descent with batch size 100 (for ImageNet, 32) and momentum 0.9. For CIFAR/SVHN, we drop the learning rate by a factor of 10 whenever the number of remaining epochs halves. For ImageNet, we drop by a factor of 10 at epochs 50 and 65. We use Xavier initialization (Glorot & Bengio, 2010).

A widely employed (Lin et al., 2013; Clevert et al., 2016; Srivastava et al., 2015; He et al., 2016a;b; Huang et al., 2016b; Targ et al., 2016) scheme for data augmentation on CIFAR consists of only horizontal mirroring and translation (uniform offsets in [-4, 4]), with images zero-padded where needed after mean subtraction.
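The halving rule for the learning rate can be made concrete with a short sketch. This is our own illustration of the stated rule, not code from the paper; the stopping floor of 25 remaining epochs is an assumption, chosen so that a 400-epoch run drops the rate at epochs 200, 300, 350, and 375.

    def lr_schedule(total_epochs=400, base_lr=0.02):
        # Collect the epochs at which the number of remaining epochs halves.
        boundaries, remaining = [], total_epochs
        while remaining // 2 >= 25:  # assumed floor; the paper states no explicit one
            remaining //= 2
            boundaries.append(total_epochs - remaining)
        # Each boundary crossed multiplies the learning rate by 0.1.
        return lambda epoch: base_lr * 0.1 ** sum(epoch >= b for b in boundaries)

    schedule = lr_schedule()   # schedule(0) -> 0.02, schedule(200) -> 0.002

The '+' augmentation described above also maps directly onto common library calls. Below is a minimal torchvision-style sketch, assuming mean subtraction happens separately in the input pipeline; the '++' variant anticipates the reflect-padding change described just below.

    from torchvision import transforms

    # '+' augmentation: horizontal mirroring plus translations drawn from
    # [-4, 4], implemented as 4-pixel zero padding followed by a random crop.
    augment_plus = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4, padding_mode="constant"),
    ])

    # '++' variant: identical, except zero padding becomes reflect padding.
    augment_plus_plus = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4, padding_mode="reflect"),
    ])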
We denote results achieved using no more than this degree of augmentation by appending a "+" to the dataset name (e.g. CIFAR-100+). A "++" marks results reliant on more data augmentation; here exact schemes may vary. Our entry in this category is modest and simply changes the zero-padding to reflect-padding.

4.2 RESULTS

Table 1 compares performance of FractalNet on CIFAR and SVHN with competing methods. FractalNet (depth 20) outperforms the original ResNet across the board. With data augmentation, our CIFAR-100 accuracy is close to that of the best ResNet variants. With neither augmentation nor regularization, FractalNet's performance on CIFAR is superior to both ResNet and ResNet with stochastic depth, suggesting that FractalNet may be less prone to overfitting. Most methods perform similarly on SVHN. Increasing depth to 40, while borrowing some parameter reduction tricks (Iandola et al., 2016), reveals FractalNet's performance to be consistent across a range of configuration choices.

Experiments without data augmentation highlight the power of drop-path regularization. On CIFAR-100, drop-path reduces FractalNet's error rate from 35.34% to 28.20%. Unregularized ResNet is far behind (44.76%) and ResNet with stochastic depth (37.80%) does not catch up to our unregularized starting point of 35.34%. CIFAR-10 mirrors this story. With data augmentation, drop-path provides a boost (CIFAR-10), or does not significantly influence FractalNet's performance (CIFAR-100).

Note that the performance of the deepest column of the fractal network is close to that of the full network (statistically equivalent on CIFAR-10). This suggests that the fractal structure may be more important as a learning framework than as a final model architecture.

Table 2 shows that FractalNet scales to ImageNet, matching ResNet (He et al., 2016a) at equal depth. Note that, concurrent with our work, refinements to the residual network paradigm further improve the state-of-the-art on ImageNet. Wide residual networks (Zagoruyko & Komodakis, 2016) of 34 layers reduce single-crop Top-1 and Top-5 validation error by approximately 2% and 1%, respectively, over ResNet-34 by doubling feature channels in each layer. DenseNets (Huang et al., 2016a) substantially improve performance by building residual blocks that concatenate rather than add feature channels.

Figure 3: Implicit deep supervision. Left: Evolution of loss for plain networks of depth 5, 10, 20 and 40 trained on CIFAR-100. Training becomes increasingly difficult for deeper networks. At 40 layers, we are unable to train the network satisfactorily. Right: We train a 4-column fractal network with mixed drop-path, monitoring its loss as well as the losses of its four subnetworks corresponding to individual columns of the same depth as the plain networks. As the 20-layer subnetwork starts to stabilize, drop-path puts pressure on the 40-layer column to adapt, with the rest of the network as its teacher. This explains the elbow-shaped learning curve for Col #4 that occurs around 25 epochs.

Table 3 demonstrates that FractalNet resists performance degradation as we increase C to obtain extremely deep networks (160 layers for C = 6). Scores in this table are not comparable to those in Table 1.
For time and memory efficiency, we reduced block-wise feature channels to (16, 32, 64, 128, 128) and the batch size to 50 for the supporting experiments in Tables 3 and 4.

Table 4 provides a baseline showing that training of plain deep networks begins to degrade by the time their depth reaches 40 layers. In our experience, a plain 160-layer network completely fails to converge. This table also highlights the ability to use FractalNet and drop-path as an engine for extracting trained networks (columns) with the same topology as plain networks, but much higher test performance.

4.3 INTROSPECTION

With Figure 3, we examine the evolution of a 40-layer FractalNet during training. Tracking columns individually (recording their losses when run as stand-alone networks), we observe that the 40-layer column initially improves slowly, but picks up once the loss of the rest of the network begins to stabilize. Contrast with a plain 40-layer network trained alone (dashed blue line), which never makes fast progress. The column has the same initial plateau, but subsequently improves after 25 epochs, producing a loss curve uncharacteristic of plain networks.

We hypothesize that the fractal structure triggers effects akin to deep supervision and lateral student-teacher information flow. Column #4 joins with column #3 every other layer, and in every fourth layer this join involves no other columns. Once the fractal network partially relies on the signal going through column #3, drop-path puts pressure on column #4 to produce a replacement signal when column #3 is dropped. This task has constrained scope. A particular drop only requires two consecutive layers in column #4 to substitute for one in column #3 (a mini student-teacher problem).

This explanation of FractalNet dynamics parallels what, in concurrent work, Greff et al. (2017) claim for ResNet. Specifically, Greff et al. (2017) suggest residual networks learn unrolled iterative estimation, with each layer performing a gradual refinement on its input representation. The deepest FractalNet column could behave in the same manner, with the remainder of the network acting as a scaffold for building smaller refinement steps by doubling layers from one column to the next.

These interpretations appear not to mesh with the conclusions of Veit et al. (2016), who claim that ensemble-like behavior underlies the success of ResNet. This is certainly untrue of some very deep networks, as FractalNet provides a counterexample: we can extract a single column (plain network topology) and it alone (no ensembling) performs nearly as well as the entire network. Moreover, the gradual refinement view may offer an alternative explanation for the experiments of Veit et al. (2016). If each layer makes only a small modification, removing one may look, to the subsequent portion of the network, like injecting a small amount of input noise. Perhaps noise tolerance explains the gradual performance degradation that Veit et al. (2016) observe when removing ResNet layers.

5 CONCLUSION

Our experiments with fractal networks provide strong evidence that path length is fundamental for training ultra-deep neural networks; residuals are incidental. Key is the shared characteristic of FractalNet and ResNet: large nominal network depth, but effectively shorter paths for gradient propagation during training. Fractal architectures are arguably the simplest means of satisfying this requirement, and match residual networks in experimental performance.
Fractal networks are resistant to being too deep; extra depth may slow training, but does not impair accuracy. With drop-path, regularization of extremely deep fractal networks is intuitive and effective. Drop-path doubles as a method of enforcing speed (latency) vs. accuracy tradeoffs. For applications where fast responses have utility, we can obtain fractal networks whose partial evaluation yields good answers.

Our analysis connects the internal behavior of fractal networks with phenomena engineered into other networks. Their substructure resembles hand-crafted modules used as components in prior work. Their training evolution may emulate deep supervision and student-teacher learning.

ACKNOWLEDGMENTS

We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.
HJoOt0F4g
weak comparison
6: Marginally above acceptance threshold
This paper presents a strategy for building deep neural networks via rules for expansion and merging of sub-networks.

pros:
- the idea is novel
- the approach is described clearly

cons:
- the experimental evaluation is not convincing, e.g. no improvement on SVHN
- number of parameters should be mentioned for all models for fair comparison
- the effect of drop-path seems to vanish with data augmentation
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BJ6oOfqge
ICLR.cc/2017/conference
2017
Temporal Ensembling for Semi-Supervised Learning
["Samuli Laine", "Timo Aila"]
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
["labels", "unknown labels", "training", "temporal", "learning temporal", "learning", "simple", "efficient", "deep neural networks", "setting"]
ABSTRACT

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.

1 INTRODUCTION

It has long been known that an ensemble of multiple neural networks generally yields better predictions than a single network in the ensemble. This effect has also been indirectly exploited when training a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), or stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singh et al., 2016), where training always focuses on a particular subset of the network, and thus the complete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea by forming ensemble predictions during training, using the outputs of a single network on different training epochs and under different regularization and input augmentation conditions. Our training still operates on a single network, but the predictions made on different epochs correspond to an ensemble prediction of a large number of individual sub-networks because of dropout regularization.

This ensemble prediction can be exploited for semi-supervised learning where only a small portion of training data is labeled. If we compare the ensemble prediction to the current output of the network being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of the unlabeled inputs. Therefore the labels inferred this way can be used as training targets for the unlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmentation. Indeed, without either, there would be much less reason to place confidence in whatever labels are inferred for the unlabeled training data.

We describe two ways to implement self-ensembling, the Π-model and temporal ensembling. Both approaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin. We furthermore observe that self-ensembling improves the classification accuracy in fully labeled cases as well, and provides tolerance against incorrect labels.

The recently introduced transform/stability loss of Sajjadi et al. (2016b) is based on the same principle as our work, and the Π-model can be seen as a special case of it. The Π-model can also be seen as a simplification of the Γ-model of the ladder network by Rasmus et al. (2015), a previously presented network architecture for semi-supervised learning.
Our temporal ensembling method has connections to the bootstrapping method of Reed et al. (2014) targeted for training with noisy labels.

Figure 1: Structure of the training pass in our methods. Top: Π-model. Bottom: temporal ensembling. Labels y_i are available only for the labeled inputs, and the associated cross-entropy loss component is evaluated only for those. (In both diagrams, a stochastic augmentation of input x_i passes through the network with dropout to produce z_i; a weighted sum of the cross-entropy term and the squared difference against ~z_i gives the loss, with the unsupervised term scaled by w(t).)

Algorithm 1: Π-model pseudocode.

Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs, i ∈ L
Require: w(t) = unsupervised weight ramp-up function
Require: f(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
for t in [1, num_epochs] do
    for each minibatch B do
        z_{i∈B} ← f(g(x_{i∈B}))        ▷ evaluate network outputs for augmented inputs
        ~z_{i∈B} ← f(g(x_{i∈B}))       ▷ again, with different dropout and augmentation
        loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]              ▷ supervised loss component
               + w(t) · (1/(C|B|)) Σ_{i∈B} ||z_i − ~z_i||²    ▷ unsupervised loss component
        update θ using, e.g., ADAM     ▷ update network parameters
    end for
end for
return θ

2 SELF-ENSEMBLING DURING TRAINING

We present two implementations of self-ensembling during training. The first one, the Π-model, encourages consistent network output between two realizations of the same input stimulus, under two different dropout conditions. The second method, temporal ensembling, simplifies and extends this by taking into account the network predictions over multiple previous training epochs.

We shall describe our methods in the context of traditional image classification networks. Let the training data consist of a total of N inputs, out of which M are labeled. The input stimuli, available for all training data, are denoted x_i, where i ∈ {1...N}. Let set L contain the indices of the labeled inputs, |L| = M. For every i ∈ L, we have a known correct label y_i ∈ {1...C}, where C is the number of different classes.

2.1 Π-MODEL

The structure of the Π-model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. During training, we evaluate the network for each training input x_i twice, resulting in prediction vectors z_i and ~z_i. Our loss function consists of two components. The first component is the standard cross-entropy loss, evaluated for labeled inputs only. The second component, evaluated for all inputs, penalizes different predictions for the same training input x_i by taking the mean square difference between the prediction vectors z_i and ~z_i.[1] To combine the supervised and unsupervised loss terms, we scale the latter by the time-dependent weighting function w(t). By comparing the entire output vectors z_i and ~z_i, we effectively ask the "dark knowledge" (Hinton et al., 2015) between the two evaluations to be close, which is a much stronger requirement compared to asking that only the final classification remains the same, which is what happens in traditional training.

It is important to notice that, because of dropout regularization, the network output during training is a stochastic variable. Thus two evaluations of the same input x_i under the same network weights θ yield different results. In addition, Gaussian noise and augmentations such as random translation are evaluated twice, resulting in additional variation.
The combination of these effects explains the difference between the prediction vectors z_i and ~z_i. This difference can be seen as an error in classification, given that the original input x_i was the same, and thus minimizing it is a reasonable goal.

In our implementation, the unsupervised loss weighting function w(t) ramps up, starting from zero, along a Gaussian curve during the first 80 training epochs. See Appendix A for further details about this and other training parameters. In the beginning the total loss and the learning gradients are thus dominated by the supervised loss component, i.e., the labeled data only. We have found it to be very important that the ramp-up of the unsupervised loss component is slow enough; otherwise, the network gets easily stuck in a degenerate solution where no meaningful classification of the data is obtained.

Our approach is somewhat similar to the Γ-model of the ladder network by Rasmus et al. (2015), but conceptually simpler. In the Γ-model, the comparison is done directly on network outputs, i.e., after softmax activation, and there is no auxiliary mapping between the two branches such as the learned denoising functions in the ladder network architecture. Furthermore, instead of having one "clean" and one "corrupted" branch as in the Γ-model, we apply equal augmentation and noise to the inputs for both branches.

As shown in Section 3, the Π-model combined with a good convolutional network architecture provides a significant improvement over prior art in classification accuracy.
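To ground the description, here is a minimal PyTorch-style sketch of the Π-model loss for one mini-batch, written by us as an illustration rather than taken from the authors' code. The Gaussian ramp-up follows the shape stated above (zero at the start, maximal at epoch 80); the constant 5 and the maximum weight w_max are assumptions standing in for details deferred to Appendix A.

    import math
    import torch.nn.functional as F

    def rampup(epoch, ramp_epochs=80.0, w_max=1.0):
        # Gaussian ramp-up of the unsupervised weight w(t): zero at epoch 0,
        # w_max from epoch ramp_epochs onward. Constants are assumed.
        if epoch >= ramp_epochs:
            return w_max
        return w_max * math.exp(-5.0 * (1.0 - epoch / ramp_epochs) ** 2)

    def pi_model_loss(model, augment, x, y, labeled_mask, epoch):
        # model must be in train mode so dropout makes the two passes differ.
        logits_a = model(augment(x))   # first stochastic pass
        logits_b = model(augment(x))   # second pass: new augmentation, new dropout
        z_a = F.softmax(logits_a, dim=1)
        z_b = F.softmax(logits_b, dim=1)
        # Supervised term: cross-entropy on the labeled subset of the batch
        # (assumes each batch contains at least one labeled example).
        supervised = F.cross_entropy(logits_a[labeled_mask], y[labeled_mask])
        # Unsupervised term: mean squared difference of the two predictions;
        # the default 'mean' reduction reproduces the 1/(C|B|) factor of
        # Algorithm 1.
        unsupervised = F.mse_loss(z_a, z_b)
        return supervised + rampup(epoch) * unsupervised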
2.2 TEMPORAL ENSEMBLING

Analyzing how the Π-model works, we could equally well split the evaluation of the two branches in two separate phases: first classifying the training set once without updating the weights θ, and then training the network on the same inputs under different augmentations and dropout, using the just obtained predictions as targets for the unsupervised loss component. As the training targets obtained this way are based on a single evaluation of the network, they can be expected to be noisy. Temporal ensembling alleviates this by aggregating the predictions of multiple previous network evaluations into an ensemble prediction. It also lets us evaluate the network only once during training, gaining an approximate 2x speedup over the Π-model.

The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocode in Algorithm 2. The main difference to the Π-model is that the network and augmentations are evaluated only once per input per epoch, and the target vectors ~z for the unsupervised loss component are based on prior network evaluations instead of a second evaluation of the network.

After every training epoch, the network outputs z_i are accumulated into ensemble outputs Z_i by updating Z_i ← αZ_i + (1 − α)z_i, where α is a momentum term that controls how far the ensemble reaches into training history. Because of dropout regularization and stochastic augmentation, Z thus contains a weighted average of the outputs of an ensemble of networks f from previous training epochs, with recent epochs having larger weight than distant epochs. For generating the training targets ~z, we need to correct for the startup bias in Z by dividing by the factor (1 − α^t). A similar bias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normalization (Salimans & Kingma, 2016). On the first training epoch, Z and ~z are zero as no data from previous epochs is available. For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.

[1] Squared difference gave slightly but consistently better results than cross-entropy loss in our tests.

Algorithm 2: Temporal ensembling pseudocode. Note that the updates of Z and ~z could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.

Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs, i ∈ L
Require: α = ensembling momentum, 0 ≤ α < 1
Require: w(t) = unsupervised weight ramp-up function
Require: f(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
Z ← 0_[N×C]                            ▷ initialize ensemble predictions
~z ← 0_[N×C]                           ▷ initialize target vectors
for t in [1, num_epochs] do
    for each minibatch B do
        z_{i∈B} ← f(g(x_{i∈B}, t))     ▷ evaluate network outputs for augmented inputs
        loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]              ▷ supervised loss component
               + w(t) · (1/(C|B|)) Σ_{i∈B} ||z_i − ~z_i||²    ▷ unsupervised loss component
        update θ using, e.g., ADAM     ▷ update network parameters
    end for
    Z ← αZ + (1 − α)z                  ▷ accumulate ensemble predictions
    ~z ← Z / (1 − α^t)                 ▷ construct target vectors by bias correction
end for
return θ

The benefits of temporal ensembling compared to the Π-model are twofold. First, the training is faster because the network is evaluated only once per input on each epoch. Second, the training targets ~z can be expected to be less noisy than with the Π-model. As shown in Section 3, we indeed obtain somewhat better results with temporal ensembling than with the Π-model in the same number of training epochs. The downside compared to the Π-model is the need to store auxiliary data across epochs, and the new hyperparameter α. While the matrix Z can be fairly large when the dataset contains a large number of items and categories, its elements are accessed relatively infrequently. Thus it can be stored, e.g., in a memory mapped file.

An intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z_i besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component z_{i,j}. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal & Ghahramani, 2016). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work.
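As a concrete illustration of the target maintenance in Algorithm 2, the following sketch keeps Z and the bias-corrected targets in NumPy arrays. It is our own reading of the pseudocode; the momentum value alpha = 0.6 is an assumed placeholder, since the actual hyperparameters live in Appendix A.

    import numpy as np

    class EnsembleTargets:
        # Maintains ensemble predictions Z and bias-corrected targets ~z.
        def __init__(self, n_inputs, n_classes, alpha=0.6):  # alpha assumed
            self.alpha = alpha
            self.t = 0
            self.Z = np.zeros((n_inputs, n_classes))
            self.targets = np.zeros((n_inputs, n_classes))

        def end_of_epoch(self, z):
            # z: (n_inputs, n_classes) softmax outputs gathered this epoch.
            self.t += 1
            self.Z = self.alpha * self.Z + (1.0 - self.alpha) * z
            # Startup-bias correction, as in Adam: divide by (1 - alpha^t).
            self.targets = self.Z / (1.0 - self.alpha ** self.t)

During epoch t, the unsupervised term compares each fresh prediction z_i against targets[i]; accumulating a second array of squared outputs in the same fashion would yield the per-component variance estimate mentioned in the preceding paragraph.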
3 RESULTS

Our network structure is given in Table 5, and the test setup and all training parameters are detailed in Appendix A. We test the Π-model and temporal ensembling in two image classification tasks, CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.

Although it is rarely stated explicitly, we believe that our comparison methods do not use input augmentation, i.e., are limited to dropout and other forms of permutation-invariant noise. Therefore we report the error rates without augmentation, unless explicitly stated otherwise. Given that the ability of an algorithm to extract benefit from augmentation is also an important property, we report the classification accuracy using a standard set of augmentations as well. In purely supervised training the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and random translations, while SVHN is limited to random translations. By using these same augmentations we can compare against the best fully supervised results as well. After all, the fully supervised results should indicate the upper bound of obtainable accuracy.

Table 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels). Error rate (%) with # labels.
Model | 4000 | All (50000)
Supervised-only | 35.56 ± 1.59 | 7.33 ± 0.04
  with augmentation | 34.85 ± 1.65 | 6.05 ± 0.15
Conv-Large, Γ-model (Rasmus et al., 2015) | 20.40 ± 0.47 | -
CatGAN (Springenberg, 2016) | 19.58 ± 0.58 | -
GAN of Salimans et al. (2016) | 18.63 ± 2.32 | -
Π-model | 16.55 ± 0.29 | 6.90 ± 0.07
Π-model with augmentation | 12.36 ± 0.31 | 5.56 ± 0.10
Temporal ensembling with augmentation | 12.16 ± 0.24 | 5.60 ± 0.10

Table 2: SVHN results for 500 and 1000 labels, averages of 10 runs (4 runs for all labels). Error rate (%) with # labels.
Model | 500 | 1000 | All (73257)
Supervised-only | 35.18 ± 5.61 | 20.47 ± 2.64 | 3.05 ± 0.07
  with augmentation | 31.59 ± 3.60 | 19.30 ± 3.89 | 2.88 ± 0.03
DGN (Kingma et al., 2014) | - | 36.02 ± 0.10 | -
Virtual Adversarial (Miyato et al., 2016) | - | 24.63 | -
ADGM (Maaløe et al., 2016) | - | 22.86 | -
SDGM (Maaløe et al., 2016) | - | 16.61 ± 0.24 | -
GAN of Salimans et al. (2016) | 18.44 ± 4.8 | 8.11 ± 1.3 | -
Π-model | 7.05 ± 0.30 | 5.43 ± 0.25 | 2.78 ± 0.03
Π-model with augmentation | 6.65 ± 0.53 | 4.82 ± 0.17 | 2.54 ± 0.04
Temporal ensembling with augmentation | 5.12 ± 0.13 | 4.42 ± 0.16 | 2.74 ± 0.06

3.1 CIFAR-10

CIFAR-10 is a dataset consisting of 32×32 pixel RGB images from ten classes. Table 1 shows a 2.1 percentage point reduction in classification error rate with 4000 labels (400 per class) compared to earlier methods for the non-augmented Π-model.

Enabling the standard set of augmentations further reduces the error rate by 4.2 percentage points to 12.36%. Temporal ensembling is slightly better still at 12.16%, while being twice as fast to train. This small improvement conceals the subtle fact that random horizontal flips need to be done independently for each epoch in temporal ensembling, while the Π-model can randomize once per a pair of evaluations, which according to our measurements is approximately 0.5 percentage points better than independent flips.

A principled comparison with Sajjadi et al. (2016b) is difficult due to several reasons. They provide results only for a fairly extreme set of augmentations (translations, flipping, rotations, stretching, and shearing) on top of fractional max pooling (Graham, 2014), which introduces random, local stretching inside the network, and is known to improve classification results substantially. They quote an error rate of only 13.60% for supervised-only training with 4000 labels, while our corresponding baseline is 34.85%. This gap indicates a huge benefit from versatile augmentations and fractional max pooling; in fact, their baseline result is already better than any previous semi-supervised results. By enabling semi-supervised learning they achieve a 17% drop in classification error rate (from 13.60% to 11.29%), while we see a much larger relative drop of 65% (from 34.85% to 12.16%).

3.2 SVHN

The street view house numbers (SVHN) dataset consists of 32×32 pixel RGB images of real-world house numbers, and the task is to classify the centermost digit.
In SVHN we chose to use only the official 73257 training examples following Salimans et al. (2016). Even with this choice our error rate with all labels is only 3.05% without augmentation.

Table 2 compares our method to the previous state-of-the-art. With the most commonly used 1000 labels we observe an improvement of 2.7 percentage points, from 8.11% to 5.43% without augmentation, and further to 4.42% with standard augmentations.

We also investigated the behavior with 500 labels, where we obtained an error rate less than half of Salimans et al. (2016) without augmentations, with a significantly lower standard deviation as well. When augmentations were enabled, temporal ensembling further reduced the error rate to 5.12%. In this test the difference between the Π-model and temporal ensembling was quite significant at 1.5 percentage points.

In SVHN Sajjadi et al. (2016b) provide results without augmentation, with the caveat that they use fractional max pooling, which is a very augmentation-like technique due to the random, local stretching it introduces inside the network. It leads to a superb error rate of 2.28% in supervised-only training, while our corresponding baseline is 3.05% (or 2.88% with translations). Given that in a separate experiment our network matched the best published result for non-augmented SVHN when extra data is used (1.69% from Lee et al. (2015)), this gap is quite surprising, and leads us to conclude that fractional max pooling leads to a powerful augmentation of the dataset, well beyond what simple translations can achieve. Our temporal ensembling technique obtains better error rates for both 500 and 1000 labels (5.12% and 4.42%, respectively) compared to the 6.03% reported by Sajjadi et al. for 732 labels.

Table 3: CIFAR-100 results with 10000 labels, averages of 10 runs (4 runs for all labels). Error rate (%) with # labels.
Model | 10000 | All (50000)
Supervised-only | 51.21 ± 0.33 | 29.14 ± 0.25
  with augmentation | 44.56 ± 0.30 | 26.42 ± 0.17
Π-model | 43.43 ± 0.54 | 29.06 ± 0.21
Π-model with augmentation | 39.19 ± 0.36 | 26.32 ± 0.04
Temporal ensembling with augmentation | 38.65 ± 0.51 | 26.30 ± 0.15

Table 4: CIFAR-100 + Tiny Images results, averages of 10 runs. Error rate (%) with # unlabeled auxiliary inputs from Tiny Images.
Model | Random 500k | Restricted 237k
Π-model with augmentation | 25.79 ± 0.17 | 25.43 ± 0.32
Temporal ensembling with augmentation | 23.62 ± 0.23 | 23.79 ± 0.24

3.3 CIFAR-100 AND TINY IMAGES

The CIFAR-100 dataset consists of 32×32 pixel RGB images from a hundred classes. We are not aware of previous semi-supervised results in this dataset, and chose 10000 labels for our experiments. Table 3 shows error rates of 43.43% and 38.65% without and with augmentation, respectively. These correspond to 7.8 and 5.9 percentage point improvements compared to supervised learning with labeled inputs only.

We ran two additional tests using unlabeled extra data from the Tiny Images dataset (Torralba et al., 2008): one with randomly selected 500k extra images, most not corresponding to any of the CIFAR-100 categories, and another with a restricted set of 237k images from the categories that correspond to those found in the CIFAR-100 dataset (see Appendix A for details). The results are shown in Table 4. The addition of randomly selected, unlabeled extra images improved the error rate by 2.7 percentage points (from 26.30% to 23.63%), indicating a desirable ability to learn from random natural images.
Temporal ensembling benefited much more from the extra data than the Π-model. Interestingly, restricting the extra data to categories that are present in CIFAR-100 did not improve the classification accuracy further. This indicates that in order to train a better classifier by adding extra data as unlabeled inputs, it is enough to have the extra data roughly in the same space as the actual inputs; in our case, natural images. We hypothesize that it may even be possible to use properly crafted synthetic data as unlabeled inputs to obtain improved classifiers.

Figure 2: Percentage of correct SVHN classifications as a function of training epoch when a part of the labels is randomized. With standard supervised training (left) the classification accuracy suffers when even a small portion of the labels give disinformation, and the situation worsens quickly as the portion of randomized labels increases to 50% or more. On the other hand, temporal ensembling (right) shows almost perfect resistance to disinformation when half of the labels are random, and retains over ninety percent classification accuracy even when 80% of the labels are random. (Legend in both panels: 0%, 20%, 50%, 80%, and 90% randomized labels; x-axis: training epoch; y-axis: classification accuracy (%).)

In order to keep the training times tolerable, we limited the number of unlabeled inputs to 50k per epoch in these tests, i.e., on every epoch we trained using all 50k labeled inputs from CIFAR-100 and 50k additional unlabeled inputs from Tiny Images. The 50k unlabeled inputs were chosen randomly on each epoch from the 500k or 237k extra inputs. In temporal ensembling, after each epoch we updated only the rows of Z that corresponded to inputs used on that epoch.

3.4 SUPERVISED LEARNING

When all labels are used for traditional supervised training, our network approximately matches the state-of-the-art error rate for a single model in CIFAR-10 with augmentation (Lee et al., 2015; Mishkin & Matas, 2016) at 6.05%, and without augmentation (Salimans & Kingma, 2016) at 7.33%. The same is probably true for SVHN as well, but there the best published results rely on extra data that we chose not to use.

Given this premise, it is perhaps somewhat surprising that our methods reduce the error rate also when all labels are used (Tables 1 and 2). We believe that this is an indication that the consistency requirement adds a degree of resistance to ambiguous labels that are fairly common in many classification tasks, and that it encourages features to be more invariant to stochastic sampling.

3.5 TOLERANCE TO INCORRECT LABELS

In a further test we studied the hypothesis that our methods add tolerance to incorrect labels by assigning a random label to a certain percentage of the training set before starting to train. Figure 2 shows the classification error graphs for standard supervised training and temporal ensembling. Clearly our methods provide considerable resistance to wrong labels, and we believe this is because the unsupervised loss term encourages the mapping function implemented by the network to be flat in the vicinity of all input data points, whereas the supervised loss term enforces the mapping function to have a specific value in the vicinity of the labeled input data points.
This means that even the wrongly labeled inputs play a role in shaping the mapping function: the unsupervised loss term smooths the mapping function and thus also the decision boundaries, effectively fusing the inputs into coherent clusters, whereas the excess of correct labels in each class is sufficient for locking the clusters to the right output vectors through the supervised loss term. The difference to classical regularizers is that we induce smoothness only on the manifold of likely inputs instead of over the entire input domain. For further analysis about the importance of the gradient of the mapping function, see Simard et al. (1998).

4 RELATED WORK

There is a large body of previous work on semi-supervised learning (Zhu, 2005). Here we will concentrate on the approaches that are most directly connected to our work.

The Γ-model is a subset of a ladder network (Rasmus et al., 2015) that introduces lateral connections into an encoder-decoder type network architecture, targeted at semi-supervised learning. In the Γ-model, all but the highest lateral connections in the ladder network are removed, and after pruning the unnecessary stages, the remaining network consists of two parallel, identical branches. One of the branches takes the original training inputs, whereas the other branch is given the same input corrupted with noise. The unsupervised loss term is computed as the squared difference between the (pre-activation) output of the clean branch and a denoised (pre-activation) output of the corrupted branch. The denoised estimate is computed from the output of the corrupted branch using a parametric nonlinearity that has 10 auxiliary trainable parameters per unit. Our Π-model differs from the Γ-model in removing the parametric nonlinearity and denoising, having two corrupted paths, and comparing the outputs of the network instead of pre-activation data of the final layer.

Sajjadi et al. (2016b) recently introduced a new loss function for semi-supervised learning, the so-called transform/stability loss, which is founded on the same principle as our work. During training, they run augmentation and network evaluation n times for each minibatch, and then compute an unsupervised loss term as the sum of all pairwise squared distances between the obtained n network outputs. As such, their technique follows the general pseudo-ensemble agreement (PEA) regularization framework of Bachman et al. (2014). In addition, they employ a mutual exclusivity loss term (Sajjadi et al., 2016a) that we do not use. Our Π-model can be seen as a special case of the transform/stability loss obtained by setting n = 2. The computational cost of training with transform/stability loss increases linearly as a function of n, whereas the efficiency of our temporal ensembling technique remains constant regardless of how large an effective ensemble we obtain via the averaging of previous epochs' predictions.

In bootstrap aggregating, or bagging, multiple networks are trained independently based on subsets of training data (Breiman, 1996). This results in an ensemble that is more stable and accurate than the individual networks. Our approach can be seen as pulling the predictions from an implicit ensemble that is based on a single network, and the variability is a result of evaluating it under different dropout and augmentation conditions instead of training on different subsets of data. In work parallel to ours, Huang et al.
(2017) store multiple snapshots of the network during training, hopefully corresponding to different local minima, and use them as an explicit ensemble.

The general technique of inferring new labels from partially labeled data is often referred to as bootstrapping or self-training, and it was first proposed by Yarowsky (1995) in the context of linguistic analysis. Whitney & Sarkar (2012) analyze Yarowsky's algorithm and propose a novel graph-based label propagation approach. Similarly, label propagation methods (Zhu & Ghahramani, 2002) infer labels for unlabeled training data by comparing the associated inputs to labeled training inputs using a suitable distance metric. Our approach differs from this in two important ways. Firstly, we never compare training inputs against each other, but instead only rely on the unknown labels remaining constant, and secondly, we let the network produce the likely classifications for the unlabeled inputs instead of providing them through an outside process.

In addition to partially labeled data, a considerable amount of effort has been put into dealing with densely but inaccurately labeled data. This can be seen as a semi-supervised learning task where part of the training process is to identify the labels that are not to be trusted. For recent work in this area, see, e.g., Sukhbaatar et al. (2014) and Patrini et al. (2016). In this context of noisy labels, Reed et al. (2014) presented a simple bootstrapping method that trains a classifier with the target composed of a convex combination of the previous epoch output and the known but potentially noisy labels. Our temporal ensembling differs from this by taking into account the evaluations over multiple epochs.

Generative Adversarial Networks (GAN) have been recently used for semi-supervised learning with promising results (Maaløe et al., 2016; Springenberg, 2016; Odena, 2016; Salimans et al., 2016). It could be an interesting avenue for future work to incorporate a generative component to our solution. We also envision that our methods could be applied to regression-type learning tasks.

Table 5: The network architecture used in all of our tests.
NAME | DESCRIPTION
input | 32×32 RGB image
noise | Additive Gaussian noise, σ = 0.15
conv1a | 128 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv1b | 128 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv1c | 128 filters, 3×3, pad = 'same', LReLU (α = 0.1)
pool1 | Maxpool 2×2 pixels
drop1 | Dropout, p = 0.5
conv2a | 256 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv2b | 256 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv2c | 256 filters, 3×3, pad = 'same', LReLU (α = 0.1)
pool2 | Maxpool 2×2 pixels
drop2 | Dropout, p = 0.5
conv3a | 512 filters, 3×3, pad = 'valid', LReLU (α = 0.1)
conv3b | 256 filters, 1×1, LReLU (α = 0.1)
conv3c | 128 filters, 1×1, LReLU (α = 0.1)
pool3 | Global average pool (6×6 → 1×1 pixels)
dense | Fully connected 128 → 10
output | Softmax

5 ACKNOWLEDGEMENTS

We thank the anonymous reviewers, Tero Karras, Pekka Jänis, Tim Salimans, Ian Goodfellow, as well as Harri Valpola and his colleagues at Curious AI for valuable suggestions that helped to improve this article.
Hkxf8DNNe
simple approach showing some decent results
7: Good paper, accept
This paper presents a model for semi-supervised learning by encouraging feature invariance to stochastic perturbations of the network and/or inputs. Two models are described: one where an invariance term is applied between different instantiations of the model/input at a single training step, and a second where invariance is applied to features for the same input point across training steps via a cumulative exponential averaging of the features. These models are evaluated using CIFAR-10 and SVHN, finding decent gains of similar amounts in each case. An additional application is also explored at the end, showing some tolerance to corrupted labels as well. The authors also discuss recent work by Sajjadi et al. that is very similar in spirit, which I think helps corroborate the findings here.

My largest critique is it would have been nice to see applications on larger datasets as well. CIFAR and SVHN are fairly small test cases, though adequate for demonstration of the idea. For cases of unlabelled data especially, it would be good to see tests with on the order of 1M+ data samples, with 1K-10K labeled, as this is a common case when labels are missing.

On a similar note, data augmentations are restricted to only translations and (for CIFAR) horizontal flips. While "standard," as the paper notes, more augmentations would have been interesting to see, particularly since the model is designed explicitly to take advantage of random sampling. Some more details might also pop up, such as the one the paper mentions about handling horizontal flips in different ways between the two model variants. Rather than restrict the system to a particular set of augmentations, I think it would be interesting to push it further, and see how its performance behaves over a larger array of augmentations and (even fewer) numbers of labels.

Overall, this seems like a simple approach that is getting decent results, though I would have liked to see more and larger experiments to get a better sense for its performance characteristics.

Smaller comment: the paper mentions "dark knowledge" a couple times in explaining results, e.g. bottom of p.6. This is OK for a motivation, but in analyzing the results I think it may be possible to have something more concrete. For instance, the consistency term encourages feature invariance to the stochastic sampling more strongly than would a classification loss alone.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BJ6oOfqge
ICLR.cc/2017/conference
2017
Temporal Ensembling for Semi-Supervised Learning
["Samuli Laine", "Timo Aila"]
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
["labels", "unknown labels", "training", "temporal", "learning temporal", "learning", "simple", "efficient", "deep neural networks", "setting"]
ABSTRACTIn this paper, we present a simple and efficient method for training deep neuralnetworks in a semi-supervised setting where only a small portion of training datais labeled. We introduce self-ensembling, where we form a consensus predictionof the unknown labels using the outputs of the network-in-training on differentepochs, and most importantly, under different regularization and input augmenta-tion conditions. This ensemble prediction can be expected to be a better predictorfor the unknown labels than the output of the network at the most recent trainingepoch, and can thus be used as a target for training. Using our method, we setnew records for two standard semi-supervised learning benchmarks, reducing the(non-augmented) classification error rate from 18.44% to 7.05% in SVHN with500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and furtherto 5.12% and 12.16% by enabling the standard augmentations. We additionallyobtain a clear improvement in CIFAR-100 classification accuracy by using ran-dom images from the Tiny Images dataset as unlabeled extra inputs during train-ing. Finally, we demonstrate good tolerance to incorrect labels.1 I NTRODUCTIONIt has long been known that an ensemble of multiple neural networks generally yields better pre-dictions than a single network in the ensemble. This effect has also been indirectly exploited whentraining a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013),or stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singhet al., 2016), where training always focuses on a particular subset of the network, and thus the com-plete network can be seen as an implicit ensemble of such trained sub-networks. We extend this ideaby forming ensemble predictions during training, using the outputs of a single network on differenttraining epochs and under different regularization and input augmentation conditions. Our train-ing still operates on a single network, but the predictions made on different epochs correspond to anensemble prediction of a large number of individual sub-networks because of dropout regularization.This ensemble prediction can be exploited for semi-supervised learning where only a small portionof training data is labeled. If we compare the ensemble prediction to the current output of the net-work being trained, the ensemble prediction is likely to be closer to the correct, unknown labels ofthe unlabeled inputs. Therefore the labels inferred this way can be used as training targets for theunlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmen-tation. Indeed, without neither, there would be much less reason to place confidence in whateverlabels are inferred for the unlabeled training data.We describe two ways to implement self-ensembling, -model and temporal ensembling. Both ap-proaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin.We furthermore observe that self-ensembling improves the classification accuracy in fully labeledcases as well, and provides tolerance against incorrect labels.The recently introduced transform/stability loss of Sajjadi et al. (2016b) is based on the same prin-ciple as our work, and the -model can be seen as a special case of it. The -model can also beseen as a simplification of the -model of the ladder network by Rasmus et al. (2015), a previouslypresented network architecture for semi-supervised learning. 
Our temporal ensembling method hasconnections to the bootstrapping method of Reed et al. (2014) targeted for training with noisy labels.1Published as a conference paper at ICLR 2017xiyistochasticaugmentationnetworkwith dropoutzi~zicross-entropysquareddifferenceweightedsumlossxiyistochasticaugmentationzi~zicross-entropysquareddifferenceweightedsumlosszinetworkwith dropoutw(t)w(t)Temporal ensemblingП-modelFigure 1: Structure of the training pass in our methods. Top: -model. Bottom: temporal en-sembling. Labels yiare available only for the labeled inputs, and the associated cross-entropy losscomponent is evaluated only for those.Algorithm 1 -model pseudocode.Require:xi= training stimuliRequire:L= set of training input indices with known labelsRequire:yi= labels for labeled inputs i2LRequire:w(t)= unsupervised weight ramp-up functionRequire:f(x)= stochastic neural network with trainable parameters Require:g(x)= stochastic input augmentation functionfortin[1;num epochs ]doforeach minibatch Bdozi2B f(g(xi2B)) .evaluate network outputs for augmented inputs~zi2B f(g(xi2B)) .again, with different dropout and augmentationloss 1jBjPi2(B\L)logzi[yi].supervised loss component+w(t)1CjBjPi2Bjjzi~zijj2.unsupervised loss componentupdateusing, e.g., A DAM .update network parametersend forend forreturn2 S ELF-ENSEMBLING DURING TRAININGWe present two implementations of self-ensembling during training. The first one, -model, en-courages consistent network output between two realizations of the same input stimulus, under twodifferent dropout conditions. The second method, temporal ensembling, simplifies and extends thisby taking into account the network predictions over multiple previous training epochs.We shall describe our methods in the context of traditional image classification networks. Let thetraining data consist of total of Ninputs, out of which Mare labeled. The input stimuli, availablefor all training data, are denoted xi, wherei2f1:::Ng. Let setLcontain the indices of the labeledinputs,jLj=M. For everyi2L, we have a known correct label yi2f1:::Cg, whereCis thenumber of different classes.2.1 -MODELThe structure of -model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. Duringtraining, we evaluate the network for each training input xitwice, resulting in prediction vectors ziand~zi. Our loss function consists of two components. The first component is the standard cross-entropy loss, evaluated for labeled inputs only. The second component, evaluated for all inputs,penalizes different predictions for the same training input xiby taking the mean square difference2Published as a conference paper at ICLR 2017between the prediction vectors ziand~zi.1To combine the supervised and unsupervised loss terms,we scale the latter by time-dependent weighting function w(t). By comparing the entire outputvectorsziand~zi, we effectively ask the “dark knowledge” (Hinton et al., 2015) between the twoevaluations to be close, which is a much stronger requirement compared to asking that only the finalclassification remains the same, which is what happens in traditional training.It is important to notice that, because of dropout regularization, the network output during trainingis a stochastic variable. Thus two evaluations of the same input xiunder same network weights yield different results. In addition, Gaussian noise and augmentations such as random translationare evaluated twice, resulting in additional variation. 
The combination of these effects explainsthe difference between the prediction vectors ziand~zi. This difference can be seen as an error inclassification, given that the original input xiwas the same, and thus minimizing it is a reasonablegoal.In our implementation, the unsupervised loss weighting function w(t)ramps up, starting from zero,along a Gaussian curve during the first 80 training epochs. See Appendix A for further details aboutthis and other training parameters. In the beginning the total loss and the learning gradients are thusdominated by the supervised loss component, i.e., the labeled data only. We have found it to bevery important that the ramp-up of the unsupervised loss component is slow enough—otherwise,the network gets easily stuck in a degenerate solution where no meaningful classification of the datais obtained.Our approach is somewhat similar to the -model of the ladder network by Rasmus et al. (2015), butconceptually simpler. In the -model, the comparison is done directly on network outputs, i.e., aftersoftmax activation, and there is no auxiliary mapping between the two branches such as the learneddenoising functions in the ladder network architecture. Furthermore, instead of having one “clean”and one “corrupted” branch as in -model, we apply equal augmentation and noise to the inputs forboth branches.As shown in Section 3, the -model combined with a good convolutional network architectureprovides a significant improvement over prior art in classification accuracy.2.2 T EMPORAL ENSEMBLINGAnalyzing how the -model works, we could equally well split the evaluation of the two branches intwo separate phases: first classifying the training set once without updating the weights , and thentraining the network on the same inputs under different augmentations and dropout, using the justobtained predictions as targets for the unsupervised loss component. As the training targets obtainedthis way are based on a single evaluation of the network, they can be expected to be noisy. Temporalensembling alleviates this by aggregating the predictions of multiple previous network evaluationsinto an ensemble prediction. It also lets us evaluate the network only once during training, gainingan approximate 2x speedup over the -model.The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocodein Algorithm 2. The main difference to the -model is that the network and augmentations areevaluated only once per input per epoch, and the target vectors ~zfor the unsupervised loss componentare based on prior network evaluations instead of a second evaluation of the network.After every training epoch, the network outputs ziare accumulated into ensemble outputs ZibyupdatingZi Zi+ (1)zi, whereis a momentum term that controls how far the ensemblereaches into training history. Because of dropout regularization and stochastic augmentation, Zthuscontains a weighted average of the outputs of an ensemble of networks ffrom previous trainingepochs, with recent epochs having larger weight than distant epochs. For generating the trainingtargets ~z, we need to correct for the startup bias in Zby dividing by factor (1t). A similarbias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normal-ization (Salimans & Kingma, 2016). On the first training epoch, Zand~zare zero as no data fromprevious epochs is available. 
Our approach is somewhat similar to the Γ-model of the ladder network by Rasmus et al. (2015), but conceptually simpler. In the Π-model, the comparison is done directly on network outputs, i.e., after softmax activation, and there is no auxiliary mapping between the two branches such as the learned denoising functions in the ladder network architecture. Furthermore, instead of having one "clean" and one "corrupted" branch as in the Γ-model, we apply equal augmentation and noise to the inputs for both branches.

As shown in Section 3, the Π-model combined with a good convolutional network architecture provides a significant improvement over prior art in classification accuracy.

2.2 TEMPORAL ENSEMBLING

Analyzing how the Π-model works, we could equally well split the evaluation of the two branches into two separate phases: first classifying the training set once without updating the weights θ, and then training the network on the same inputs under different augmentations and dropout, using the just obtained predictions as targets for the unsupervised loss component. As the training targets obtained this way are based on a single evaluation of the network, they can be expected to be noisy. Temporal ensembling alleviates this by aggregating the predictions of multiple previous network evaluations into an ensemble prediction. It also lets us evaluate the network only once during training, gaining an approximate 2x speedup over the Π-model.

The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocode in Algorithm 2. The main difference to the Π-model is that the network and augmentations are evaluated only once per input per epoch, and the target vectors z̃ for the unsupervised loss component are based on prior network evaluations instead of a second evaluation of the network.

After every training epoch, the network outputs z_i are accumulated into ensemble outputs Z_i by updating Z_i ← αZ_i + (1 − α)z_i, where α is a momentum term that controls how far the ensemble reaches into training history. Because of dropout regularization and stochastic augmentation, Z thus contains a weighted average of the outputs of an ensemble of networks f from previous training epochs, with recent epochs having larger weight than distant epochs. For generating the training targets z̃, we need to correct for the startup bias in Z by dividing by the factor (1 − α^t). A similar bias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normalization (Salimans & Kingma, 2016). On the first training epoch, Z and z̃ are zero as no data from previous epochs is available. For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.

Algorithm 2: Temporal ensembling pseudocode. Note that the updates of Z and z̃ could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.
Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: α = ensembling momentum, 0 ≤ α < 1
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
Z ← 0_{[N×C]}                                              ▷ initialize ensemble predictions
z̃ ← 0_{[N×C]}                                              ▷ initialize target vectors
for t in [1, num_epochs] do
    for each minibatch B do
        z_{i∈B} ← f_θ(g(x_{i∈B}, t))                       ▷ evaluate network outputs for augmented inputs
        loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]           ▷ supervised loss component
               + w(t) (1/(C|B|)) Σ_{i∈B} ||z_i − z̃_i||²    ▷ unsupervised loss component
        update θ using, e.g., ADAM                         ▷ update network parameters
    end for
    Z ← αZ + (1 − α)z                                      ▷ accumulate ensemble predictions
    z̃ ← Z / (1 − α^t)                                      ▷ construct target vectors by bias correction
end for
return θ
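The end-of-epoch bookkeeping in Algorithm 2 amounts to two array updates. A NumPy sketch follows (α = 0.6 is a placeholder value; the authors' setting is given in Appendix A), which also tracks the second raw moment used for the variance estimate discussed later in this section.

    import numpy as np

    N, C = 50000, 10            # dataset size and class count (CIFAR-10 shapes)
    alpha = 0.6                 # ensembling momentum (placeholder value)
    Z  = np.zeros((N, C))       # accumulated ensemble predictions
    Z2 = np.zeros((N, C))       # optional: accumulated second raw moments
    z_tilde = np.zeros((N, C))  # bias-corrected training targets

    def end_of_epoch_update(outputs, t):
        # outputs: (N, C) network predictions gathered during epoch t (1-based).
        global Z, Z2, z_tilde
        Z  = alpha * Z  + (1.0 - alpha) * outputs
        Z2 = alpha * Z2 + (1.0 - alpha) * outputs ** 2
        z_tilde = Z / (1.0 - alpha ** t)               # startup-bias correction, as in Adam
        var = Z2 / (1.0 - alpha ** t) - z_tilde ** 2   # per-component variance estimate
        return var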
The benefits of temporal ensembling compared to the Π-model are twofold. First, the training is faster because the network is evaluated only once per input on each epoch. Second, the training targets z̃ can be expected to be less noisy than with the Π-model. As shown in Section 3, we indeed obtain somewhat better results with temporal ensembling than with the Π-model in the same number of training epochs. The downside compared to the Π-model is the need to store auxiliary data across epochs, and the new hyperparameter α. While the matrix Z can be fairly large when the dataset contains a large number of items and categories, its elements are accessed relatively infrequently. Thus it can be stored, e.g., in a memory-mapped file.

An intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z_i besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component z_{i,j}. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal & Ghahramani, 2016). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work.

3 RESULTS

Our network structure is given in Table 5, and the test setup and all training parameters are detailed in Appendix A. We test the Π-model and temporal ensembling in two image classification tasks, CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.

Although it is rarely stated explicitly, we believe that our comparison methods do not use input augmentation, i.e., are limited to dropout and other forms of permutation-invariant noise. Therefore we report the error rates without augmentation, unless explicitly stated otherwise. Given that the ability of an algorithm to extract benefit from augmentation is also an important property, we report the classification accuracy using a standard set of augmentations as well. In purely supervised training the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and random translations, while SVHN is limited to random translations. By using these same augmentations we can compare against the best fully supervised results as well. After all, the fully supervised results should indicate the upper bound of obtainable accuracy.

Table 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels).

                                                Error rate (%) with # labels
                                                4000            All (50000)
Supervised-only                                 35.56 ± 1.59    7.33 ± 0.04
  with augmentation                             34.85 ± 1.65    6.05 ± 0.15
Conv-Large, Γ-model (Rasmus et al., 2015)       20.40 ± 0.47    —
CatGAN (Springenberg, 2016)                     19.58 ± 0.58    —
GAN of Salimans et al. (2016)                   18.63 ± 2.32    —
Π-model                                         16.55 ± 0.29    6.90 ± 0.07
Π-model with augmentation                       12.36 ± 0.31    5.56 ± 0.10
Temporal ensembling with augmentation           12.16 ± 0.24    5.60 ± 0.10

Table 2: SVHN results for 500 and 1000 labels, averages of 10 runs (4 runs for all labels).

                                                Error rate (%) with # labels
Model                                           500             1000            All (73257)
Supervised-only                                 35.18 ± 5.61    20.47 ± 2.64    3.05 ± 0.07
  with augmentation                             31.59 ± 3.60    19.30 ± 3.89    2.88 ± 0.03
DGN (Kingma et al., 2014)                       —               36.02 ± 0.10    —
Virtual Adversarial (Miyato et al., 2016)       —               24.63           —
ADGM (Maaløe et al., 2016)                      —               22.86           —
SDGM (Maaløe et al., 2016)                      —               16.61 ± 0.24    —
GAN of Salimans et al. (2016)                   18.44 ± 4.8     8.11 ± 1.3      —
Π-model                                         7.05 ± 0.30     5.43 ± 0.25     2.78 ± 0.03
Π-model with augmentation                       6.65 ± 0.53     4.82 ± 0.17     2.54 ± 0.04
Temporal ensembling with augmentation           5.12 ± 0.13     4.42 ± 0.16     2.74 ± 0.06

3.1 CIFAR-10

CIFAR-10 is a dataset consisting of 32×32 pixel RGB images from ten classes. Table 1 shows a 2.1 percentage point reduction in classification error rate with 4000 labels (400 per class) compared to earlier methods for the non-augmented Π-model.

Enabling the standard set of augmentations further reduces the error rate by 4.2 percentage points to 12.36%. Temporal ensembling is slightly better still at 12.16%, while being twice as fast to train. This small improvement conceals the subtle fact that random horizontal flips need to be done independently for each epoch in temporal ensembling, while the Π-model can randomize once per pair of evaluations, which according to our measurements is ∼0.5 percentage points better than independent flips.

A principled comparison with Sajjadi et al. (2016b) is difficult due to several reasons. They provide results only for a fairly extreme set of augmentations (translations, flipping, rotations, stretching, and shearing) on top of fractional max pooling (Graham, 2014), which introduces random, local stretching inside the network, and is known to improve classification results substantially. They quote an error rate of only 13.60% for supervised-only training with 4000 labels, while our corresponding baseline is 34.85%. This gap indicates a huge benefit from versatile augmentations and fractional max pooling—in fact, their baseline result is already better than any previous semi-supervised results. By enabling semi-supervised learning they achieve a 17% drop in classification error rate (from 13.60% to 11.29%), while we see a much larger relative drop of 65% (from 34.85% to 12.16%).

3.2 SVHN

The street view house numbers (SVHN) dataset consists of 32×32 pixel RGB images of real-world house numbers, and the task is to classify the centermost digit.
In SVHN we chose to use only the official 73257 training examples following Salimans et al. (2016). Even with this choice our error rate with all labels is only 3.05% without augmentation.

Table 2 compares our method to the previous state-of-the-art. With the most commonly used 1000 labels we observe an improvement of 2.7 percentage points, from 8.11% to 5.43% without augmentation, and further to 4.42% with standard augmentations.

We also investigated the behavior with 500 labels, where we obtained an error rate less than half that of Salimans et al. (2016) without augmentations, with a significantly lower standard deviation as well. When augmentations were enabled, temporal ensembling further reduced the error rate to 5.12%. In this test the difference between the Π-model and temporal ensembling was quite significant at 1.5 percentage points.

In SVHN Sajjadi et al. (2016b) provide results without augmentation, with the caveat that they use fractional max pooling, which is a very augmentation-like technique due to the random, local stretching it introduces inside the network. It leads to a superb error rate of 2.28% in supervised-only training, while our corresponding baseline is 3.05% (or 2.88% with translations). Given that in a separate experiment our network matched the best published result for non-augmented SVHN when extra data is used (1.69% from Lee et al. (2015)), this gap is quite surprising, and leads us to conclude that fractional max pooling leads to a powerful augmentation of the dataset, well beyond what simple translations can achieve. Our temporal ensembling technique obtains better error rates for both 500 and 1000 labels (5.12% and 4.42%, respectively) compared to the 6.03% reported by Sajjadi et al. for 732 labels.

3.3 CIFAR-100 AND TINY IMAGES

The CIFAR-100 dataset consists of 32×32 pixel RGB images from a hundred classes. We are not aware of previous semi-supervised results in this dataset, and chose 10000 labels for our experiments. Table 3 shows error rates of 43.43% and 38.65% without and with augmentation, respectively. These correspond to 7.8 and 5.9 percentage point improvements compared to supervised learning with labeled inputs only.

Table 3: CIFAR-100 results with 10000 labels, averages of 10 runs (4 runs for all labels).

                                                Error rate (%) with # labels
                                                10000           All (50000)
Supervised-only                                 51.21 ± 0.33    29.14 ± 0.25
  with augmentation                             44.56 ± 0.30    26.42 ± 0.17
Π-model                                         43.43 ± 0.54    29.06 ± 0.21
Π-model with augmentation                       39.19 ± 0.36    26.32 ± 0.04
Temporal ensembling with augmentation           38.65 ± 0.51    26.30 ± 0.15

We ran two additional tests using unlabeled extra data from the Tiny Images dataset (Torralba et al., 2008): one with randomly selected 500k extra images, most not corresponding to any of the CIFAR-100 categories, and another with a restricted set of 237k images from the categories that correspond to those found in the CIFAR-100 dataset (see Appendix A for details). The results are shown in Table 4. The addition of randomly selected, unlabeled extra images improved the error rate by 2.7 percentage points (from 26.30% to 23.63%), indicating a desirable ability to learn from random natural images.

Table 4: CIFAR-100 + Tiny Images results, averages of 10 runs.

                                                Error rate (%) with # unlabeled
                                                auxiliary inputs from Tiny Images
                                                Random 500k     Restricted 237k
Π-model with augmentation                       25.79 ± 0.17    25.43 ± 0.32
Temporal ensembling with augmentation           23.62 ± 0.23    23.79 ± 0.24
Temporal ensembling benefited much more from the extra data than the Π-model. Interestingly, restricting the extra data to categories that are present in CIFAR-100 did not improve the classification accuracy further. This indicates that in order to train a better classifier by adding extra data as unlabeled inputs, it is enough to have the extra data roughly in the same space as the actual inputs—in our case, natural images. We hypothesize that it may even be possible to use properly crafted synthetic data as unlabeled inputs to obtain improved classifiers.

In order to keep the training times tolerable, we limited the number of unlabeled inputs to 50k per epoch in these tests, i.e., on every epoch we trained using all 50k labeled inputs from CIFAR-100 and 50k additional unlabeled inputs from Tiny Images. The 50k unlabeled inputs were chosen randomly on each epoch from the 500k or 237k extra inputs. In temporal ensembling, after each epoch we updated only the rows of Z that corresponded to inputs used on that epoch.
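Continuing the NumPy sketch above, such a sparse row update could look as follows. The per-row visit counter used for the bias correction is our assumption: the paper does not spell out how the startup bias is handled for rows that are updated irregularly.

    counts = np.zeros(N, dtype=np.int64)   # per-row update counts

    def sparse_epoch_update(outputs, indices):
        # `outputs` holds predictions only for the inputs visited this epoch,
        # in the same order as `indices`; only those rows of Z are touched.
        idx = np.asarray(indices)
        counts[idx] += 1
        Z[idx] = alpha * Z[idx] + (1.0 - alpha) * outputs
        # Per-row startup-bias correction based on how often each row was seen.
        z_tilde[idx] = Z[idx] / (1.0 - alpha ** counts[idx])[:, None]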
3.4 SUPERVISED LEARNING

When all labels are used for traditional supervised training, our network approximately matches the state-of-the-art error rate for a single model in CIFAR-10 with augmentation (Lee et al., 2015; Mishkin & Matas, 2016) at 6.05%, and without augmentation (Salimans & Kingma, 2016) at 7.33%. The same is probably true for SVHN as well, but there the best published results rely on extra data that we chose not to use.

Given this premise, it is perhaps somewhat surprising that our methods reduce the error rate also when all labels are used (Tables 1 and 2). We believe that this is an indication that the consistency requirement adds a degree of resistance to ambiguous labels that are fairly common in many classification tasks, and that it encourages features to be more invariant to stochastic sampling.

3.5 TOLERANCE TO INCORRECT LABELS

In a further test we studied the hypothesis that our methods add tolerance to incorrect labels by assigning a random label to a certain percentage of the training set before starting to train. Figure 2 shows the classification error graphs for standard supervised training and temporal ensembling.

Figure 2: Percentage of correct SVHN classifications as a function of training epoch when a part of the labels is randomized. With standard supervised training (left) the classification accuracy suffers when even a small portion of the labels give disinformation, and the situation worsens quickly as the portion of randomized labels increases to 50% or more. On the other hand, temporal ensembling (right) shows almost perfect resistance to disinformation when half of the labels are random, and retains over ninety percent classification accuracy even when 80% of the labels are random.

Clearly our methods provide considerable resistance to wrong labels, and we believe this is because the unsupervised loss term encourages the mapping function implemented by the network to be flat in the vicinity of all input data points, whereas the supervised loss term enforces the mapping function to have a specific value in the vicinity of the labeled input data points. This means that even the wrongly labeled inputs play a role in shaping the mapping function—the unsupervised loss term smooths the mapping function and thus also the decision boundaries, effectively fusing the inputs into coherent clusters, whereas the excess of correct labels in each class is sufficient for locking the clusters to the right output vectors through the supervised loss term. The difference to classical regularizers is that we induce smoothness only on the manifold of likely inputs instead of over the entire input domain. For further analysis about the importance of the gradient of the mapping function, see Simard et al. (1998).

4 RELATED WORK

There is a large body of previous work on semi-supervised learning (Zhu, 2005). Here we will concentrate on the ones that are most directly connected to our work.

The Γ-model is a subset of a ladder network (Rasmus et al., 2015) that introduces lateral connections into an encoder-decoder type network architecture, targeted at semi-supervised learning. In the Γ-model, all but the highest lateral connections in the ladder network are removed, and after pruning the unnecessary stages, the remaining network consists of two parallel, identical branches. One of the branches takes the original training inputs, whereas the other branch is given the same input corrupted with noise. The unsupervised loss term is computed as the squared difference between the (pre-activation) output of the clean branch and a denoised (pre-activation) output of the corrupted branch. The denoised estimate is computed from the output of the corrupted branch using a parametric nonlinearity that has 10 auxiliary trainable parameters per unit. Our Π-model differs from the Γ-model in removing the parametric nonlinearity and denoising, having two corrupted paths, and comparing the outputs of the network instead of pre-activation data of the final layer.

Sajjadi et al. (2016b) recently introduced a new loss function for semi-supervised learning, the so-called transform/stability loss, which is founded on the same principle as our work. During training, they run augmentation and network evaluation n times for each minibatch, and then compute an unsupervised loss term as the sum of all pairwise squared distances between the obtained n network outputs. As such, their technique follows the general pseudo-ensemble agreement (PEA) regularization framework of Bachman et al. (2014). In addition, they employ a mutual exclusivity loss term (Sajjadi et al., 2016a) that we do not use. Our Π-model can be seen as a special case of the transform/stability loss obtained by setting n = 2. The computational cost of training with the transform/stability loss increases linearly as a function of n, whereas the efficiency of our temporal ensembling technique remains constant regardless of how large an effective ensemble we obtain via the averaging of previous epochs' predictions.

In bootstrap aggregating, or bagging, multiple networks are trained independently based on subsets of training data (Breiman, 1996). This results in an ensemble that is more stable and accurate than the individual networks. Our approach can be seen as pulling the predictions from an implicit ensemble that is based on a single network, and the variability is a result of evaluating it under different dropout and augmentation conditions instead of training on different subsets of data. In work parallel to ours, Huang et al. (2017) store multiple snapshots of the network during training, hopefully corresponding to different local minima, and use them as an explicit ensemble.
The general technique of inferring new labels from partially labeled data is often referred to as bootstrapping or self-training, and it was first proposed by Yarowsky (1995) in the context of linguistic analysis. Whitney & Sarkar (2012) analyze Yarowsky's algorithm and propose a novel graph-based label propagation approach. Similarly, label propagation methods (Zhu & Ghahramani, 2002) infer labels for unlabeled training data by comparing the associated inputs to labeled training inputs using a suitable distance metric. Our approach differs from this in two important ways. Firstly, we never compare training inputs against each other, but instead only rely on the unknown labels remaining constant, and secondly, we let the network produce the likely classifications for the unlabeled inputs instead of providing them through an outside process.

In addition to partially labeled data, a considerable amount of effort has been put into dealing with densely but inaccurately labeled data. This can be seen as a semi-supervised learning task where part of the training process is to identify the labels that are not to be trusted. For recent work in this area, see, e.g., Sukhbaatar et al. (2014) and Patrini et al. (2016). In this context of noisy labels, Reed et al. (2014) presented a simple bootstrapping method that trains a classifier with the target composed of a convex combination of the previous epoch output and the known but potentially noisy labels. Our temporal ensembling differs from this by taking into account the evaluations over multiple epochs.

Generative Adversarial Networks (GAN) have been recently used for semi-supervised learning with promising results (Maaløe et al., 2016; Springenberg, 2016; Odena, 2016; Salimans et al., 2016). It could be an interesting avenue for future work to incorporate a generative component into our solution. We also envision that our methods could be applied to regression-type learning tasks.

Table 5: The network architecture used in all of our tests.

NAME     DESCRIPTION
input    32×32 RGB image
noise    Additive Gaussian noise σ = 0.15
conv1a   128 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv1b   128 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv1c   128 filters, 3×3, pad = 'same', LReLU (α = 0.1)
pool1    Maxpool 2×2 pixels
drop1    Dropout, p = 0.5
conv2a   256 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv2b   256 filters, 3×3, pad = 'same', LReLU (α = 0.1)
conv2c   256 filters, 3×3, pad = 'same', LReLU (α = 0.1)
pool2    Maxpool 2×2 pixels
drop2    Dropout, p = 0.5
conv3a   512 filters, 3×3, pad = 'valid', LReLU (α = 0.1)
conv3b   256 filters, 1×1, LReLU (α = 0.1)
conv3c   128 filters, 1×1, LReLU (α = 0.1)
pool3    Global average pool (6×6 → 1×1 pixels)
dense    Fully connected 128 → 10
output   Softmax

5 ACKNOWLEDGEMENTS

We thank the anonymous reviewers, Tero Karras, Pekka Jänis, Tim Salimans, Ian Goodfellow, as well as Harri Valpola and his colleagues at Curious AI for valuable suggestions that helped to improve this article.
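For reference, Table 5 above translates almost line for line into a modern framework. A hypothetical PyTorch rendering follows; the original implementation predates PyTorch, and the Gaussian input noise plus the Appendix A details (e.g., weight normalization) are omitted here.

    import torch.nn as nn

    def lrelu():
        return nn.LeakyReLU(0.1)

    net = nn.Sequential(
        nn.Conv2d(3, 128, 3, padding=1), lrelu(),    # conv1a
        nn.Conv2d(128, 128, 3, padding=1), lrelu(),  # conv1b
        nn.Conv2d(128, 128, 3, padding=1), lrelu(),  # conv1c
        nn.MaxPool2d(2), nn.Dropout(0.5),            # pool1, drop1
        nn.Conv2d(128, 256, 3, padding=1), lrelu(),  # conv2a
        nn.Conv2d(256, 256, 3, padding=1), lrelu(),  # conv2b
        nn.Conv2d(256, 256, 3, padding=1), lrelu(),  # conv2c
        nn.MaxPool2d(2), nn.Dropout(0.5),            # pool2, drop2
        nn.Conv2d(256, 512, 3), lrelu(),             # conv3a, 'valid': 8x8 -> 6x6
        nn.Conv2d(512, 256, 1), lrelu(),             # conv3b
        nn.Conv2d(256, 128, 1), lrelu(),             # conv3c
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # pool3: global average pool
        nn.Linear(128, 10), nn.Softmax(dim=1),       # dense, output
    )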
B1u6EURmg
8: Top 50% of accepted papers, clear accept
This paper presents a semi-supervised technique for “self-ensembling” where the model uses a consensus prediction (computed from previous epochs) as a target to regress to, in addition to the usual supervised learning loss. This has connections to the “dark knowledge” idea and to ladder networks, and is shown in this paper to be a promising technique for scenarios with few labeled examples (but not only). The paper presents two versions of the idea: one which is computationally expensive (and high variance) in that it needs two passes through the same example at a given step, and a temporal ensembling method that is stabler, cheaper computationally but more memory hungry and requires an extra hyper-parameter. My thoughts on this work are mostly positive. The drawbacks that I see are that the temporal ensembling work requires potentially a lot of memory, and non-trivial infrastructure / book-keeping for imagenet-sized experiments. I am quite confused by the Figure 2 / Section 3.4 experiments about tolerance to noisy labels: it’s *very* incredible to me that by making 90% of the labels random one can still train a classifier that is either 30% accurate or ~78% accurate (depending on whether or not temporal ensembling was used). I don’t see how that can happen, basically.
Minor stuff: Please bold the best-in-category results in your tables. I think it would be nice to talk about the ramp-up of w(t) in the main paper. The authors should consider putting the state of the art results for the fully-supervised case in their tables, instead of just their own. I am confused as to why the authors chose not to use more SVHN examples. The stated reason that it’d be “too easy” seems a bit contrived: if they used all examples it would also make it easy to compare to previous work.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
BJ6oOfqge
ICLR.cc/2017/conference
2017
Temporal Ensembling for Semi-Supervised Learning
["Samuli Laine", "Timo Aila"]
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
["labels", "unknown labels", "training", "temporal", "learning temporal", "learning", "simple", "efficient", "deep neural networks", "setting"]
ABSTRACT

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.

1 INTRODUCTION

It has long been known that an ensemble of multiple neural networks generally yields better predictions than a single network in the ensemble. This effect has also been indirectly exploited when training a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), or stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singh et al., 2016), where training always focuses on a particular subset of the network, and thus the complete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea by forming ensemble predictions during training, using the outputs of a single network on different training epochs and under different regularization and input augmentation conditions. Our training still operates on a single network, but the predictions made on different epochs correspond to an ensemble prediction of a large number of individual sub-networks because of dropout regularization.

This ensemble prediction can be exploited for semi-supervised learning where only a small portion of training data is labeled. If we compare the ensemble prediction to the current output of the network being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of the unlabeled inputs. Therefore the labels inferred this way can be used as training targets for the unlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmentation. Indeed, without either, there would be much less reason to place confidence in whatever labels are inferred for the unlabeled training data.

We describe two ways to implement self-ensembling, the Π-model and temporal ensembling. Both approaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin. We furthermore observe that self-ensembling improves the classification accuracy in fully labeled cases as well, and provides tolerance against incorrect labels.

The recently introduced transform/stability loss of Sajjadi et al. (2016b) is based on the same principle as our work, and the Π-model can be seen as a special case of it. The Π-model can also be seen as a simplification of the Γ-model of the ladder network by Rasmus et al. (2015), a previously presented network architecture for semi-supervised learning.
SyezfkfEg
Review
9: Top 15% of accepted papers, strong accept
This work explores taking advantage of the stochasticity of neural network outputs under randomized augmentation and regularization techniques to provide targets for unlabeled data in a semi-supervised setting. This is accomplished by either applying stochastic augmentation and regularization on a single image multiple times per epoch and encouraging the outputs to be similar (Π-model) or by keeping a weighted average of past epoch outputs and penalizing deviations of current network outputs from this running mean (temporal ensembling). The core argument is that these approaches produce ensemble predictions which are likely more accurate than the current network and are thus good targets for unlabeled data. Both approaches seem to work quite well on semi-supervised tasks and some results show that they are almost unbelievably robust to label noise. The paper is clearly written and provides sufficient details to reproduce these results in addition to providing a public code base. The core idea of the paper is quite interesting and seems to result in higher semi-supervised accuracy than prior work. I also found the attention to and discussion of the effect of different choices of data augmentation to be useful. I am a little surprised that a standard supervised network can achieve 30% accuracy on SVHN given 90% random training labels. This would only give 19% correctly labeled data (9% by chance + 10% unaltered). I suppose the other 81% would not provide a consistent training signal such that it is possible, but it does seem quite unintuitive. I tried to look through the github for this experiment but it does not seem to be included. As for the resistance of Π-model and temporal ensembling to this label noise, I find that somewhat more believable given the large weights placed on the consistency constraint for this task. The authors should really include discussion of w(t) in the main paper. Especially because the tremendous difference in w_max in the incorrect label tolerance experiment (10x for Π-model and 100x for temporal ensembling from the standard setting). Could the authors comment towards the scalability for larger problems? For ImageNet, you would need to store around 4.8 gigs for the temporal ensembling method or spend 2x as long training with Π-model. Can the authors discuss sensitivity of this approach to the amount and location of dropout layers in the architecture? Preliminary rating: I think this is a very interesting paper with quality results and clear presentation. Minor note: 2nd paragraph of page one 'without neither' -> 'without either'
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Byk-VI9eg
ICLR.cc/2017/conference
2017
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
ABSTRACT

Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.

1 INTRODUCTION

Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, $z$, drawn from a simple distribution (e.g., $z \sim \mathcal{N}(0,1)$) using a transformation function $G_\theta(z)$ with learned weights, $\theta$. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function $D_\omega(x)$ with learned weights, $\omega$.

The GAN framework is one of the more recent successes in a line of research on adversarial training in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired optimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of application domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014); Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.

Despite these successes, GANs are reputably difficult to train. While research is still underway to improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).

In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable.
In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.

Contributions — To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.

2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN

The original formulation of a GAN is a minimax game between a generator, $G_\theta(z): z \to x$, and a discriminator, $D_\omega(x): x \to [0,1]$,

$$\min_G \max_{D \in \mathcal{D}} V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log(D(x))\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log(1 - D(G(z)))\big], \quad (1)$$

where $p_{data}(x)$ is the true data distribution and $p_z(z)$ is a simple (usually fixed) distribution that is easy to draw samples from (e.g., $\mathcal{N}(0,1)$). We differentiate between the function space of discriminators, $\mathcal{D}$, and elements of this space, $D$. Let $p_G(x)$ be the distribution induced by the generator, $G_\theta(z)$. We assume $D, G$ to be deep neural networks as is typically the case.

In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, $D^* = \arg\max_D V(D,G)$, gradient descent on $p_G(x)$ will recover the desired globally optimal solution, $p_G(x) = p_{data}(x)$, so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, $\log(1 - D(G(z)))$, with $-\log(D(G(z)))$ to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, $D^*$, to reduce the minimax game to a minimization over $G$ only:

$$\min_G V(D^*, G) = \min_G \big\{ C(G) = -\log(4) + 2 \cdot JSD(p_{data} \,\|\, p_G) \big\} \quad (2)$$

where $JSD$ denotes Jensen–Shannon divergence. Minimizing $C(G)$ necessarily minimizes $JSD$, however, we rarely know $D^*$ and so we instead minimize $V(D,G)$, which is only a lower bound.

This perspective of minimizing the distance between the distributions, $p_{data}$ and $p_G$, motivated Li et al. (2015) to develop a generative model that matches all moments of $p_G(x)$ with $p_{data}(x)$ (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman divergences respectively.

In general, these approaches focus on exploring fundamental reformulations of $V(D,G)$. Similarly, our work focuses on a fundamental reformulation, however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of $V$.
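For concreteness, a minimal numpy sketch of a Monte Carlo estimate of the value function in Eq. (1); the callables `D` and `G` are illustrative stand-ins for trained networks, not part of the original text (the paper's released code is in TensorFlow).

```python
import numpy as np

def gan_value(D, G, x_real, z):
    """Monte Carlo estimate of V(D, G) from Eq. (1): the discriminator's
    average log-probability on real data plus its average log-probability
    of rejecting generated samples."""
    real_term = np.mean(np.log(D(x_real)))      # E_{x~p_data}[log D(x)]
    fake_term = np.mean(np.log(1.0 - D(G(z))))  # E_{z~p_z}[log(1 - D(G(z)))]
    return real_term + fake_term
```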
2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION

We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating $D$ (better approximating $\max_D V(D,G)$) and 2) a $D$ better matched to the generator's capabilities. Mathematically, we reformulate $G$'s objective as $\min_G \max F(V(D_1,G), \ldots, V(D_N,G))$ for different choices of $F$ (see Figure 1). Each $D_i$ is still expected to independently maximize its own $V(D_i,G)$ (i.e. no cooperation). We sometimes abbreviate $V(D_i,G)$ with $V_i$ and $F(V_1,\ldots,V_N)$ with $F_G(V_i)$.

3 A FORMIDABLE ADVERSARY

Here, we consider multi-discriminator variants that attempt to better approximate $\max_D V(D,G)$, providing a harsher critic to the generator.

Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If $F := \max$, $G$ trains against the best discriminator. If $F := \mathrm{mean}$, $G$ trains against an ensemble. We explore other alternatives to $F$ in Sections 4.1 & 4.4 that improve on both these options.

3.1 MAXIMIZING V(D,G)

For a fixed $G$, maximizing $F_G(V_i)$ with $F := \max$ and $N$ randomly instantiated copies of our discriminator is functionally equivalent to optimizing $V$ (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting $\max_{i \in \{1,\ldots,N\}} V(D_i,G)$ as the loss to the generator — a very pragmatic approach to the difficulties presented by the non-convexity of $V$ caused by the deep net. Requiring the generator to minimize the max forces $G$ to generate high fidelity samples that must hold up under the scrutiny of all $N$ discriminators, each potentially representing a distinct max.

In practice, $\max_{D_i \in \mathcal{D}} V(D_i,G)$ is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing $N$ discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming $\max\{V_1(t),\ldots,V_N(t)\} > \max\{V_1'(t)\}\ \forall t$ even if we initialize $D_1(0) = D_1'(0)$, as it is unlikely that $D_1(t) = D_1'(t)$ at some time $t$ after the start of the game.

3.2 BOOSTING

We can also consider taking the max over $N$ discriminators as a form of boosting for the discriminator's online classification problem (online because $G$ can produce an infinite data stream). The boosted discriminator is given a sample $x_t$ and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the $N$ weaker $D_i$.

There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.

It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting $\max\{V_i\}$. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.
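As a sketch of the aggregation step in Figure 1, the following minimal function maps the per-discriminator objectives to the single scalar fed back to the generator. The function name and interface are illustrative assumptions; the soft means of Section 4.1 interpolate between the two cases shown here.

```python
import numpy as np

def aggregate(V, F="mean"):
    """Aggregate per-discriminator objectives V = [V(D_1,G), ..., V(D_N,G)]
    into F(V_1, ..., V_N). F = "max" pits G against the harshest critic;
    F = "mean" pits G against the ensemble."""
    V = np.asarray(V, dtype=float)
    return float(V.max()) if F == "max" else float(V.mean())
```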
4 A FORGIVING TEACHER

The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of $\max_D V(D,G)$ to the generator. Our next perspective asks the question, "Is $\max_D V(D,G)$ too harsh a critic?"

4.1 SOFT-DISCRIMINATOR

In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down $p_G(x)$, not specifically where to increase $p_G(x)$. Furthermore, driving down $p_G(x)$ necessarily increases $p_G(x)$ in other regions of $\mathcal{X}$ (to maintain $\int_{\mathcal{X}} p_G(x) = 1$), which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing $p_G(x)$ in approximately correct regions of $\mathcal{X}$.

For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by $\lambda$, where $\lambda = 0$ corresponds to the mean and the max is recovered as $\lambda \to \infty$:

$$\mathrm{AM}_{soft}(V,\lambda) = \sum_i^N w_i V_i \quad (3)$$

$$\mathrm{GM}_{soft}(V,\lambda) = -\exp\Big(\sum_i^N w_i \log(-V_i)\Big) \quad (4)$$

$$\mathrm{HM}_{soft}(V,\lambda) = -\Big(\sum_i^N w_i (-V_i)^{-1}\Big)^{-1} \quad (5)$$

where $w_i = e^{\lambda V_i} / \sum_j e^{\lambda V_j}$ with $\lambda \geq 0$, $V_i < 0$. Using a softmax also has the well known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing $V(\tilde{D},G)$ where $\tilde{D}$ is some convex combination of $D_i$ (see Appendix A.5).

4.2 USING THE ORIGINAL MINIMAX OBJECTIVE

To illustrate the effect the softmax has on training, observe that the component of $\mathrm{AM}_{soft}(V,0)$ relevant to generator training can be rewritten as

$$\frac{1}{N}\sum_i^N \mathbb{E}_{x \sim p_G(x)}\big[\log(1 - D_i(x))\big] = \frac{1}{N}\,\mathbb{E}_{x \sim p_G(x)}\big[\log(z)\big] \quad (6)$$

where $z = \prod_i^N (1 - D_i(x))$. Note that the generator gradient, $\big|\frac{\partial \log(z)}{\partial z}\big|$, is minimized at $z = 1$ over $z \in (0,1]$.¹ From this form, it is clear that $z = 1$ if and only if $D_i = 0\ \forall i$, so $G$ only receives a vanishing gradient if all $D_i$ agree that the sample is fake; this is especially unlikely for large $N$. In other words, $G$ only needs to fool a single $D_i$ to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, $\log(1 - D)$. This is in contrast to the more popular $-\log(D)$ introduced to artificially enhance gradients at the start of training.

At the beginning of training, when $\max_{D_i} V(D_i,G)$ is likely too harsh a critic for the generator, we can set $\lambda$ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase $\lambda$ to become more critical of the generator for more refined training.

¹ $\nabla_G V = -\sum_i \frac{D_i}{z}\frac{\partial D_i}{\partial G}\prod_{j \neq i}(1 - D_j) = -\frac{1}{z}\frac{\partial D_k}{\partial G}$ for $D_k = 1$, $D_{\neq k} = 0$. Our argument ignores $\frac{\partial D_k}{\partial G}$.
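A minimal numpy sketch of the three softened means of Eqs. (3)–(5). The extraction of Eqs. (4)–(5) dropped the minus signs, so the signs below are a best-effort reconstruction chosen so that each mean stays in $[\min V_i, \max V_i]$ for negative values.

```python
import numpy as np

def soft_means(V, lam):
    """Softened Pythagorean means of Eqs. (3)-(5) for values V_i < 0, with
    softmax weights w_i = exp(lam*V_i)/sum_j exp(lam*V_j). lam = 0 recovers
    the plain means; lam -> infinity recovers max(V)."""
    V = np.asarray(V, dtype=float)
    w = np.exp(lam * V)
    w /= w.sum()
    am = np.dot(w, V)                    # arithmetic mean, Eq. (3)
    gm = -np.exp(np.dot(w, np.log(-V)))  # geometric mean, Eq. (4)
    hm = -1.0 / np.dot(w, 1.0 / (-V))    # harmonic mean, Eq. (5)
    return float(am), float(gm), float(hm)
```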
4.3 MAINTAINING MULTIPLE HYPOTHESES

We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to $p_{data}(x)$, if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from $p_{data}(x)$; therefore, when computing expectations of $V(D,G)$, we only draw samples from our finite dataset. This is equivalent to training a GAN with $p_{data}(x) = \tilde{p}_{data}$, which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each with infinite capacity. In this case, the global optimum ($p_G(x) = \tilde{p}_{data}(x)$) fails to capture any of the interesting structure from $p_{data}(x)$, the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.

Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.

In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true $p_{data}(x)$. Averaging over these multiple locally optimal discriminators increases the entropy of $\tilde{p}_{data}(x)$ by diffusing the probability mass over the data space (see Figure 2 for an example).

4.4 AUTOMATING REGULATION

The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is often able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. Specifically, we augment the generator objective:

$$\min_{G,\lambda > 0} F_G(V_i) - f(\lambda) \quad (7)$$

where $f(\lambda)$ is monotonically increasing in $\lambda$, which appears in the softmax equations, (3)–(5). In experiments, we simply set $f(\lambda) = c\lambda$ with $c$ a constant (e.g., 0.001). The generator is incentivized to increase $\lambda$ to reduce its objective at the expense of competing against the best available adversary $D^*$ (see Appendix A.6).
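A sketch of the GMAN* generator objective of Eq. (7) with $f(\lambda) = c\lambda$, using the softened arithmetic mean inline. Gradient bookkeeping is omitted and the names are illustrative.

```python
import numpy as np

def gman_star_objective(V, lam, c=0.001):
    """Generator objective of Eq. (7): AM_soft(V, lambda) - c*lambda.
    Raising lambda pushes the soft mean toward the harsher max but also
    subtracts c*lambda from the loss, so G buys leniency only when needed."""
    V = np.asarray(V, dtype=float)  # per-discriminator values, V_i < 0
    w = np.exp(lam * V)
    w /= w.sum()                    # softmax weights of Eqs. (3)-(5)
    return float(np.dot(w, V)) - c * lam
```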
5 EVALUATION

Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is: given two generator/discriminator pairs $(G_1, D_1)$ and $(G_2, D_2)$, we should be able to learn their relative performance by judging each generator under the opponent's discriminator.

5.1 METRIC

In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators,

$$\mathrm{GMAM} = \log\!\left[\frac{F^a_{G_b}(V^a_i)}{F^a_{G_a}(V^a_i)} \Big/ \frac{F^b_{G_a}(V^b_i)}{F^b_{G_b}(V^b_i)}\right] \quad (8)$$

where $a$ and $b$ refer to the two GMAN variants (see Section 3 for notation $F_G(V_i)$). The idea here is similar. If $G_2$ performs better than $G_1$ with respect to both $D_1$ and $D_2$, then GMAM > 0 (remember $V \leq 0$ always). If $G_1$ performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.

5.2 EXPERIMENTS

We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare:

- F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
- P-boost: $D_i$ is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
- GMAN-max: $\max\{V_i\}$ is presented to the generator.
- GAN: Standard GAN with a single discriminator (see Appendix A.2).
- mod-GAN: GAN with modified objective (generator minimizes $-\log(D(G(z)))$).
- GMAN-λ: GMAN with $F :=$ arithmetic softmax with parameter $\lambda$.
- GMAN*: The arithmetic softmax is controlled by the generator through $\lambda$.

All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids to prevent saturating logarithms in the minimax objective ($\epsilon + \frac{1-2\epsilon}{1+e^{-z}}$). See Appendix A.8 for further details. We test GMAN systems with $N = \{2, 5\}$ discriminators. We maintain discriminator diversity by varying dropout and network depth.

5.2.1 MNIST

Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing $N$ (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single discriminator run; digits at steady-state appear slightly sharper as well.

Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN* achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed $\lambda$'s to the variable $\lambda$ controlled by GMAN*.

 Score   Variant    GMAN*          GMAN-0         GMAN-max       mod-GAN
 0.127   GMAN*      --             -0.020±0.009   -0.028±0.019   -0.089±0.036
 0.007   GMAN-0     0.020±0.009    --             -0.013±0.015   -0.018±0.027
-0.034   GMAN-max   0.028±0.019    0.013±0.015    --             -0.011±0.024
-0.122   mod-GAN    0.089±0.036    0.018±0.027    0.011±0.024    --

Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.
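As an illustration of Eq. (8) and the convention used in Table 1, a small sketch of the GMAM computation; the four arguments denote each variant's aggregate $F$ evaluated with its own discriminators on the indicated generator, and all names are illustrative.

```python
import numpy as np

def gmam(Fa_Gb, Fa_Ga, Fb_Ga, Fb_Gb):
    """GMAM of Eq. (8): compare two GMAN variants a and b by swapping each
    side's generator into the other's aggregate objective. Under the table
    convention, a positive value favors variant a; negative favors b."""
    return float(np.log((Fa_Gb / Fa_Ga) / (Fb_Ga / Fb_Gb)))
```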
Figure 3: Generator objective, $F$, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 4: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN* with $N = 5$ achieves steady-state at approximately 2x the speed of GAN ($N = 1$). Note Figure 3's filled shadows reveal the stdev of $F$ over runs, while this plot shows the stdev over time.

Figure 5: Comparison of image quality across epochs for $N = \{1, 2, 5\}$ using GMAN-0 on MNIST.

Figure 6: GMAN* regulates the difficulty of the game by adjusting $\lambda$. Initially, $G$ reduces $\lambda$ to ease learning and then gradually increases $\lambda$ for a more challenging learning environment.

 Score   Variant   λ*            λ = 1         λ = 0
 0.028   λ*        --            -0.008±0.009  -0.019±0.010
 0.001   λ = 1     0.008±0.009   --            -0.008±0.010
-0.025   λ = 0     0.019±0.010   0.008±0.010   --

Figure 7: Pairwise GMAM ± stdev ($\sigma_{GMAM}$) for GMAN-λ and GMAN* ($\lambda^*$) over 5 runs on MNIST with $N = 5$.

5.2.2 CELEBA & CIFAR-10

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.

Figure 8: Image quality improvement across number of discriminators at the same number of iterations for GMAN-0 on CelebA.

Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.

We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.

6 CONCLUSION

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.

In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step, however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important.

ACKNOWLEDGMENTS

We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1564032.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

BIBLIOGRAPHY

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.

J Andrew Bagnell. Robust supervised learning. In Proceedings of the National Conference on Artificial Intelligence, volume 20, pp. 714. AAAI Press; MIT Press, 2005.

Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for online boosting. arXiv preprint arXiv:1502.02651, 2015.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.

Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.

Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.

Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's Thesis, 2009.

Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998.

Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In International Conference on Machine Learning, pp. 1718–1727, 2015.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.

Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.

Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.

Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528–2535. IEEE, 2010.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

A APPENDIX

A.1 ACCELERATED CONVERGENCE & REDUCED VARIANCE

See Figures 10, 11, 12, and 13.

Figure 10: Generator objective, $F$, averaged over 5 training runs on CelebA. Increasing $N$ (# of $D$) accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with $N = 5$ achieves steady-state at approximately 2x the speed of GAN ($N = 1$). Note Figure 10's filled shadows reveal the stdev of $F$ over runs, while this plot shows the stdev over time.

Figure 12: Generator objective, $F$, averaged over 5 training runs on CIFAR-10. Increasing $N$ (# of $D$) accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 13: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with $N = 5$ achieves steady-state at approximately 2x the speed of GAN ($N = 1$). Note Figure 12's filled shadows reveal the stdev of $F$ over runs, while this plot shows the stdev over time.

A.2 ADDITIONAL GMAM TABLES

See Tables 2, 3, 4, 5, 6.
Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN both in terms of the GMAM metric and Inception scores.

 Score   Variant    GMAN*    GMAN-1   GAN      GMAN-0   GMAN-max  mod-GAN
 0.184   GMAN*      --       -0.007   -0.040   -0.020   -0.028    -0.089
 0.067   GMAN-1     0.007    --       -0.008   -0.008   -0.021    -0.037
 0.030   GAN        0.040    0.008    --        0.002   -0.018    -0.058
 0.005   GMAN-0     0.020    0.008    0.002    --       -0.013    -0.018
-0.091   GMAN-max   0.028    0.021    0.018    0.013    --        -0.011
-0.213   mod-GAN    0.089    0.037    0.058    0.018    -0.011    --

Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

 Score   Variant   GMAN-0   GMAN-1   GMAN*    mod-GAN
 0.172   GMAN-0    --       -0.022   -0.062   -0.088
 0.050   GMAN-1    0.022    --        0.006   -0.078
-0.055   GMAN*     0.062    -0.006   --       -0.001
-0.167   mod-GAN   0.088     0.078    0.001   --

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

         GMAN-0        GMAN-1        mod-GAN       GMAN*
 Score   5.878±0.193   5.765±0.168   5.738±0.176   5.539±0.099

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators.

 Score   Variant   GMAN-0   GMAN*    GMAN-1   mod-GAN
 0.180   GMAN-0    --       -0.008   -0.041   -0.132
 0.122   GMAN*     0.008    --       -0.038   -0.092
 0.010   GMAN-1    0.041     0.038   --       -0.089
-0.313   mod-GAN   0.132     0.092    0.089   --

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

         GMAN-1        GMAN-0        GMAN*         mod-GAN
 Score   6.001±0.194   5.957±0.135   5.955±0.153   5.738±0.176

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.

A.3 GENERATED IMAGES

See Figures 14 and 15.

Figure 14: Sample of pictures generated on the CelebA cropped dataset.

Figure 15: Sample of pictures generated by GMAN-0 on the CIFAR dataset.

A.4 SOMEWHAT RELATED WORK

A GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., $X = \{X_1 = \text{Domain 1}, X_2 = \text{Domain 2}, \ldots\}$). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels.

In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al.
(2016); however, similar to the above, it is only admissible in a semi-supervised scenario, whereas ours applies to the unsupervised case.

A.5 Softmax REPRESENTABILITY

Let $\mathrm{softmax}(V_i) = \hat{V} \in [\min V_i, \max V_i]$. Also let $a = \arg\min_i V_i$, $b = \arg\max_i V_i$, and $V(t) = V((1-t)D_a + tD_b)$ so that $V(0) = V_a$ and $V(1) = V_b$. The softmax and minimax objective $V(D_i, G)$ are both continuous in their inputs, so by the intermediate value theorem, we have that $\exists\, \hat{t} \in [0,1]$ s.t. $V(\hat{t}) = \hat{V}$, which implies $\exists\, \hat{D} \in \mathcal{D}$ s.t. $V(\hat{D}, G) = \hat{V}$. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning $V(\hat{D}, G)$ for some $\hat{D}$ selected by computing another, unknown function over the space of the discriminators. This result holds even if $\hat{D}$ is not representable by the architecture chosen for $D$'s neural network.

A.6 UNCONSTRAINED OPTIMIZATION

To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable, $\eta$, define $\lambda(\eta) = \log(1 + e^{\eta})$, and let the generator minimize over $\eta \in \mathbb{R}$.

A.7 BOOSTING WITH AdaBoost.OL

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing ($P(\text{correct label}) = 0.5 + \gamma$, $\gamma \in (0, 0.5]$), and in fact, allows $\gamma < 0$. This is crucial because our weak learners are deep nets with unknown, possibly negative, $\gamma$'s.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

A.8 EXPERIMENTAL SETUP

All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for $G$ and strided convolutions for $D$, except for the input of $G$ and the last layer of $D$. We use the single step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from $[0.3, 0.7]$. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in Tensorflow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

- Generator latent variables $z \sim U(-1, 1)^{100}$
- Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)
- Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). Variants have either convolution 3 (4, 4, 128) removed or all the filter sizes divided by 2 or 4. That is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
- ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
- Training was performed with Adam (Kingma & Ba (2014)) ($lr = 2 \times 10^{-4}$, $\beta_1 = 0.5$).
- MNIST was trained for 20 epochs with a minibatch of size 100.
- CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.
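Two numerically minded details from the appendix, in sketch form: the unconstrained reparametrization of $\lambda$ from A.6 and the squashed sigmoid mentioned in Section 5.2. The value of `eps` is an assumption here, as the paper does not state it.

```python
import numpy as np

def lam(eta):
    """A.6: lambda(eta) = log(1 + e^eta) maps any real eta to lambda > 0,
    so the generator can optimize lambda without a positivity constraint.
    logaddexp(0, eta) computes log(1 + e^eta) in a numerically stable way."""
    return np.logaddexp(0.0, eta)

def squashed_sigmoid(z, eps=0.1):
    """Squashed sigmoid: eps + (1 - 2*eps) * sigmoid(z) keeps the
    discriminator output inside [eps, 1 - eps], so the logarithms in the
    minimax objective cannot saturate. eps = 0.1 is an illustrative value."""
    return eps + (1.0 - 2.0 * eps) / (1.0 + np.exp(-z))
```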
HkYCUhr4g
6: Marginally above acceptance threshold
This work brings multiple discriminators into the GAN framework. From the results, multiple discriminators are useful for stabilizing training. The main stabilization problem seems to stem from the gradient signal provided by the discriminator, and the authors' motivation is to use multiple discriminators to reduce this effect. I think this work indicates the direction is promising; however, the authors may consider adding more results against approaches that strengthen the discriminator gradient, such as GAN with a DAE (Improving Generative Adversarial Networks with Denoising Feature Matching), to show the advantages of multiple discriminators.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Byk-VI9eg
ICLR.cc/2017/conference
2017
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
ABSTRACTGenerative adversarial networks (GANs) are a framework for producing a gen-erative model by way of a two-player minimax game. In this paper, we proposetheGenerative Multi-Adversarial Network (GMAN), a framework that extendsGANs to multiple discriminators. In previous work, the successful training ofGANs requires modifying the minimax objective to accelerate training early on.In contrast, GMAN can be reliably trained with the original, untampered objec-tive. We explore a number of design perspectives with the discriminator role rang-ing from formidable adversary to forgiving teacher. Image generation tasks com-paring the proposed framework to standard GANs demonstrate GMAN produceshigher quality samples in a fraction of the iterations when measured by a pairwiseGAM-type metric.1 I NTRODUCTIONGenerative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producinga generative model by way of a two-player minimax game. One player, the generator, attempts togenerate realistic data samples by transforming noisy samples, z, drawn from a simple distribution(e.g.,zN (0;1)) using a transformation function G(z)with learned weights, . The generatorreceives feedback as to how realistic its synthetic sample is from another player, the discriminator,which attempts to discern between synthetic data samples produced by the generator and samplesdrawn from an actual dataset using a function D!(x)with learned weights, !.The GAN framework is one of the more recent successes in a line of research on adversarial train-ing in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where gamesbetween learners are carefully crafted so that Nash equilibria coincide with some set of desired op-timality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCunet al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of appli-cation domains including learning censored representations (Edwards & Storkey (2015)), imitatingexpert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extendingGANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014);Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning(Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015);Radford et al. (2015)) have shown promise as well.Despite these successes, GANs are reputably difficult to train. While research is still underway toimprove training techniques and heuristics (Salimans et al. (2016)), most approaches have focusedon understanding and generalizing GANs theoretically with the aim of exploring more tractableformulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).In this paper, we theoretically and empirically justify generalizing the GAN framework to multiplediscriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4,we present our N-discriminator extension to the GAN framework ( Generative Multi-AdversarialNetworks ) with several variants which range the role of the discriminator from formidable adversaryto forgiving teacher. Section 4.2 explains how this extension makes training with the untamperedminimax objective tractable. 
In Section 5, we define an intuitive metric (GMAM) to quantify GMANEqual contribution1Published as a conference paper at ICLR 2017performance and evaluate our framework on a variety of image generation tasks. Section 6 concludeswith a summary of our contributions and directions for future research.Contributions —To summarize, our main contributions are: i) a multi-discriminator GAN frame-work, GMAN, that allows training with the original, untampered minimax objective; ii) a generativemulti-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks;iii) a particular instance of GMAN, GMAN, that allows the generator to automatically regulatetraining and reach higher performance (as measured by GMAM) in a fraction of the training timerequired for the standard GAN model.2 G ENERATIVE ADVERSARIAL NETWORKS TO GMANThe original formulation of a GAN is a minimax game between a generator, G(z) :z!x, and adiscriminator, D!(x) :x![0;1],minGmaxD2DV(D;G ) =Expdata(x)hlog(D(x))i+Ezpz(z)hlog(1D(G(z)))i; (1)wherepdata(x)is the true data distribution and pz(z)is a simple (usually fixed) distribution that iseasy to draw samples from (e.g., N(0;1)). We differentiate between the function space of discrim-inators,D, and elements of this space, D. LetpG(x)be the distribution induced by the generator,G(z). We assume D;G to be deep neural networks as is typically the case.In their original work, Goodfellow et al. (2014) proved that given sufficient network capacitiesand an oracle providing the optimal discriminator, D=argmaxDV(D;G ), gradient descent onpG(x)will recover the desired globally optimal solution, pG(x) =pdata(x), so that the generatordistribution exactly matches the data distribution. In practice, they replaced the second term, log(1D(G(z))), withlog(D(G(z)))to enhance gradient signals at the start of the game; note this is nolonger a zero-sum game. Part of their convergence and optimality proof involves using the oracle,D, to reduce the minimax game to a minimization over Gonly:minGV(D;G) = minGnC(G) =log(4) + 2JSD (pdatajjpG)o(2)whereJSD denotes Jensen-Shannon divergence. Minimizing C(G)necessarily minimizes JSD ,however, we rarely know Dand so we instead minimize V(D;G ), which is only a lower bound.This perspective of minimizing the distance between the distributions, pdata andpG, motivatedLi et al. (2015) to develop a generative model that matches all moments of pG(x)withpdata(x)(atoptimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN, (Zhaoet al. (2016)) explores a larger class of games (non-zero-sum games) which generalize the generatorand discriminator objectives to take real-valued “energies” as input instead of probabilities. Nowozinet al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more generaldivergences, specifically f-divergences and then Bregman-divergences respectively.In general, these approaches focus on exploring fundamental reformulations of V(D;G ). Similarly,our work focuses on a fundamental reformulation, however, our aim is to provide a framework thataccelerates training of the generator to a more robust state irrespective of the choice of V.2.1 GMAN: A M ULTI -ADVERSARIAL EXTENSIONWe propose introducing multiple discriminators, which brings with it a number of design possibil-ities. We explore approaches ranging between two extremes: 1) a more discriminating D(betterapproximating maxDV(D;G )) and 2) aDbetter matched to the generator’s capabilities. 
Math-ematically, we reformulate G’s objective as minGmaxF(V(D1;G);:::;V (DN;G))for differentchoices ofF(see Figure 1). Each Diis still expected to independently maximize its own V(Di;G)(i.e. no cooperation). We sometimes abbreviate V(Di;G)withViandF(V1;:::;VN)withFG(Vi).3 A F ORMIDABLE ADVERSARYHere, we consider multi-discriminator variants that attempt to better approximate maxDV(D;G ),providing a harsher critic to the generator.2Published as a conference paper at ICLR 2017G DN D2 D1 V(DN,G) V(D2,G) V(D1,G) F( · ) Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. IfF:= max ,Gtrains against the best discriminator. If F:=mean ,Gtrains against an ensemble.We explore other alternatives to Fin Sections 4.1 & 4.4 that improve on both these options.3.1 M AXIMIZING V(D,G)For a fixedG, maximizing FG(Vi)withF:= max andNrandomly instantiated copies of our dis-criminator is functionally equivalent to optimizing V(e.g., stochastic gradient ascent) with randomrestarts in parallel and then presenting maxi2f1;:::;NgV(Di;G)as the loss to the generator —a verypragmatic approach to the difficulties presented by the non-convexity of Vcaused by the deep net.Requiring the generator to minimize the max forcesGto generate high fidelity samples that musthold up under the scrutiny of all Ndiscriminators, each potentially representing a distinct max.In practice, maxDi2DV(Di;G)is not performed to convergence (or global optimality), so theabove problem is oversimplified. Furthermore, introducing Ndiscriminators affects the dynam-ics of the game which affects the trajectories of the discriminators. This prevents us from claimingmaxfV1(t);:::;VN(t)g>maxfV01(t)g8teven if we initalize D1(0) =D01(0)as it is unlikely thatD1(t) =D01(t)at some time tafter the start of the game.3.2 B OOSTINGWe can also consider taking the max overNdiscriminators as a form of boosting for the discrim-inator’s online classification problem (online because Gcan produce an infinite data stream). Theboosted discriminator is given a sample xtand must predict whether it came from the generator orthe dataset. The booster then makes its prediction using the predictions of the NweakerDi.There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1,our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2,many boosting algorithms more generally use linear combinations of the discriminators. Moreover,in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assumeaccess to the loss function at prediction time, which allows us to compute the max.It is possible to train the weak discriminators using boosting and then ignore the booster’s predictionby instead presenting maxfVig. We explore both variants in our experiments, using the adaptive al-gorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promisingresults on the image generation tasks. It is possible that boosting produces too strong an adversaryfor learning which motivates the next section. Boosting results appear in Appendix A.7.4 A F ORGIVING TEACHERThe previous perspectives focus on improving the discriminator with the goal of presenting a betterapproximation of maxDV(D;G )to the generator. Our next perspective asks the question, “IsmaxDV(D;G )too harsh a critic?”4.1 Soft-DISCRIMINATORIn practice, training against a far superior discriminator can impede the generator’s learning. 
Thisis because the generator is unlikely to generate any samples considered “realistic” by the discrimi-nator’s standards, and so the generator will receive uniformly negative feedback. This is problem-3Published as a conference paper at ICLR 2017atic because the information contained in the gradient derived from negative feedback only dictateswhere to drive down pG(x), not specifically where to increase pG(x). Furthermore, driving downpG(x)necessarily increases pG(x)in other regions of X(to maintainRXpG(x) = 1 ) which may ormay not contain samples from the true dataset ( whack-a-mole dilemma). In contrast, a generator ismore likely to see positive feedback against a more lenient discriminator, which may better guide agenerator towards amassing pG(x)in approximately correct regions of X.For this reason, we explore a variety of functions that allow us to soften themax operator. Wechoose to focus on soft versions of the three classical Pythagorean means parameterized by where= 0corresponds to the mean and the max is recovered as !1 :AMsoft(V;) =NXiwiVi (3)GMsoft(V;) =expNXiwilog(Vi)(4)HMsoft(V;) =NXiwiV1i1(5)wherewi=eVi=jeVjwith0;Vi<0. Using a softmax also has the well known advantage ofbeing differentiable (as opposed to subdifferentiable for max). Note that we only require continuityto guarantee that computing the softmax is actually equivalent to computing V(~D;G )where ~Dissome convex combination of Di(see Appendix A.5).4.2 U SING THE ORIGINAL MINIMAX OBJECTIVETo illustrate the effect the softmax has on training, observe that the component of AMsoft(V;0)relevant to generator training can be rewritten as1NNXiExpG(x)hlog(1Di(x))i=1NExpG(x)hlog(z)i: (6)wherez=QNi(1Di(x)). Note that the generator gradient, j@log(z)@zj, is minimized at z= 1overz2(0;1]1. From this form, it is clear that z= 1 if and only if Di= 08i, soGonly receives avanishing gradient if all Diagree that the sample is fake; this is especially unlikely for large N. Inother words, Gonly needs to fool a single Dito receive constructive feedback. This result allows thegenerator to successfully minimize the original generator objective, log(1D). This is in contrastto the more popular log(D)introduced to artificially enhance gradients at the start of training.At the beginning of training, when maxDiV(Di;G)is likely too harsh a critic for the generator, wecan setcloser to zero to use the mean, increasing the odds of providing constructive feedback tothe generator. In addition, the discriminators have the added benefit of functioning as an ensemble,reducing the variance of the feedback presented to the generator, which is especially importantwhen the discriminators are far from optimal and are still learning a reasonable decision boundary.As training progresses and the discriminators improve, we can increase to become more criticalof the generator for more refined training.4.3 M AINTAINING MULTIPLE HYPOTHESESWe argue for this ensemble approach on a more fundamental level as well. Here, we draw onthe density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proofassumes we have access to pdata(x), if only implicitly. In most cases of interest, the discriminatoronly has access to a finite dataset sampled from pdata(x); therefore, when computing expectationsofV(D;G ), we only draw samples from our finite dataset. This is equivalent to training a GANwithpdata(x) = ~pdatawhich is a distribution consisting of point masses on all the data points in thedataset. 
For the sake of argument, let’s assume we are training a discriminator and generator, each1rGV=PiDiz@Di@GQj6=i(1Dj) =1z@Dk@GforDk= 1;D6=k= 0. Our argument ignores@Dk@G.4Published as a conference paper at ICLR 2017with infinite capacity. In this case, the global optimum ( pG(x) = ~pdata(x)) fails to capture any ofthe interesting structure from pdata(x), the true distribution we are trying to learn. Therefore, it isactually critical that we avoid this global optimum.x p(x) Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corre-sponding probability mass function is given in light gray. After training GMAN, three discrimina-tors converge to distinct local optima which implicitly define distributions over the data (red, blue,yellow). Each discriminator may specialize in discriminating a region of the data space (placingmore diffuse mass in other regions). Averaging over the three discriminators results in the distribu-tion in black, which we expect has higher likelihood under reasonable assumptions on the structureof the true distribution.In practice, this degenerate result is avoided by employing learners with limited capacity and corrupt-ing data samples with noise (i.e., dropout), but we might better accomplish this by simultaneouslytraining a variety of limited capacity discriminators. With this approach, we might obtain a diverseset of seemingly tenable hypotheses for the true pdata(x). Averaging over these multiple locallyoptimal discriminators increases the entropy of ~pdata(x)by diffusing the probability mass over thedata space (see Figure 2 for an example).4.4 A UTOMATING REGULATIONThe problem of keeping the discriminator and generator in balance has been widely recognized inprevious work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator col-lapse are not uncommon. In addition, the discriminator is often times able to achieve a high degree ofclassification accuracy (producing a single scalar) before the generator has made sufficient progresson the arguably more difficult generative task (producing a high dimensional sample). Salimanset al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relativelysuperior discriminator. Here, we explore an approach that enables the generator to automaticallytemper the performance of the discriminator when necessary, but still encourages the generator tochallenge itself against more accurate adversaries. Specifically, we augment the generator objective:minG;> 0FG(Vi)f() (7)wheref()is monotonically increasing in which appears in the softmax equations, (3)—(5). Inexperiments, we simply set f() =cwithca constant (e.g., 0.001). The generator is incentivizedto increaseto reduce its objective at the expense of competing against the best available adversaryD(see Appendix A.6).5 E VALUATIONEvaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) reportlog likelihood estimates from Gaussian Parzen windows, which they admit, has high variance andis known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzenwindows and argue that generative models should be evaluated with respect to their intended appli-cation. Salimans et al. (2016) suggest an Inception score , however, it assumes labels exist for thedataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for mak-ing pairwise comparisons between independently trained GAN models. 
5 EVALUATION

Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log-likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that, given two generator/discriminator pairs ($G_1, D_1$) and ($G_2, D_2$), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.

5.1 METRIC

In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators,

$\mathrm{GMAM} = \log\Big[\big(F^a_{G_b}(V^a_i)\,/\,F^a_{G_a}(V^a_i)\big) \,\big/\, \big(F^b_{G_a}(V^b_i)\,/\,F^b_{G_b}(V^b_i)\big)\Big]$   (8)

where $a$ and $b$ refer to the two GMAN variants (see Section 3 for notation $F_G(V_i)$). The idea here is similar. If $G_2$ performs better than $G_1$ with respect to both $D_1$ and $D_2$, then GMAM $> 0$ (remember $V \le 0$ always). If $G_1$ performs better in both cases, GMAM $< 0$; otherwise, the result is indeterminate.
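The GMAM of Eq. (8) reduces to simple arithmetic once the four aggregated values are available. Below is a hedged sketch; the numbers are hypothetical, and the sign behaviour shown follows the reconstruction of Eq. (8) above (the row/column convention of the tables may map $a$ and $b$ differently).

```python
import numpy as np

def gmam(F_a_of_Gb, F_a_of_Ga, F_b_of_Ga, F_b_of_Gb):
    """Eq. (8): GMAM = log[(F^a_{G_b}/F^a_{G_a}) / (F^b_{G_a}/F^b_{G_b})].

    Each argument is the (negative) aggregated value F obtained by scoring
    one variant's generator against the other variant's discriminator set.
    """
    return np.log((F_a_of_Gb / F_a_of_Ga) / (F_b_of_Ga / F_b_of_Gb))

# Toy numbers (hypothetical): variant a's generator achieves higher
# (less negative) values under both discriminator sets, so under this
# reconstruction the metric comes out positive.
print(gmam(F_a_of_Gb=-1.2, F_a_of_Ga=-0.8, F_b_of_Ga=-0.7, F_b_of_Gb=-1.0))
```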
5.2 EXPERIMENTS

We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady-state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare:

- F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
- P-boost: $D_i$ is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
- GMAN-max: $\max\{V_i\}$ is presented to the generator.
- GAN: Standard GAN with a single discriminator (see Appendix A.2).
- mod-GAN: GAN with modified objective (generator minimizes $-\log(D(G(z)))$).
- GMAN-$\lambda$: GMAN with $F :=$ arithmetic softmax with parameter $\lambda$.
- GMAN*: The arithmetic softmax is controlled by the generator through $\lambda$.

All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids to prevent saturating logarithms in the minimax objective, $\epsilon + \frac{1 - 2\epsilon}{1 + e^{-z}}$. See Appendix A.8 for further details. We test GMAN systems with $N = \{2, 5\}$ discriminators. We maintain discriminator diversity by varying dropout and network depth.

5.2.1 MNIST

Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing $N$ (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single-discriminator run; digits at steady-state appear slightly sharper as well.

Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN* achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed $\lambda$'s to the variable $\lambda$ controlled by GMAN*.

Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.

| Score  | Variant  | GMAN*        | GMAN-0       | GMAN-max     | mod-GAN      |
|--------|----------|--------------|--------------|--------------|--------------|
| 0.127  | GMAN*    | -            | -0.020±0.009 | -0.028±0.019 | -0.089±0.036 |
| 0.007  | GMAN-0   | 0.020±0.009  | -            | -0.013±0.015 | -0.018±0.027 |
| -0.034 | GMAN-max | 0.028±0.019  | 0.013±0.015  | -            | -0.011±0.024 |
| -0.122 | mod-GAN  | 0.089±0.036  | 0.018±0.027  | 0.011±0.024  | -            |

Figure 3: Generator objective, $F$, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow ±1$\sigma$). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 4: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN* with $N = 5$ achieves steady-state at ~2x speed of GAN ($N = 1$). Note Figure 3's filled shadows reveal stdev of $F$ over runs, while this plot shows stdev over time.

Figure 5: Comparison of image quality across epochs for $N = \{1, 2, 5\}$ using GMAN-0 on MNIST.

Figure 6: GMAN* regulates difficulty of the game by adjusting $\lambda$. Initially, $G$ reduces $\lambda$ to ease learning and then gradually increases $\lambda$ for a more challenging learning environment.

Figure 7: Pairwise GMAM ± stdev ($\sigma_{GMAM}$) for GMAN-$\lambda$ and GMAN* ($\lambda^*$) over 5 runs on MNIST.

| Score  | Variant              | $\lambda^*$ ($N=5$) | $\lambda = 1$ | $\lambda = 0$ |
|--------|----------------------|---------------------|---------------|---------------|
| 0.028  | $\lambda^*$ ($N=5$)  | -                   | -0.008±0.009  | -0.019±0.010  |
| 0.001  | $\lambda = 1$        | 0.008±0.009         | -             | -0.008±0.010  |
| -0.025 | $\lambda = 0$        | 0.019±0.010         | 0.008±0.010   | -             |

5.2.2 CelebA & CIFAR-10

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.

Figure 8: Image quality improvement across number of generators at same number of iterations for GMAN-0 on CelebA.

Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.

We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.

6 CONCLUSION

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.

In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step, however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important.

ACKNOWLEDGMENTS

We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1564032.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

BIBLIOGRAPHY

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.
J Andrew Bagnell. Robust supervised learning. In Proceedings of the National Conference on Artificial Intelligence, volume 20, pp. 714. AAAI Press / MIT Press, 2005.
Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for online boosting. arXiv preprint arXiv:1502.02651, 2015.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's Thesis, 2009.
Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998.
Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In International Conference on Machine Learning, pp. 1718-1727, 2015.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow.
Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.
Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.
Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.
Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528-2535. IEEE, 2010.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

A APPENDIX

A.1 ACCELERATED CONVERGENCE & REDUCED VARIANCE

See Figures 10, 11, 12, and 13.

Figure 10: Generator objective, $F$, averaged over 5 training runs on CelebA. Increasing $N$ (# of $D$) accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow ±1$\sigma$). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with $N = 5$ achieves steady-state at ~2x speed of GAN ($N = 1$). Note Figure 10's filled shadows reveal stdev of $F$ over runs, while this plot shows stdev over time.

Figure 12: Generator objective, $F$, averaged over 5 training runs on CIFAR-10. Increasing $N$ (# of $D$) accelerates convergence of $F$ to steady state (solid line) and reduces its variance, $\sigma^2$ (filled shadow ±1$\sigma$). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 13: Stdev, $\sigma$, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with $N = 5$ achieves steady-state at ~2x speed of GAN ($N = 1$). Note Figure 12's filled shadows reveal stdev of $F$ over runs, while this plot shows stdev over time.

A.2 ADDITIONAL GMAM TABLES

See Tables 2, 3, 4, 5, 6.
Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN both in terms of the GMAM metric and Inception scores.

A.3 GENERATED IMAGES

See Figures 14 and 15.

Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

| Score  | Variant  | GMAN* | GMAN-1 | GAN    | GMAN-0 | GMAN-max | mod-GAN |
|--------|----------|-------|--------|--------|--------|----------|---------|
| 0.184  | GMAN*    | -     | -0.007 | -0.040 | -0.020 | -0.028   | -0.089  |
| 0.067  | GMAN-1   | 0.007 | -      | -0.008 | -0.008 | -0.021   | -0.037  |
| 0.030  | GAN      | 0.040 | 0.008  | -      | -0.002 | -0.018   | -0.058  |
| -0.005 | GMAN-0   | 0.020 | 0.008  | 0.002  | -      | -0.013   | -0.018  |
| -0.091 | GMAN-max | 0.028 | 0.021  | 0.018  | 0.013  | -        | -0.011  |
| -0.213 | mod-GAN  | 0.089 | 0.037  | 0.058  | 0.018  | 0.011    | -       |

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

| Score  | Variant | GMAN-0 | GMAN-1 | GMAN*  | mod-GAN |
|--------|---------|--------|--------|--------|---------|
| 0.172  | GMAN-0  | -      | -0.022 | -0.062 | -0.088  |
| 0.050  | GMAN-1  | 0.022  | -      | 0.006  | -0.078  |
| -0.055 | GMAN*   | 0.062  | -0.006 | -      | -0.001  |
| -0.167 | mod-GAN | 0.088  | 0.078  | 0.001  | -       |

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators.

| GMAN-0      | GMAN-1      | mod-GAN     | GMAN*       |
|-------------|-------------|-------------|-------------|
| 5.878±0.193 | 5.765±0.168 | 5.738±0.176 | 5.539±0.099 |

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

| Score  | Variant | GMAN-0 | GMAN*  | GMAN-1 | mod-GAN |
|--------|---------|--------|--------|--------|---------|
| 0.180  | GMAN-0  | -      | -0.008 | -0.041 | -0.132  |
| 0.122  | GMAN*   | 0.008  | -      | -0.038 | -0.092  |
| 0.010  | GMAN-1  | 0.041  | 0.038  | -      | -0.089  |
| -0.313 | mod-GAN | 0.132  | 0.092  | 0.089  | -       |

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.

| GMAN-1      | GMAN-0      | GMAN*       | mod-GAN     |
|-------------|-------------|-------------|-------------|
| 6.001±0.194 | 5.957±0.135 | 5.955±0.153 | 5.738±0.176 |

Figure 14: Sample of pictures generated on CelebA cropped dataset.

Figure 15: Sample of pictures generated by GMAN-0 on CIFAR dataset.

A.4 SOMEWHAT RELATED WORK

A GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., $\mathcal{X} = \{X_1 = \text{Domain 1}, X_2 = \text{Domain 2}, \ldots\}$). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels.

In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al.
(2016), however, similar to the above, it is only admissible in a semi-supervised scenario, whereas ours applies to the unsupervised case.

A.5 SOFTMAX REPRESENTABILITY

Let $\mathrm{softmax}(V_i) = \hat{V} \in [\min_i V_i, \max_i V_i]$. Also let $a = \mathrm{argmin}_i V_i$, $b = \mathrm{argmax}_i V_i$, and $V(t) = V((1-t)D_a + tD_b, G)$ so that $V(0) = V_a$ and $V(1) = V_b$. The softmax and minimax objective $V(D_i, G)$ are both continuous in their inputs, so by the intermediate value theorem, we have that $\exists\,\hat{t} \in [0, 1]$ s.t. $V(\hat{t}) = \hat{V}$, which implies $\exists\,\hat{D} \in \mathcal{D}$ s.t. $V(\hat{D}, G) = \hat{V}$. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning $V(\hat{D}, G)$ for some $\hat{D}$ selected by computing another, unknown function over the space of the discriminators. This result holds even if $\hat{D}$ is not representable by the architecture chosen for $D$'s neural network.

A.6 UNCONSTRAINED OPTIMIZATION

To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable, $\hat{\lambda}$, define $\lambda(\hat{\lambda}) = \log(1 + e^{\hat{\lambda}})$, and let the generator minimize over $\hat{\lambda} \in \mathbb{R}$.

A.7 BOOSTING WITH AdaBoost.OL

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing ($P(\text{correct label}) = 0.5 + \gamma$, $\gamma \in (0, 0.5]$), and in fact, allows $\gamma < 0$. This is crucial because our weak learners are deep nets with unknown, possibly negative, $\gamma$'s.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

A.8 EXPERIMENTAL SETUP

All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for $G$ and strided convolutions for $D$ except for the input of $G$ and the last layer of $D$. We use the single-step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from $[0.3, 0.7]$. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

- Generator latent variables $z \sim U(-1, 1)^{100}$
- Generator convolution transpose layers: $(4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)$
- Base discriminator architecture: $(32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128)$. Variants have either convolution 3 $(4, 4, 128)$ removed or all the filter sizes divided by 2 or 4. That is, $(32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64)$ or $(32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32)$.
- ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
- Training was performed with Adam (Kingma & Ba (2014)) ($lr = 2 \times 10^{-4}$, $\beta_1 = 0.5$).
- MNIST was trained for 20 epochs with a minibatch of size 100.
- CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.
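As a small illustration of the squashed sigmoid from Section 5.2 (applied to the discriminator outputs listed above), the sketch below bounds $D(x)$ away from 0 and 1 so the logarithms in the minimax objective cannot saturate; the particular value of $\epsilon$ is an assumption, since the paper does not report the one used.

```python
import numpy as np

def squashed_sigmoid(z, eps=0.01):
    """Map real-valued logits into [eps, 1 - eps] (eps is an assumed value).

    Bounding D(x) away from 0 and 1 keeps log(D) and log(1 - D) finite,
    preventing the minimax objective from saturating on extreme logits.
    """
    return eps + (1.0 - 2.0 * eps) / (1.0 + np.exp(-z))

z = np.array([-50.0, 0.0, 50.0])            # extreme logits
d = squashed_sigmoid(z)
print(d)                                    # approx [0.01, 0.5, 0.99]
print(np.log(d), np.log(1.0 - d))           # all finite
```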
r1D00RZNl
Review
7: Good paper, accept
In this interesting paper the authors explore the idea of using an ensemble of multiple discriminators in generative adversarial network training. This comes with a number of benefits, mainly being able to use less powerful discriminators, which may provide better training signal to the generator early on in training when strong discriminators might overpower the generator.

My main comment is about the way the paper is presented. The caption of Figure 1 and Section 3.1 suggest using the best discriminator by taking the maximum over the performance of individual ensemble members. This does not appear to be the best thing to do, because we are just bound to get a training signal that is stricter than any of the individual members of the ensemble. Then the rest of the paper explores relaxing the maximum and considers various averaging techniques to obtain a 'soft-discriminator'. To me, this idea is far more appealing, and the results seem to support this, too. Skimming the paper, it seems as if the authors mainly advocated always using the strongest discriminator, evidenced by my premature pre-review question earlier.

Overall, I think this paper is a valuable contribution, and I think the idea of multiple discriminators is an interesting direction to pursue.
3: The reviewer is fairly confident that the evaluation is correct
Byk-VI9eg
ICLR.cc/2017/conference
2017
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
ABSTRACT

Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.

1 INTRODUCTION

Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, $z$, drawn from a simple distribution (e.g., $z \sim \mathcal{N}(0, 1)$) using a transformation function $G_\theta(z)$ with learned weights, $\theta$. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function $D_\omega(x)$ with learned weights, $\omega$.

The GAN framework is one of the more recent successes in a line of research on adversarial training in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired optimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of application domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014); Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.

Despite these successes, GANs are reputably difficult to train. While research is still underway to improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).

In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable.
In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.

Contributions. To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.

2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN

The original formulation of a GAN is a minimax game between a generator, $G_\theta(z): z \to x$, and a discriminator, $D_\omega(x): x \to [0, 1]$,

$\min_G \max_{D \in \mathcal{D}} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log(D(x))\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log(1 - D(G(z)))\big]$   (1)

where $p_{data}(x)$ is the true data distribution and $p_z(z)$ is a simple (usually fixed) distribution that is easy to draw samples from (e.g., $\mathcal{N}(0, 1)$). We differentiate between the function space of discriminators, $\mathcal{D}$, and elements of this space, $D$. Let $p_G(x)$ be the distribution induced by the generator, $G_\theta(z)$. We assume $D, G$ to be deep neural networks as is typically the case.

In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, $D^* = \mathrm{argmax}_D V(D, G)$, gradient descent on $p_G(x)$ will recover the desired globally optimal solution, $p_G(x) = p_{data}(x)$, so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, $\log(1 - D(G(z)))$, with $-\log(D(G(z)))$ to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, $D^*$, to reduce the minimax game to a minimization over $G$ only:

$\min_G V(D^*, G) = \min_G \big\{ C(G) = -\log(4) + 2 \cdot \mathrm{JSD}(p_{data} \,\|\, p_G) \big\}$   (2)

where $\mathrm{JSD}$ denotes the Jensen-Shannon divergence. Minimizing $C(G)$ necessarily minimizes $\mathrm{JSD}$, however, we rarely know $D^*$ and so we instead minimize $V(D, G)$, which is only a lower bound.

This perspective of minimizing the distance between the distributions, $p_{data}$ and $p_G$, motivated Li et al. (2015) to develop a generative model that matches all moments of $p_G(x)$ with $p_{data}(x)$ (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman divergences, respectively.

In general, these approaches focus on exploring fundamental reformulations of $V(D, G)$. Similarly, our work focuses on a fundamental reformulation, however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of $V$.

2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION

We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating $D$ (better approximating $\max_D V(D, G)$) and 2) a $D$ better matched to the generator's capabilities.
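For concreteness, Eq. (1) is typically estimated by Monte Carlo over samples. The sketch below does this with closed-form 1-D stand-ins for $D$ and $G$; both stand-ins and the data distribution are invented for illustration and are not the networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-ins for the two players (not learned networks).
D = lambda x: 1.0 / (1.0 + np.exp(-x))     # D(x) in (0, 1)
G = lambda z: 2.0 * z + 1.0                # generator pushforward of z

x_real = rng.normal(1.0, 1.0, size=10_000) # samples standing in for p_data
z = rng.normal(0.0, 1.0, size=10_000)      # z ~ N(0, 1)

# Monte Carlo estimate of V(D, G) from Eq. (1)
V = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
print(V)
```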
Mathematically, we reformulate $G$'s objective as $\min_G \max F(V(D_1, G), \ldots, V(D_N, G))$ for different choices of $F$ (see Figure 1). Each $D_i$ is still expected to independently maximize its own $V(D_i, G)$ (i.e., no cooperation). We sometimes abbreviate $V(D_i, G)$ with $V_i$ and $F(V_1, \ldots, V_N)$ with $F_G(V_i)$.

3 A FORMIDABLE ADVERSARY

Here, we consider multi-discriminator variants that attempt to better approximate $\max_D V(D, G)$, providing a harsher critic to the generator.

Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators ($D_1, \ldots, D_N$, each producing $V(D_i, G)$, combined by $F(\cdot)$). If $F := \max$, $G$ trains against the best discriminator. If $F := \mathrm{mean}$, $G$ trains against an ensemble. We explore other alternatives to $F$ in Sections 4.1 & 4.4 that improve on both these options.

3.1 MAXIMIZING V(D, G)

For a fixed $G$, maximizing $F_G(V_i)$ with $F := \max$ and $N$ randomly instantiated copies of our discriminator is functionally equivalent to optimizing $V$ (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting $\max_{i \in \{1, \ldots, N\}} V(D_i, G)$ as the loss to the generator, a very pragmatic approach to the difficulties presented by the non-convexity of $V$ caused by the deep net (a toy simulation of this random-restart view appears in the sketch below). Requiring the generator to minimize the max forces $G$ to generate high-fidelity samples that must hold up under the scrutiny of all $N$ discriminators, each potentially representing a distinct max.

In practice, $\max_{D_i \in \mathcal{D}} V(D_i, G)$ is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing $N$ discriminators affects the dynamics of the game which affects the trajectories of the discriminators. This prevents us from claiming $\max\{V_1(t), \ldots, V_N(t)\} > \max\{V'_1(t)\}\ \forall t$ even if we initialize $D_1(0) = D'_1(0)$, as it is unlikely that $D_1(t) = D'_1(t)$ at some time $t$ after the start of the game.

3.2 BOOSTING

We can also consider taking the max over $N$ discriminators as a form of boosting for the discriminator's online classification problem (online because $G$ can produce an infinite data stream). The boosted discriminator is given a sample $x_t$ and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the $N$ weaker $D_i$.

There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e., a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.

It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting $\max\{V_i\}$. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.

4 A FORGIVING TEACHER

The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of $\max_D V(D, G)$ to the generator. Our next perspective asks the question, "Is $\max_D V(D, G)$ too harsh a critic?"

4.1 SOFT-DISCRIMINATOR

In practice, training against a far superior discriminator can impede the generator's learning.
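Returning to the random-restart reading of Section 3.1, the following toy simulation ascends a deliberately nonconvex function from $N$ random initializations and reports the max over the restarts, mimicking how $F := \max$ surfaces the strongest of $N$ independently trained discriminators; the objective $V$, its gradient, and all settings are invented for illustration and are not the paper's discriminator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonconvex stand-in for theta -> V(D_theta, G) at a fixed G (hypothetical).
V = lambda th: np.sin(3 * th) * np.exp(-0.1 * th**2)
dV = lambda th: 3 * np.cos(3 * th) * np.exp(-0.1 * th**2) - 0.2 * th * V(th)

def ascend(th, steps=500, lr=0.01):
    # Plain gradient ascent; converges to a local maximum of V.
    for _ in range(steps):
        th += lr * dV(th)
    return th

# N random restarts; the generator is shown the max over the N local optima,
# so larger N tends to surface a better (more "formidable") critic.
for N in (1, 2, 5):
    finals = [V(ascend(rng.uniform(-3, 3))) for _ in range(N)]
    print(N, max(finals))
```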
Thisis because the generator is unlikely to generate any samples considered “realistic” by the discrimi-nator’s standards, and so the generator will receive uniformly negative feedback. This is problem-3Published as a conference paper at ICLR 2017atic because the information contained in the gradient derived from negative feedback only dictateswhere to drive down pG(x), not specifically where to increase pG(x). Furthermore, driving downpG(x)necessarily increases pG(x)in other regions of X(to maintainRXpG(x) = 1 ) which may ormay not contain samples from the true dataset ( whack-a-mole dilemma). In contrast, a generator ismore likely to see positive feedback against a more lenient discriminator, which may better guide agenerator towards amassing pG(x)in approximately correct regions of X.For this reason, we explore a variety of functions that allow us to soften themax operator. Wechoose to focus on soft versions of the three classical Pythagorean means parameterized by where= 0corresponds to the mean and the max is recovered as !1 :AMsoft(V;) =NXiwiVi (3)GMsoft(V;) =expNXiwilog(Vi)(4)HMsoft(V;) =NXiwiV1i1(5)wherewi=eVi=jeVjwith0;Vi<0. Using a softmax also has the well known advantage ofbeing differentiable (as opposed to subdifferentiable for max). Note that we only require continuityto guarantee that computing the softmax is actually equivalent to computing V(~D;G )where ~Dissome convex combination of Di(see Appendix A.5).4.2 U SING THE ORIGINAL MINIMAX OBJECTIVETo illustrate the effect the softmax has on training, observe that the component of AMsoft(V;0)relevant to generator training can be rewritten as1NNXiExpG(x)hlog(1Di(x))i=1NExpG(x)hlog(z)i: (6)wherez=QNi(1Di(x)). Note that the generator gradient, j@log(z)@zj, is minimized at z= 1overz2(0;1]1. From this form, it is clear that z= 1 if and only if Di= 08i, soGonly receives avanishing gradient if all Diagree that the sample is fake; this is especially unlikely for large N. Inother words, Gonly needs to fool a single Dito receive constructive feedback. This result allows thegenerator to successfully minimize the original generator objective, log(1D). This is in contrastto the more popular log(D)introduced to artificially enhance gradients at the start of training.At the beginning of training, when maxDiV(Di;G)is likely too harsh a critic for the generator, wecan setcloser to zero to use the mean, increasing the odds of providing constructive feedback tothe generator. In addition, the discriminators have the added benefit of functioning as an ensemble,reducing the variance of the feedback presented to the generator, which is especially importantwhen the discriminators are far from optimal and are still learning a reasonable decision boundary.As training progresses and the discriminators improve, we can increase to become more criticalof the generator for more refined training.4.3 M AINTAINING MULTIPLE HYPOTHESESWe argue for this ensemble approach on a more fundamental level as well. Here, we draw onthe density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proofassumes we have access to pdata(x), if only implicitly. In most cases of interest, the discriminatoronly has access to a finite dataset sampled from pdata(x); therefore, when computing expectationsofV(D;G ), we only draw samples from our finite dataset. This is equivalent to training a GANwithpdata(x) = ~pdatawhich is a distribution consisting of point masses on all the data points in thedataset. 
For the sake of argument, let’s assume we are training a discriminator and generator, each1rGV=PiDiz@Di@GQj6=i(1Dj) =1z@Dk@GforDk= 1;D6=k= 0. Our argument ignores@Dk@G.4Published as a conference paper at ICLR 2017with infinite capacity. In this case, the global optimum ( pG(x) = ~pdata(x)) fails to capture any ofthe interesting structure from pdata(x), the true distribution we are trying to learn. Therefore, it isactually critical that we avoid this global optimum.x p(x) Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corre-sponding probability mass function is given in light gray. After training GMAN, three discrimina-tors converge to distinct local optima which implicitly define distributions over the data (red, blue,yellow). Each discriminator may specialize in discriminating a region of the data space (placingmore diffuse mass in other regions). Averaging over the three discriminators results in the distribu-tion in black, which we expect has higher likelihood under reasonable assumptions on the structureof the true distribution.In practice, this degenerate result is avoided by employing learners with limited capacity and corrupt-ing data samples with noise (i.e., dropout), but we might better accomplish this by simultaneouslytraining a variety of limited capacity discriminators. With this approach, we might obtain a diverseset of seemingly tenable hypotheses for the true pdata(x). Averaging over these multiple locallyoptimal discriminators increases the entropy of ~pdata(x)by diffusing the probability mass over thedata space (see Figure 2 for an example).4.4 A UTOMATING REGULATIONThe problem of keeping the discriminator and generator in balance has been widely recognized inprevious work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator col-lapse are not uncommon. In addition, the discriminator is often times able to achieve a high degree ofclassification accuracy (producing a single scalar) before the generator has made sufficient progresson the arguably more difficult generative task (producing a high dimensional sample). Salimanset al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relativelysuperior discriminator. Here, we explore an approach that enables the generator to automaticallytemper the performance of the discriminator when necessary, but still encourages the generator tochallenge itself against more accurate adversaries. Specifically, we augment the generator objective:minG;> 0FG(Vi)f() (7)wheref()is monotonically increasing in which appears in the softmax equations, (3)—(5). Inexperiments, we simply set f() =cwithca constant (e.g., 0.001). The generator is incentivizedto increaseto reduce its objective at the expense of competing against the best available adversaryD(see Appendix A.6).5 E VALUATIONEvaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) reportlog likelihood estimates from Gaussian Parzen windows, which they admit, has high variance andis known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzenwindows and argue that generative models should be evaluated with respect to their intended appli-cation. Salimans et al. (2016) suggest an Inception score , however, it assumes labels exist for thedataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for mak-ing pairwise comparisons between independently trained GAN models. 
The core idea behind theirapproach is given two generator, discriminator pairs ( G1;D1) and (G2;D2), we should be able tolearn their relative performance by judging each generator under the opponent’s discriminator.5Published as a conference paper at ICLR 20175.1 M ETRICIn GMAN, the opponent may have multiple discriminators, which makes it unclear how to performthe swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric(GMAM), that is amenable to training with multiple discriminators,GMAM = logFaGb(Vai)FaGa(Vai).FbGa(Vbi)FbGb(Vbi): (8)whereaandbrefer to the two GMAN variants (see Section 3 for notation FG(Vi)). The idea here issimilar. IfG2performs better than G1with respect to both D1andD2, then GMAM >0 (rememberV0always). IfG1performs better in both cases, GMAM <0, otherwise, the result is indeterminate.5.2 E XPERIMENTSWe evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST(LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus onrates of convergence to steady state along with quality of the steady state generator according to theGMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compareF-boost: A single AdaBoost.OL -boosted discriminator (see Appendix A.7).P-boost:Diis trained according to AdaBoost.OL . Amax over the weak learner losses ispresented to the generator instead of the boosted prediction (see Appendix A.7).GMAN- max:maxfVigis presented to the generator.GAN: Standard GAN with a single discriminator (see Appendix A.2).mod-GAN: GAN with modified objective (generator minimizes log(D(G(z))).GMAN-: GMAN with F:=arithmetic softmax with parameter .GMAN: The arithmetic softmax is controlled by the generator through .All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)),and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batchnormalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of theirnetworks to probabilities with squashed -sigmoids to prevent saturating logarithms in the minimaxobjective (+121+ez). See Appendix A.8 for further details. We test GMAN systems with N=f2;5gdiscriminators. We maintain discriminator diversity by varying dropout and network depth.5.2.1 MNISTFigure 3 reveals that increasing the number of discriminators reduces the number of iterations tosteady-state by 2x on MNIST; increasing N(the size of the discriminator ensemble) also has theadded benefit of reducing the variance the minimax objective over runs. Figure 4 displays the vari-ance of the same objective over a sliding time window, reaffirming GMAN’s acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately anepoch before the single discriminator run; digits at steady-state appear slightly sharper as well.Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5 with GMANachieving the best overall performance. Figure 6 reveals GMAN’s attempt to regulate the difficultyScore Variant GMANGMAN-0 GMAN- max mod-GANBetter!0:127 GMAN-0:0200:0090:0280:0190:0890:0360:007 GMAN-0 0:0200:009 -0:0130:0150:0180:0270:034 GMAN- max 0:0280:019 0:0130:015 -0:0110:0240:122 mod-GAN 0:0890:036 0:0180:027 0:0110:024 -Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. 
For each column, apositive GMAM indicates better performance relative to the row opponent; negative implies worse.Scores are obtained by summing each variant’s column.6Published as a conference paper at ICLR 2017Figure 3: Generator objective, F, averagedover 5 training runs on MNIST. Increas-ing the number of discriminators acceleratesconvergence of Fto steady state (solid line)and reduces its variance, 2(filled shadow1). Figure 4 provides alternative evidenceof GMAN’s accelerated convergence.Figure 4: Stdev ,, of the generator objec-tive over a sliding window of 500 iterations.Lower values indicate a more steady-state.GMANwithN= 5 achieves steady-stateat2x speed of GAN ( N= 1). Note Fig-ure 3’s filled shadows reveal stdev ofFoverruns, while this plot shows stdev over time.Figure 5: Comparison of image quality across epochs for N=f1;2;5gusing GMAN-0 on MNIST.of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed ’s to thevariablecontrolled by GMAN.Figure 6: GMANregulates difficulty of thegame by adjusting . Initially,Greducestoease learning and then gradually increases for a more challenging learning environment.Score= 1= 0(N= 5)Better!0:028 -0:0080:0090:0190:0100:001= 10:0080:009-0:0080:0100:025= 00:0190:0100:0080:010-Figure 7: PairwiseGMAMstdev (GMAM)for GMAN-andGMAN() over 5 runs on MNIST.7Published as a conference paper at ICLR 20175.2.2 C ELEB A & CIFAR-10We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.Figure 8: Image quality improvement across number of generators at same number of iterations forGMAN-0 on CelebA.Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.We also found that GMAN is robust to mode collapse . We believe this is because the generatormust appease a diverse set of discriminators in each minibatch. Emitting a single sample will scorewell for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g.,minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.6 C ONCLUSIONWe introduced multiple discriminators into the GAN framework and explored discriminator rolesranging from a formidable adversary to a forgiving teacher. Allowing the generator to automaticallytune its learning schedule (GMAN) outperformed GANs with a single discriminator on MNIST. Ingeneral, GMAN variants achieved faster convergence to a higher quality steady state on a variety oftasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the originalGAN objective possible by increasing the odds of the generator receiving constructive feedback.In future work, we will look at more sophisticated mechanisms for letting the generator controlthe game as well as other ways to ensure diversity among the discriminators. Introducing multiplegenerators is conceptually an obvious next step, however, we expect difficulties to arise from morecomplex game dynamics. For this reason, game theory and game design will likely be important.ACKNOWLEDGMENTSWe acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel,Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40GPU. This material is based upon work supported by the National Science Foundation under GrantNos. IIS-1564032. 
Any opinions, findings, and conclusions or recommendations expressed in thismaterial are those of the authors and do not necessarily reflect the views of the NSF.8Published as a conference paper at ICLR 2017BIBLIOGRAPHYMartın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg SCorrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machinelearning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 , 2016.Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc ̧ois Laviolette, and Mario Marchand.Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446 , 2014.J Andrew Bagnell. Robust supervised learning. In Proceedings Of The National Conference OnArtificial Intelligence , volume 20, pp. 714. Menlo Park, CA; Cambridge, MA; London; AAAIPress; MIT Press; 1999, 2005.Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for onlineboosting. arXiv preprint arXiv:1502.02651 , 2015.Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets.arXiv preprint arXiv:1606.03657 , 2016.Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using alaplacian pyramid of adversarial networks. In Advances in neural information processing systems ,pp. 1486–1494, 2015.Jeff Donahue, Philipp Kr ̈ahenb ̈uhl, and Trevor Darrell. Adversarial feature learning. arXiv preprintarXiv:1605.09782 , 2016.Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi-etro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704 ,2016.Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprintarXiv:1511.05897 , 2015.Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. ClassProject for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Wintersemester , 2014, 2014.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprintarXiv:1606.03476 , 2016.Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating imageswith recurrent adversarial networks. arXiv preprint arXiv:1602.05110 , 2016.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training byreducing internal covariate shift. arXiv preprint arXiv:1502.03167 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky. Learning multiple layers of features from tiny images. Master’s Thesis , 2009.Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits,1998.Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In Interna-tional Conference on Machine Learning , pp. 1718–1727, 2015.Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild.InProceedings of International Conference on Computer Vision (ICCV) , December 2015.9Published as a conference paper at ICLR 2017Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. 
Adversarial autoencoders.arXiv preprint arXiv:1511.05644 , 2015.Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprintarXiv:1411.1784 , 2014.Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplersusing variational divergence minimization. arXiv preprint arXiv:1606.00709 , 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos.Enabling dark energy science with deep generative models of galaxy images. arXiv preprintarXiv:1609.05796 , 2016.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. arXiv preprint arXiv:1606.03498 , 2016.J ̈urgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation ,4(6):863–879, 1992.Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generativeadversarial networks. arXiv preprint arXiv:1511.06390 , 2015.Lucas Theis, A ̈aron van den Oord, and Matthias Bethge. A note on the evaluation of generativemodels. arXiv preprint arXiv:1511.01844v3 , 2016.Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generativeadversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920 ,2016.Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domaintransfer. arXiv preprint arXiv:1603.07442 , 2016.Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks.InComputer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on , pp. 2528–2535.IEEE, 2010.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.arXiv preprint arXiv:1609.03126 , 2016.10Published as a conference paper at ICLR 2017A A PPENDIXA.1 A CCELERATED CONVERGENCE & R EDUCED VARIANCESee Figures 10, 11, 12, and 13.Figure 10: Generator objective, F, averagedover 5 training runs on CelebA. IncreasingN(# ofD) accelerates convergence of Ftosteady state (solid line) and reduces its vari-ance,2(filled shadow1). Figure 11 pro-vides alternative evidence of GMAN-0’s ac-celerated convergence.Figure 11: Stdev ,, of the generator objec-tive over a sliding window of 500 iterations.Lower values indicate a more steady-state.GMAN-0 with N= 5 achieves steady-stateat2x speed of GAN ( N= 1). Note Fig-ure 10’s filled shadows reveal stdev ofFoverruns, while this plot shows stdev over time.Figure 12: Generator objective, F, averagedover 5 training runs on CIFAR-10. Increas-ingN(# ofD) accelerates convergence ofFto steady state (solid line) and reduces itsvariance,2(filled shadow1). Figure 13provides alternative evidence of GMAN-0’saccelerated convergence.Figure 13: Stdev ,, of the generator objec-tive over a sliding window of 500 iterations.Lower values indicate a more steady-state.GMAN-0 with N= 5 achieves steady-stateat2x speed of GAN ( N= 1). Note Fig-ure 12’s filled shadows reveal stdev ofFoverruns, while this plot shows stdev over time.A.2 A DDITIONAL GMAM T ABLESSee Tables 2, 3, 4, 5, 6. 
Increasing the number of discriminators from 2 to 5 on CIFAR-10 signif-icantly improves scores over the standard GAN both in terms of the GMAM metric and Inceptionscores.A.3 G ENERATED IMAGESSee Figures 14 and 15.11Published as a conference paper at ICLR 2017Score Variant GMANGMAN-1 GAN GMAN-0 GMAN- max mod-GANBetter!0:184 GMAN-0:0070:0400:0200:0280:0890:067 GMAN-1 0:007 -0:0080:0080:0210:0370:030 GAN 0:040 0:008 - 0:0020:0180:0580:005 GMAN-0 0:020 0:008 0:002 -0:0130:0180:091 GMAN- max 0:028 0:021 0:018 0:013 -0:0110:213 mod-GAN 0:089 0:037 0:058 0:018 0:011 -Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positiveGMAM indicates better performance relative to the row opponent; negative implies worse. Scoresare obtained by summing each column.Score Variant GMAN-0 GMAN-1 GMANmod-GANBetter!0:172 GMAN-0 -0:0220:0620:0880:050 GMAN-1 0:022 - 0:0060:0780:055 GMAN0:0620:006 -0:0010:167 mod-GAN 0:088 0:078 0:001 -Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positiveGMAM indicates better performance relative to the row opponent; negative implies worse. Scoresare obtained by summing each column. GMAN variants were trained with twodiscriminators.GMAN-0 GMAN-1 mod-GAN GMANScore 5:8780:193 5:7650:168 5:7380:176 5:5390:099Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higherscores are better. GMAN variants were trained with twodiscriminators.Score Variant GMAN-0 GMANGMAN-1 mod-GANBetter!0:180 GMAN-0 -0:0080:0410:1320:122 GMAN0:008 -0:0380:0920:010 GMAN-1 0:041 0:038 -0:0890:313 mod-GAN 0:132 0:092 0:089 -Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positiveGMAM indicates better performance relative to the row opponent; negative implies worse. Scoresare obtained by summing each column. GMAN variants were trained with fivediscriminators.GMAN-1 GMAN-0 GMANmod-GANScore 6:0010:194 5:9570:135 5:9550:153 5:7380:176Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higherscores are better. GMAN variants were trained with fivediscriminators.Figure 14: Sample of pictures generated on CelebA cropped dataset.12Published as a conference paper at ICLR 2017Figure 15: Sample of pictures generated by GMAN-0 on CIFAR dataset.A.4 S OMEWHAT RELATED WORKA GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applica-ble only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g.,X=fX1=Domain 1;X2=Domain 2;:::g). In contrast, our framework applies to an unsu-pervised scenario where an obvious partition of the dataset is unknown. Furthermore, extendingGMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminatorsper domain, therefore our line of research is strictly orthogonal to that of their multi-domain dis-criminator approach. Also, note that assigning a discriminator to each domain is akin to prescribinga new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero(2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and nota discriminator for each of the possibly exponentially many conditional labels.In Section 4.4, we describe an approach to customize adversarial training to better suit the devel-opment of the generator. An approach with similar conceptual underpinnings was described inRavanbakhsh et al. 
A.5 SOFTMAX REPRESENTABILITY

Let $\mathrm{softmax}(V_i) = \hat{V} \in [\min_i V_i, \max_i V_i]$. Also let $a = \arg\min_i V_i$, $b = \arg\max_i V_i$, and $V(t) = V((1-t)D_a + tD_b)$, so that $V(0) = V_a$ and $V(1) = V_b$. The softmax and the minimax objective $V(D_i, G)$ are both continuous in their inputs, so by the intermediate value theorem, there exists $\hat{t} \in [0,1]$ such that $V(\hat{t}) = \hat{V}$, which implies there exists $\hat{D} \in \mathcal{D}$ such that $V(\hat{D}, G) = \hat{V}$. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning $V(\hat{D}, G)$ for some $\hat{D}$ selected by computing another, unknown function over the space of the discriminators. This result holds even if $\hat{D}$ is not representable by the architecture chosen for $D$'s neural network.

A.6 UNCONSTRAINED OPTIMIZATION

To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable $\lambda$, define $\Lambda(\lambda) = \log(1 + e^{\lambda})$, and let the generator minimize over $\lambda \in \mathbb{R}$.

A.7 BOOSTING WITH AdaBoost.OL

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing ($P(\text{correct label}) = 0.5 + \gamma \in (0, 0.5]$), and in fact allows $\gamma < 0$. This is crucial because our weak learners are deep nets with unknown, possibly negative, $\gamma$'s.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

A.8 EXPERIMENTAL SETUP

All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D, except for the input of G and the last layer of D. We use the single-step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from [0.3, 0.7]. Variations in the discriminators were effected in two ways. First, we varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4, and so on), as well as by varying the dropout rates. Second, we decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in Tensorflow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

- Generator latent variables z ~ U(-1, 1)^100
- Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)
- Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). Variants have either convolution 3 (4, 4, 128) removed, or all the filter sizes divided by 2 or 4; that is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
- ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
- Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 x 10^-4, beta1 = 0.5).
- MNIST was trained for 20 epochs with a minibatch of size 100.
- CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.
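For concreteness, the softmax-style aggregation of the N discriminator values discussed in A.5–A.6 can be sketched in a few lines of numpy (our own illustration, not part of the released GMAN code; the function name and parameter lam are ours). With lam = 0 it reduces to the arithmetic mean (as in GMAN-0), for large lam it approaches the max (as in GMAN-max), and the returned value always lies in [min_i V_i, max_i V_i], exactly the range used in the representability argument of A.5:

```python
import numpy as np

def softmax_weighted_objective(V, lam=1.0):
    """Aggregate N discriminator values V_i with softmax weights.

    lam = 0 -> uniform mean; lam -> inf -> max_i V_i, so the result
    always lies in [min(V), max(V)].
    """
    V = np.asarray(V, dtype=float)
    w = np.exp(lam * (V - V.max()))  # subtract max for numerical stability
    w /= w.sum()
    return float(w @ V)
```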
B1Ob_V4Ne
Interesting ideas, needs more empirical results.
7: Good paper, accept
The paper extends the GAN framework to accommodate multiple discriminators. The authors motivate this from two points of view: (1) Having multiple discriminators tackle the task is equivalent to optimizing the value function using random restarts, which can potentially help optimization given the nonconvexity of the value function. (2) Having multiple discriminators can help overcome the optimization problems arising when a discriminator is too harsh a critic. A generator receiving signal from multiple discriminators is less likely to be receiving poor gradient signal from all discriminators. The paper's main idea looks straightforward to implement in practice and makes for a good addition to the GAN training toolbelt. I am not very convinced by the GAM (and by extension the GMAM) evaluation metric. Without evidence that the GAN game is converging (even approximately), it is hard to make the case that the discriminators tell something meaningful about the generators with respect to the data distribution. In particular, it does not inform on mode coverage or probability mass misallocation. The learning curves (Figure 3) look more convincing to me: they provide good evidence that increasing the number of discriminators has a stabilizing effect on the learning dynamics. However, it seems like this figure along with Figure 4 also show that the unmodified generator objective is more stable even with only one discriminator. In that case, is it even necessary to have more than one discriminator to train the generator using an unmodified objective? Overall, I think the ideas presented in this paper show good potential, but I would like to see an extended analysis in the line of Figures 3 and 4 for more datasets before I think it is ready for publication. UPDATE: The rating has been revised to a 7 following discussion with the authors.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyecJGP5ge
ICLR.cc/2017/conference
2017
NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD
["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"]
In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.
["Unsupervised Learning", "Computer vision", "Transfer Learning", "Optimization", "Applications"]
ABSTRACT

In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of a model's architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the "neuronal birth" is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. "Neuronal death" is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden-unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (non-adaptive) online sparse coding of Mairal et al. (2009) in the presence of non-stationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.

1 INTRODUCTION

The ability to adapt to a changing environment is essential for successful functioning in both natural and artificial intelligent systems. In human brains, adaptation is achieved via neuroplasticity, which takes different forms, including synaptic plasticity, i.e. changing connectivity strength among neurons, and neurogenesis, i.e. the birth and maturation of new neurons (accompanied by the death of some new or old neurons). Particularly, adult neurogenesis (Kempermann, 2006) (i.e., neurogenesis in the adult brain) in the dentate gyrus of the hippocampus is associated with improved cognitive functions such as pattern separation (Sahay et al., 2011), and is often implicated as a "candidate mechanism for the specific dynamic and flexible aspects of learning" (Stuchlik, 2014).

In the machine-learning context, synaptic plasticity is analogous to parameter tuning (e.g., learning neural net weights), while neurogenesis can be viewed as online model selection via the addition (and deletion) of hidden units in specific hidden-variable models used for representation learning (where hidden variables represent extracted features), from linear and nonlinear component analysis methods such as PCA, ICA, sparse coding (dictionary learning), and nonlinear autoencoders, to deep neural nets and general hidden-factor probabilistic models. However, optimal model selection in large-scale hidden-variable models (e.g., adjusting the number of layers, hidden units, and their connectivity) is intractable due to the enormous search-space size. Growing a model gradually can be a more feasible alternative; after all, every real brain's "architecture" development process starts with a single cell.
Furthermore, the process of adapting the model's architecture to dynamically changing environments is necessary for achieving lifelong, continual learning. Finally, an online approach to dynamically expanding and contracting a model's architecture can serve as a potentially more effective alternative to standard off-line model selection (e.g., MDL-based off-line sparse coding (Ramirez & Sapiro, 2012)), as well as to the currently popular network compression (distillation) approaches (Hinton et al., 2015; Srivastava et al., 2014; Ba & Caruana, 2014; Bucilu et al., 2006), where a very large-scale architecture, such as a deep neural network with millions of parameters, must first be selected in ad-hoc ways and trained on large amounts of data, only to be compressed later into a more compact and simpler model with similarly good performance; we hypothesize that adaptive growth and reduction of the network architecture is a viable alternative to the distillation approach, although developing such an alternative remains a topic of further research.

In this paper, we focus on dictionary learning, a.k.a. sparse coding (Olshausen & Field, 1997; Kreutz-Delgado et al., 2003; Aharon et al., 2006; Lee et al., 2006) – a representation learning approach which finds a set of basis vectors (atoms, or dictionary elements) and representations (encodings) of the input samples as sparse linear combinations of those elements¹. More specifically, our approach builds upon the computationally efficient online dictionary-learning method of Mairal et al. (2009), where the data samples are processed sequentially, one at a time (or in small batches). Online approaches are particularly important in large-scale applications with millions of potential training samples, where off-line learning can be infeasible; furthermore, online approaches are a natural choice for building systems capable of continual, lifelong learning.

Herein, we propose a novel online dictionary learning approach inspired by adult neurogenesis, which extends the state-of-art method of Mairal et al. (2009) to nonstationary environments by incorporating online model adaption, i.e. the addition and deletion of dictionary elements (i.e., hidden units) in response to the dynamically changing properties of the input data². More specifically, at each iteration of online learning (i.e., for every batch of data samples), we add a group of random dictionary elements (modeling neuronal birth), where the group size depends on the current representation error, i.e. the mismatch between the new input samples and their approximation based on the current dictionary: higher error triggers more neurogenesis. The neuronal death, which involves removing "useless" dictionary elements, is implemented as an l1/l2 group-sparsity regularization; this step is essential in neurogenesis-inspired learning, since it reduces a potentially uncontrolled growth of the dictionary and helps to avoid overfitting (note that neuronal death is also a natural part of the adult neurogenesis process, where neuronal survival depends on multiple factors, including the complexity of the learning environment (Kempermann, 2006)). Moreover, we introduce sparsity in dictionary elements, which reflects sparse connectivity between hidden units/neurons and their inputs; this is a more biologically plausible assumption than the fully-connected architecture of standard dictionary learning, and it also works better in our experiments.
Thus, adaptation in our model involves not only the addition/deletion of the elements, but adapting their connectivity as well.

We demonstrate on both simulated data and on two real-life datasets (natural images and language processing) that, in the presence of non-stationary input, our approach can significantly outperform the non-adaptive, fixed-dictionary-size online method of Mairal et al. (2009). Moreover, we identify certain data properties and parameter settings associated with such improvements. Finally, we demonstrate that the novel approach not only improves the representation accuracy, but also can boost the classification accuracy based on the extracted features.

Note that, although the group-sparsity constraint enforcing deletion of some dictionary elements was introduced earlier in the group-sparse coding method of Bengio et al. (2009), it was only implemented and tested in the off-line rather than the online setting, and, most importantly, it was not accompanied by neurogenesis. On the other hand, while some prior work considered online node addition in hidden-variable models, and specifically in neural networks, from cascade correlations (Fahlman & Lebiere, 1989) to the recent work by Draelos et al. (2016a;b), no model pruning was incorporated in those approaches in order to balance the model expansion.

¹ Note that the corresponding neural-network interpretation of the sparse coding framework is a (single-hidden-layer) linear autoencoder with sparsity constraints: the hidden units are associated with dictionary elements, each element represented by a weight vector associated with the unit's outgoing links in the output layer, and the sparse vector of hidden-unit activations corresponding to the encoding of an input.
² An early version of our neurogenetic online dictionary learning approach was presented as a poster at the 2011 Society for Neuroscience meeting (Rish et al., 2011), although it did not appear before as a peer-reviewed publication.
Overall, we are not aware of any prior work which would propose and systematically evaluate, empirically and theoretically, a dynamic process involving both addition and deletion of hidden units in the online model-selection setting, either in sparse coding or in a neural-network setting.

To summarize, the main contributions of this paper are as follows:

- we propose a novel online model-selection approach to dictionary learning³, inspired by the adult neurogenesis phenomenon; our method significantly outperforms the state-of-art baseline, especially in non-stationary settings;
- we perform an extensive empirical evaluation, on both synthetic and real data, in order to identify the conditions when the proposed adaptive approach is most beneficial, both for data reconstruction and for classification based on extracted features; we conclude that these conditions include a combination of sparse dictionary elements (and thus a more biologically plausible sparse network connectivity, as opposed to fully connected units), accompanied by sufficiently dense codes;
- furthermore, we provide an intuitive discussion, as well as a theoretical analysis, of certain combinations of the input data properties and the algorithm's parameters for which the proposed approach is most beneficial;
- from the neuroscientific perspective, we propose a computational model which supports earlier empirical observations indicating that adult neurogenesis is particularly beneficial in changing environments, and that a certain amount of neuronal death, which accompanies the neuronal birth, is an important component of an efficient neurogenesis process;
- overall, to the best of our knowledge, we are the first to perform an in-depth evaluation of the interplay between the birth and death of hidden units in the context of online model selection in representation learning, and, more specifically, in online dictionary learning.

This paper is organized as follows. In Sec. 2, we summarize the state-of-art non-adaptive (fixed-size) online dictionary learning method of Mairal et al. (2009). Thereafter, in Sec. 3, we describe our adaptive online dictionary learning algorithm. In Sec. 4, we present our empirical results on both synthetic and real datasets, including images and language data. Next, in Sec. 5, we provide some theoretical, as well as intuitive, analysis of the settings which can benefit most from our approach. Finally, we conclude with a summary of our contributions in Sec. 6. The implementation details of the algorithms and additional experimental results are described in the Appendix.

2 BACKGROUND ON DICTIONARY LEARNING

Traditional off-line dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Lee et al., 2006) aims at finding a dictionary D ∈ R^{m×k} which allows for an accurate representation of a training data set X = {x_1, ..., x_n ∈ R^m}, where each sample x_i is approximated by a linear combination x_i ≈ Dα_i of the columns of D, called dictionary elements {d_1, ..., d_k ∈ R^m}. Here α_i is the encoding (code vector, or simply code) of x_i in the dictionary. Dictionary learning is also referred to as sparse coding, since it is assumed that the code vectors are sparse, i.e. have a relatively small number of nonzeros; the problem is formulated as minimizing the objective

f_n(D) = \frac{1}{n} \sum_{i=1}^{n} \Big( \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \lambda_c \|\alpha_i\|_1 \Big)    (1)

where the first term is the mean-square-error loss incurred by approximating the input samples by their representations in the dictionary, and the second term is the l1-regularization which enforces the codes to be sparse.
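For a single sample, the coding subproblem in eq. (1) is a LASSO problem; the following minimal numpy sketch solves it with ISTA. The solver choice and all names here are our own illustration – the paper only specifies the objective, and (as described in Sec. 3) tunes λ_c by binary search to reach a target number of non-zeros.

```python
import numpy as np

def sparse_code_ista(x, D, lam, n_iters=200):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 by ISTA (a sketch)."""
    a = np.zeros(D.shape[1])
    # Step size = 1/L, where L bounds the Lipschitz constant of the gradient.
    L = np.linalg.norm(D, ord=2) ** 2 + 1e-12
    for _ in range(n_iters):
        z = a - D.T @ (D @ a - x) / L                           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return a
```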
The joint minimization of f_n(D) with respect to the dictionary and codes is non-convex; thus, a common approach is alternating minimization, involving convex subproblems of finding optimal codes while fixing the dictionary, and vice versa³.

³ The Matlab code is available at https://github.com/sgarg87/neurogenesis_inspired_dictionary_learning.

However, classical dictionary learning does not scale to very large datasets; moreover, it is not immediately applicable to online learning from a continuous stream of data. The online dictionary learning (ODL) method proposed by Mairal et al. (2009) overcomes both of these limitations, and serves as the basis for our proposed approach, presented in Alg. 1 in the next section. While the highlighted lines in Alg. 1 represent our extension of ODL, the non-highlighted ones are common to both approaches, and are discussed first. The algorithms start with some dictionary D⁰, e.g. a randomly initialized one (other approaches include using some of the inputs as dictionary elements (Mairal et al., 2010; Bengio et al., 2009)). At each iteration t, both online approaches consider the next input sample x_t (more generally, a batch of samples), as in step 3 of Alg. 1, and compute its sparse code α_t by solving the LASSO (Tibshirani, 1996) problem (step 4 in Alg. 1) with respect to the current dictionary. In Alg. 1, we simply use D instead of D^{(t)} to simplify the notation. Next, the standard ODL algorithm computes the dictionary update, D^{(t)}, by optimizing the surrogate objective function f̂_t(D), which is defined just as the original objective in eq. (1), for n = t, but with one important difference: unlike the original objective, where each code α_i for sample x_i is computed with respect to the same dictionary D, the surrogate function includes the codes α_1, α_2, ..., α_t computed at the previous iterations, using the dictionaries D^{(0)}, ..., D^{(t-1)}, respectively; in other words, it does not recompute the codes for previously seen samples after each dictionary update. This speeds up the learning without worsening the (asymptotic) performance, since the surrogate objective converges to the original one in (1), under certain assumptions, including data stationarity (Mairal et al., 2009). Note that, in order to prevent the dictionary entries from growing arbitrarily large, Mairal et al. (2009; 2010) impose a norm constraint, i.e. keep the columns of D within the convex set C = {D ∈ R^{m×k} s.t. ∀j, d_j^T d_j ≤ 1}. Then the dictionary update step computes D^{(t)} = argmin_{D∈C} f̂_t(D), ignoring the l1-regularizer over the code, which is fixed at this step, as

\arg\min_{D \in C} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 = \arg\min_{D \in C} \frac{1}{2} \mathrm{Tr}(D^T D A) - \mathrm{Tr}(D^T B),    (2)

where A = \sum_{i=1}^{t} \alpha_i \alpha_i^T and B = \sum_{i=1}^{t} x_i \alpha_i^T are the "bookkeeping" matrices (we also call them the "memories" of the model), compactly representing the input samples and the encoding history. At each iteration, once the new input sample x_t is encoded, the matrices are updated as A ← A + α_t α_t^T and B ← B + x_t α_t^T (see step 11 of Alg. 1). In (Mairal et al., 2009; 2010), block-coordinate descent is used to optimize the convex objective in eq. (2); it iterates over the dictionary elements in a fixed sequence, optimizing each while keeping the others fixed, as shown in eq. (3) (essentially, steps 14 and 17 in Alg. 1; the only difference is that our approach transforms u_j into w_j in order to impose additional regularizers before computing step 17), until convergence:

u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}, \qquad d_j \leftarrow \frac{u_j}{\max(1, \|u_j\|_2)}    (3)

Herein, when the off-diagonal entries a_jk in A are as large as the diagonal a_jj, the dictionary elements get "tied" to each other, playing complementary roles in the dictionary, thereby constraining each other's updates.
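A compact numpy rendering of the eq. (2)–(3) update (our illustrative sketch, not the authors' released code); after encoding each sample one accumulates A += np.outer(alpha, alpha) and B += np.outer(x, alpha), then sweeps the columns:

```python
import numpy as np

def odl_dictionary_update(D, A, B, n_sweeps=1):
    """Block-coordinate dictionary update of eq. (3), given the online
    'memory' matrices A = sum_i a_i a_i^T and B = sum_i x_i a_i^T."""
    k = D.shape[1]
    for _ in range(n_sweeps):
        for j in range(k):
            if A[j, j] < 1e-12:      # element never used so far; skip it
                continue
            # b_j - sum_{k != j} d_k a_{jk}, written via a full product:
            u = (B[:, j] - D @ A[:, j] + D[:, j] * A[j, j]) / A[j, j]
            D[:, j] = u / max(1.0, np.linalg.norm(u))   # project onto C
    return D
```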
It is important to note that, for the experimental settings where we consider dictionary elements to be sparse in our algorithm NODL (discussed next in Sec. 3), we will actually use as the baseline algorithm a modified version of the fixed-size ODL which allows for sparse dictionary elements, i.e. includes the sparsification step 15 in Alg. 1, thus optimizing the following objective in the dictionary update step instead of the one in eq. (2):

\arg\min_{D \in C} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \sum_j \lambda_j \|d_j\|_1.    (4)

From now on, ODL will refer to the above extended version of the fixed-size method of Mairal et al. (2009) wherever we have sparsity in dictionary elements (otherwise, the standard method of Mairal et al. (2009) is the baseline); in our experiments, the dictionary sparsity of both the baseline and the proposed method (discussed in the next section) will be matched. Note that Mairal et al. (2010) mention that the convergence guarantees for ODL hold even with the sparsity constraints on dictionary elements.

3 OUR APPROACH: NEUROGENETIC ONLINE DICTIONARY LEARNING (NODL)

Our objective is to extend the state-of-art online dictionary learning, designed for stationary input distributions, to a more adaptive framework capable of handling nonstationary data effectively, and of learning to represent new types of data without forgetting how to represent the old ones. Towards this end, we propose a novel algorithm, called Neurogenetic Online Dictionary Learning (see Alg. 1), which can flexibly extend and reduce a dictionary in response to changes in the input distribution, and possibly to the inherent representation complexity of the data. The main changes, as compared to the non-adaptive, fixed-dictionary-size algorithm of Mairal et al. (2009), are highlighted in Alg. 1; the two parts involve (1) neurogenesis, i.e. the addition of dictionary elements (hidden units, or "neurons"), and (2) the death of old and/or new elements which are "less useful" than other elements for the task of data reconstruction.

At each iteration in Alg. 1, the next batch of samples is received and the corresponding codes, in the dictionary, are computed; next, we add k_n new dictionary elements sampled at random from R^m (i.e., k_n random linear projections of the input sample). The choice of the parameter k_n is important; one approach is to tune it (e.g., by cross-validation), while another is to adjust it dynamically based on the dictionary performance: e.g., if the environment is changing, the old dictionary may not be able to represent the new input well, leading to a decline in the representation accuracy, which triggers neurogenesis. Herein, we use as the performance measure the Pearson correlation between a new sample and its representation in the current dictionary, r(x_t, D^{(t-1)}α_t), denoted pc(x_t, D^{(t-1)}, α_t) (for a batch of data, the average over pc(·) is taken). If it drops below a certain pre-specified threshold γ (where 0 ≤ γ ≤ 1), neurogenesis is triggered (step 5 in Alg. 1).
The number k_n of new dictionary elements is proportional to the error 1 − pc(·), so that worse performance triggers more neurogenesis, and vice versa; the maximum number of new elements is bounded by c_k (step 6 in Alg. 1). We refer to this approach as conditional neurogenesis, as it involves the conditional birth of new elements. Next, k_n random elements are generated and added to the current dictionary (step 7), and the memory matrices A, B are updated accordingly to account for the larger dictionary (step 8). Finally, the sparse code is recomputed for x_t (or for all the samples in the current batch) with respect to the extended dictionary (step 9).

The next step is the dictionary update, which, similarly to the standard online dictionary learning, uses the block-coordinate descent approach. However, the objective function includes additional regularization terms, as compared to (2):

D^{(t)} = \arg\min_{D \in C} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \lambda_g \sum_j \|d_j\|_2 + \sum_j \lambda_j \|d_j\|_1.    (5)

The first term is the standard reconstruction error, as before. The second term, the l1/l2-regularization, promotes group sparsity over the dictionary entries, where each group corresponds to a column, i.e. a dictionary element. The group-sparsity (Yuan & Lin, 2006) regularizer causes some columns in D to be set to zero (i.e. the columns less useful for accurate data representation), thus effectively eliminating the corresponding dictionary elements from the dictionary ("killing" the corresponding hidden units). As mentioned previously, Bengio et al. (2009) used the l1/l2-regularizer in dictionary learning, though not in the online setting, and without neurogenesis. Finally, the third term imposes l1-regularization on the dictionary elements, thus promoting a sparse dictionary, besides the sparse coding. Introducing sparsity in dictionary elements, corresponding to the sparse connectivity of hidden units in the neural-net representation of a dictionary, is motivated both by its biological plausibility (neuronal connectivity tends to be rather sparse in multiple brain networks) and by the computational advantages this extra regularization can provide, as we observe later in the experiments section (Sec. 4).

As in the original algorithm of Mairal et al. (2009), the above objective is optimized by block-coordinate descent, where each block of variables corresponds to a dictionary element, i.e., a column in D; the loop in steps 12-19 of Alg. 1 iterates until convergence, defined by the magnitude of change between two successive versions of the dictionary falling below some threshold. For each column update, the first and the last steps (steps 14 and 17) are the same as in the original method of Mairal et al. (2009), while the two intermediate steps (steps 15 and 16) implement the additional regularization.
Both steps 15 and 16 (sparsity and group-sparsity regularization) are implemented using the standard proximal operators, as described in Jenatton et al. (2011).

Algorithm 1 Neurogenetic Online Dictionary Learning (NODL)
Require: data stream x_1, x_2, ..., x_n ∈ R^m; initial dictionary D ∈ R^{m×k}; conditional neurogenesis threshold γ; max number of new elements added per data batch, c_k; group-sparsity regularization parameter λ_g; number of non-zeros in a dictionary element, β_d; number of non-zeros in a code, β_c.
1:  Initialize: A ← 0, B ← 0   % reset the "memory" (assuming a single sample per batch, for simpler exposition)
2:  for t = 1 to n do
3:    Input x_t   % representing the t-th batch of data
      % Sparse coding of data:
4:    α_t = argmin_{α ∈ R^k} ½||x_t − Dα||₂² + λ_c||α||₁   % λ_c tuned to have β_c non-zeros in α_t
      % Conditional neurogenesis: if accuracy is below threshold, add more elements (no more than the number of samples in a batch):
5:    if pc(x_t, D, α_t) ≤ γ then
6:      k_n = (1 − pc(x_t, D, α_t)) c_k   % the count of the births of neurons
7:      D_n ← initializeRand(k_n), D ← [D D_n]
8:      A ← [A 0; 0 0], B ← [B 0], k ← k + k_n
        % Repeat the sparse coding, now including the new dictionary elements:
9:      α_t = argmin_{α ∈ R^k} ½||x_t − Dα||₂² + λ_c||α||₁
10:   end if   % end of neurogenesis
      % "Memory" update:
11:   A ← A + α_t α_t^T, B ← B + x_t α_t^T
      % Dictionary update by block-coordinate descent with l1/l2 group sparsity:
12:   repeat
13:     for j = 1 to k do
14:       u_j ← (b_j − Σ_{k≠j} d_k a_{jk}) / a_{jj}
          % Sparsifying elements (optional):
15:       v_j ← Prox_{λ_j||·||₁}(u_j) = sgn(u_j)(|u_j| − λ_j)₊   % λ_j tuned to get β_d non-zeros in v_j
          % Killing useless elements with l1/l2 group sparsity:
16:       w_j ← v_j (1 − λ_g / ||v_j||₂)₊
17:       d_j ← w_j / max(1, ||w_j||₂)
18:     end for
19:   until convergence
20: end for
21: return D

Note that we actually use as input the desired numbers of non-zeros, and determine the corresponding sparsity parameters λ_c and λ_j using a binary search procedure (see Appendix). Overall, the key feature of our algorithm is the interplay of both the (conditional) birth and the (group-sparsity) death of dictionary elements in an online setting.
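To show how the highlighted steps of Alg. 1 compose in practice, here is a rough numpy sketch of one NODL iteration on a batch (ours, not the authors' released Matlab code). It reuses the sparse_code_ista helper from the earlier sketch, replaces the paper's binary searches for λ_c and λ_j with fixed illustrative constants, and all parameter values are placeholders:

```python
import numpy as np

def nodl_step(X_batch, D, A, B, gamma=0.9, c_k=50,
              lam_g=0.03, lam_c=0.1, lam_j=1e-3):
    """One NODL iteration (sketch of Alg. 1) on a batch X_batch (n x m)."""
    m = D.shape[0]
    codes = np.stack([sparse_code_ista(x, D, lam_c) for x in X_batch])
    # Steps 5-9, conditional neurogenesis: trigger on low mean Pearson
    # correlation between the inputs and their reconstructions.
    recon = codes @ D.T
    pc = np.mean([np.corrcoef(x, r)[0, 1] for x, r in zip(X_batch, recon)])
    if pc < gamma:
        k_new = int(np.ceil((1.0 - pc) * c_k))
        D_new = np.random.randn(m, k_new)
        D_new /= np.linalg.norm(D_new, axis=0)
        D = np.hstack([D, D_new])
        A = np.pad(A, ((0, k_new), (0, k_new)))   # grow the memories (step 8)
        B = np.pad(B, ((0, 0), (0, k_new)))
        codes = np.stack([sparse_code_ista(x, D, lam_c) for x in X_batch])
    # Step 11, memory update:
    A = A + codes.T @ codes
    B = B + X_batch.T @ codes
    # Steps 12-19, dictionary update with the extra l1 and l1/l2 shrinkage:
    for j in range(D.shape[1]):
        if A[j, j] < 1e-12:
            continue
        u = (B[:, j] - D @ A[:, j] + D[:, j] * A[j, j]) / A[j, j]    # step 14
        v = np.sign(u) * np.maximum(np.abs(u) - lam_j, 0.0)          # step 15
        w = v * max(0.0, 1.0 - lam_g / (np.linalg.norm(v) + 1e-12))  # step 16
        D[:, j] = w / max(1.0, np.linalg.norm(w))                    # step 17
    return D, A, B
```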
3.1 DISCUSSION OF IMPORTANT ALGORITHMIC DETAILS

A rationale behind sparsity of dictionary elements. We focus here on sparse dictionary elements, which, in network terms, correspond to sparse connectivity between hidden units and their inputs; one reason for this choice was that sparse connectivity appears to be a more biologically plausible assumption than the fully-connected architecture implied by a dense dictionary, in many brain areas, and specifically between the dentate gyrus and CA3. The other reason relates to computational advantages.

Note that Mairal et al. (2009) state that the convergence guarantees for the original ODL algorithm would also hold in the case of sparse dictionary elements. However, no empirical evaluation is provided for this case; furthermore, we are not aware of any previous work on sparse coding which would involve an extensive empirical evaluation of such a setting. The prior focus on dense rather than sparse dictionary elements is perhaps more natural when the input consists of a large number of relatively small image patches, so that each element also represents a small patch. In our work, however, the dictionary is learned on full images, and thus a nonzero pattern in a sparse dictionary element corresponds to a small patch within a larger image, with multiple sparse elements (patches) covering the image. Thus, rather than explicitly representing an image as a set of patches and then learning a dictionary over such patches, a dictionary with full-image-size but sparse elements can be used to implicitly represent an image as a linear combination of those elements, with possible overlap of non-zero pixels between elements; the non-zero pixels in a sparse element of a dictionary are learned automatically. The computational advantages of using sparse dictionaries are demonstrated in our experimental results (Sec. 4), where classifiers learned on top of representations extracted with sparse dictionaries yield smaller errors.

The memory matrix A and its properties. The matrix A keeps the "memory" of the encodings α_t of the previous data samples, in a sense, as it accumulates the sum of the α_t α_t^T matrices from each iteration t. It turns out that the matrix A can have a significant effect on dictionary learning in both the ODL and NODL algorithms. As pointed out in (Mairal et al., 2009), the quadratic surrogate function in (2) is strictly convex with a lower-bounded Hessian A, ensuring convergence to a solution. From a practical standpoint, when the matrix A has a high condition number (the ratio of the largest to smallest singular value in its singular value decomposition), despite its lower-bounded eigenvalues, the adaptation of the dictionary elements using the standard ODL algorithm can be difficult, as we see in our experiments. Specifically, when the dictionary elements are sparse, this effect is more pronounced, since the condition number of A becomes high due to the complementary roles of sparse dictionary elements in the reconstruction process (see the comparison of A for dense elements and sparse elements in Fig. 6(a) and 6(b), respectively). In such scenarios, the submatrix of A corresponding to the new elements in the dictionary, added by our NODL algorithm, can have a better condition number, leading to an improved adaptation of the dictionary.

Code sparsity. Code sparsity is controlled by the parameter β_c, the number of nonzeros, which determines the corresponding regularization weight λ_c in step 4 of Alg. 1; note that λ_c is determined via binary search for each input sample separately, as shown in Algorithm 2, and thus may vary slightly across instances for a fixed β_c.

Selecting an appropriate level of code sparsity depends on the choice of other parameters, such as the input batch size, the sparsity of the dictionary elements, the extent of non-stationarity and complexity of the data, and so on. When the dictionary elements are themselves sparse, denser codes may be more appropriate, since each sparse dictionary element represents only a relatively small subset of image pixels, and thus a large number of those subsets may be needed to cover the whole image for an accurate input representation.

Interestingly, using very sparse codes in combination with non-sparse dictionary elements in the standard ODL approach can sometimes lead to the creation of "dead" (zero l2-norm) elements in the dictionary, especially if the input batch size is small. This is avoided by our NODL algorithm, since such dead elements are implicitly removed via group sparsity at the dictionary update step, along with the "weak" (very small l2-norm) elements. Also, very high code sparsity in combination with dense dictionary elements can lead to a significant decrease in the reconstruction accuracy, for both ODL and our NODL, when the online data stream is non-stationary.
Such shortcomings were not encountered in (Mairal et al., 2009; 2010), where only stationary data streams were studied, in both the theoretical and empirical results. On the other hand, high sparsity in dictionary elements does not seem to cause a degradation in the reconstruction accuracy, as long as the codes are not too sparse.

The choice and tuning of the metric for conditional neuronal birth. In the "conditional birth" approach described above, the number of new elements k_n is determined based on the performance of the current dictionary, using the Pearson correlation between the actual and reconstructed data for the current batch. This is, of course, just one particular approach to measuring data nonstationarity and the need for adaptation, but we consider it a reasonable heuristic. A low reconstruction error indicates that the old dictionary is still capable of representing the new data, and thus less adaptation might be needed, while a high error indicates that the data distribution might have changed, triggering neurogenesis in order to better adapt to the new environment. We choose the Pearson correlation as the measure of reconstruction accuracy since its value is easily interpretable and is always in the range [0, 1] (unlike, for example, the mean-square error), which simplifies tuning the threshold parameter γ. Clearly, one can also try other interpretable metrics, such as, for example, the Spearman correlation.

Tuning parameters: group sparsity λ_g and others. The group-sparsity regularization parameter λ_g controls the amount of removal ("death") of elements in NODL: in step 16 of Alg. 1, all elements with l2-norm below λ_g (i.e., "weak" elements) are set to zero ("killed"). Since the dictionary elements are normalized to have l2-norm at most one, we only need to consider λ_g ∈ [0, 1]. (Note that the step of killing dictionary elements precedes the normalization step in the algorithm; thus, the tuning of λ_g is affected by the normalization of the elements from the previous iteration.) Note that increasing the sparsity of the dictionary elements, i.e. decreasing β_d (the number of nonzeros in dictionary elements), may require a corresponding reduction of λ_g, while an increase in the input dimensionality m may also require an increase in the λ_g parameter. Tuning the rest of the parameters is relatively easy. Clearly, the batch size should be kept relatively small, and, ideally, should not exceed the "window of stationarity" size in the data (however, the frequency of the input distribution change may need to be estimated from the data, and thus the batch size may need to be tuned adaptively, which is outside the scope of this paper). Mairal et al. (2009) suggest using a batch size of 256 in their experiments, while getting similar performance with the values 128 and 512. As to the maximum number of new elements c_k added at each iteration, it is reasonable to keep it smaller than the batch size.

4 EXPERIMENTS

We now evaluate empirically the proposed approach, NODL, against ODL, the standard (non-adaptive) online dictionary learning of Mairal et al. (2009). Moreover, in order to evaluate separately the effects of either only adding, or only deleting, dictionary elements, we also evaluate two restricted versions of our method: NODL+ involves only addition but no deletion (equivalent to NODL with no group sparsity, i.e. λ_g = 0), and NODL−, which, vice versa, involves deletion only but no addition (equivalent to NODL with the number of new elements c_k = 0).
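In terms of the nodl_step sketch given after Alg. 1, the two restricted variants (and the plain ODL baseline) are simply parameter settings:

```python
D, A, B = nodl_step(Xb, D, A, B, lam_g=0.0)           # NODL+: birth only
D, A, B = nodl_step(Xb, D, A, B, c_k=0)               # NODL-: death only
D, A, B = nodl_step(Xb, D, A, B, lam_g=0.0, c_k=0)    # fixed-size ODL baseline
```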
The above algorithms are evaluated in a non-stationary setting, where a sequence of training samples from one environment (first domain) is followed by another sequence from a different environment (second domain), in order to test their ability to adapt to new environments without "forgetting" the previous ones.

4.1 REAL-LIFE IMAGES

Our first domain includes the images of Oxford buildings⁴ (urban environment), while the second uses a combination of images from the Flowers⁵ and Animals⁶ image databases (natural environment); examples of both types of images are shown in Fig. 1(a) and 1(b). We converted the original color images into black-and-white format and compressed them to smaller sizes, 32x32 and 100x100. Note that, unlike (Mairal et al., 2009), we used full images rather than image patches as our inputs.

[Figure 1: The image data sets for the evaluation of the online dictionary learning algorithms. Panels: (a) Urban: Oxford Buildings; (b) Nature: Flowers and Animals.]

We selected 5700 images for training and another 5700 for testing; each subset contained 1900 images of each type (i.e., Oxford, Flowers, Animals). In the training phase, as mentioned above, each online dictionary learning algorithm receives a sequence of 1900 samples from the first, urban domain (Oxford), and then a sequence of 3800 samples from the second, natural domain (1900 Flowers and 1900 Animals, permuted randomly). At each iteration, a batch of 200 images is received as input. (For comparison, Mairal et al. (2009) used a batch of size 256, though with image patches rather than full images.) The following parameters are used by our algorithm: Pearson correlation threshold γ = 0.9; group-sparsity parameter λ_g = 0.03 and λ_g = 0.07 for 32x32 and 100x100 images, respectively. The upper bound on the number of new dictionary elements at each iteration is c_k = 50. (We observed that the results are only mildly sensitive to the specified parameter values.)

⁴ http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/index.html
⁵ http://www.robots.ox.ac.uk/~vgg/data/flowers/102/
⁶ http://www.robots.ox.ac.uk/~vgg/data/pets/

[Figure 2: Reconstruction accuracy of NODL and ODL on 32x32 images (sparse dictionary). Panels: (a) learned dictionary size; (b) 1st domain (Oxford); (c) 2nd domain (Flowers).]
[Figure 3: Reconstruction accuracy of NODL and ODL on 100x100 images with sparse dictionary elements (50 non-zeros) and non-sparse codes. Panels: (a) 1st domain (Oxford); (b) 2nd domain (Flowers); (c) classification error.]

Once the training phase is completed, the resulting dictionary is evaluated on test images from both the first (urban) and the second (natural) domains; for the second domain, separate evaluations are performed for flowers and animals. First, we evaluate the reconstruction ability of the resulting dictionary D, comparing the actual inputs x versus their approximations x̂ = Dα, using the mean square error (MSE), the Pearson correlation, and the Spearman correlation. We present the results for the Pearson correlations between the actual and reconstructed inputs, since all three metrics show consistent patterns (for completeness, the MSE results are shown in the Appendix). Moreover, we evaluate the dictionaries in a binary classification setting (e.g., flowers vs animals), using as features the codes of the test samples in a given dictionary.
Finally, we explored a wide range of sparsity parameters for both the codes and the dictionary elements. Our key observations are that: (1) the proposed method frequently outperforms (or is at least as good as) its competitors, on both the new data (adaptation) and the old data (memory); (2) it is most beneficial when the dictionary elements are sparse; (3) vice versa, when the dictionary elements are dense, the neurogenetic approach matches the baseline, fixed-size dictionary learning. We now discuss the results in detail.

Sparse Dictionary Elements

In Fig. 2, we present the results for sparse dictionaries, where each column (an element in the dictionary) has 5 nonzeros out of the 1024 dimensions; the codes are relatively dense, with at most 200 nonzeros out of k (the number of dictionary elements), and with k ranging from 5 to 1000 (i.e. the codes are not sparse for k ≤ 200). Due to space limitations, we put our results on a wider range of values for the dictionary and code sparsity in the Appendix (Sec. B.2, Fig. 12). In Fig. 2(a), we compare the dictionary sizes for the different methods: the final dictionary size after completing the training phase (y-axis) is plotted against the initial dictionary size (x-axis). Obviously, the baseline (fixed-size) ODL method (magenta plot) keeps the size constant, the deletion-only NODL− approach reduces the initial size (red plot), and the addition-only NODL+ increases the size (light-blue plot).
However, the interplay between addition and deletion in our NODL method (dark-blue) produces a more interesting behavior: it tends to adjust the representation complexity towards a certain balanced range, i.e. very small initial dictionaries are expanded, while very large ones are, vice versa, reduced.

Our main results demonstrating the advantages of the proposed NODL method are shown next in Fig. 2(b) and Fig. 2(c), for the "old" (Oxford) and "new" (Flowers) environment (domain), respectively. (Very similar results are shown for Animals in the Appendix.) The x-axis shows the final dictionary size, and the y-axis is the reconstruction accuracy achieved by the trained dictionary on the test samples, measured by the Pearson correlation between the actual and reconstructed data. NODL clearly outperforms the fixed-size ODL, especially at smaller dictionary sizes; remarkably, this happens on both domains, i.e. besides improved adaptation to the new data, NODL is also better at preserving the "memories" of the old data, without increasing the representation complexity, i.e. for the same dictionary size.

Interestingly, deletion alone would not suffice, as the deletion-only version, NODL−, is inferior to our NODL method. On the other hand, the addition-only method, NODL+, is as accurate as NODL, but tends to increase the dictionary size too much. The interplay between the addition and deletion processes in our NODL seems to achieve the best of both worlds, attaining superior performance while keeping the dictionary size under control, in a narrower range (400 to 650 elements), expanding, as necessary, small dictionaries, while compressing large ones⁷.

We will now focus on comparing the two main methods, the baseline ODL and the proposed NODL method. The advantages of our approach become even more pronounced on larger input sizes, e.g. 100x100 images, in similar sparse-dictionary, dense-code settings. (We keep the dictionary elements at the same sparsity rate, 50 nonzeros out of 10,000 dimensions, and use completely non-sparse codes.) In Fig. 3(a) and Fig. 3(b), we see that NODL considerably outperforms ODL on both the first domain (Oxford) and (part of) the second domain (Flowers); the results for Animals are very similar and are given in the Appendix in Fig. 10. In Appendix Sec. B.6, Fig. 17 depicts examples of actual animal images and the corresponding reconstructions by the fixed-size ODL and our NODL methods (not included here due to space restrictions). A better reconstruction quality of our method can be observed (e.g., a more visible dog shape, and more details such as the dog's legs, as opposed to a collection of clusters produced by the ODL method; note, however, that printer resolution may reduce the visible difference, and looking at the images in the online version of this paper is recommended).

Moreover, NODL can also be beneficial in classification settings. Given a dictionary, i.e. a sparse linear autoencoder trained in an unsupervised setting, we use the codes (i.e., feature vectors) computed on the test data from the second domain (Animals and Flowers) and evaluate multiple classifiers learned on those features in order to discriminate between the two classes. In Fig. 3(c), we show the logistic regression results using 10-fold cross-validation; similar results for several other classifiers are presented in the Appendix, Fig. 10. Note that we also perform filter-based feature-subset selection, using each feature's statistical significance, as measured by its p-value, as the ranking function, and selecting subsets of the top k features, increasing k from 1 to the total number of features (the code length, i.e. the number of dictionary elements). The x-axis in Fig. 3(c) shows the value of k, while the y-axis plots the classification error rate for the features derived by each method. We can see that our NODL method (blue) yields lower errors than the baseline ODL (magenta) for relatively small subsets of features, although the difference is negligible for the full feature set. Overall, this suggests that our NODL approach achieves better reconstruction performance on the input data without extra overfitting in the classification setting, since it generalizes at least as well as, and often better than, the baseline ODL method.

Non-sparse dictionary elements

When exploring a wide range of sparsity settings (see Appendix), we observed quite different results for non-sparse dictionaries, as opposed to those presented above. Fig. 8(b) (in the Appendix, due to space constraints) summarizes the results for a particular setting of fully dense dictionaries (no zero entries), but sparse codes (50 non-zeros out of up to 600 dictionary elements; however, the codes are still dense when the dictionary size is below 50). In this setting, unlike the previous one, we do not observe any significant improvement in accuracy due to the neurogenetic approach, neither in reconstruction nor in classification accuracy; both methods perform practically the same.

⁷ In our experiments, we also track which dictionary elements are deleted by our method; generally, both old and newly added elements get deleted, depending on the specific settings.
(Also, note a somewhat surprising phenomenon: after a certain point, i.e. about 50 elements, the reconstruction accuracy of both methods actually declines rather than improves with increasing dictionary size.)

It is interesting to note, however, that the overall classification errors, for both methods, are much higher in this setting (from 0.4 to 0.52) than in the sparse-dictionary setting (from 0.22 to 0.36). Even using non-sparse codes in the non-sparse-dictionary setting still yields inferior results when compared to sparse dictionaries (see the results in the Appendix).

In summary, on the real-life image datasets we considered herein, our NODL approach is often superior (and never inferior) to the standard ODL method; also, there is consistent evidence that our approach is most beneficial in sparse-dictionary settings.

4.2 SPARSE ORTHOGONAL INPUTS: NLP AND SYNTHETIC DATA

So far, we explored some conditions on method properties (e.g., sparse versus dense dictionaries, as well as code sparsity/density) which can be beneficial for the neurogenetic approach. Our further question is: what kind of specific data properties would best justify neurogenetic over traditional, fixed-size dictionary learning? As it turns out, the fixed-size ODL approach has difficulties adapting to a new domain in nonstationary settings when the data in both domains are sparse and, across the domains, the supports (i.e., the sets of non-zero coordinates) are almost non-overlapping (i.e., the datasets are nearly orthogonal). This type of data property is related to a natural language processing problem considered below. Furthermore, pushing this type of structure to the extreme, we used simulations to better understand the behavior of our method. Herein, we focused, again, on sparse dictionary elements, as a well-suited basis for representing sparse data. Moreover, our empirical results confirm that using dense dictionary elements does not yield good reconstruction of sparse data, as expected.

Sparse Natural Language Processing Problem

We consider a very sparse word co-occurrence matrix (on average, about 14 non-zeros in a column of size 12,883), built from text in two different domains, biology and mathematics, with a total vocabulary size of approximately 12,883 words. The full matrix was split in two for illustration purposes and is shown in Fig. 4(c) and 4(d), where the math terms correspond to the first block of columns and the biology terms correspond to the second one (though it might be somewhat hard to see in the picture, the average number of nonzeros per row/column is indeed about 14).

We use the sparse columns (or rows) of the matrix, indexed by the vocabulary words, as our input data to learn a dictionary of sparse elements (25 non-zeros) with sparse codes (38 non-zeros). The corresponding word codes in the learned dictionary can later be used as word embeddings, or word vectors, in various NLP tasks such as information extraction, semantic parsing, and others (Yogatama et al. (2015); Faruqui et al. (2015); Sun et al. (2016)). (Note that many of the non-domain-specific words were removed from the vocabulary to obtain the final size of 12,883.) Herein, we evaluate our NODL method (denoted NODL (sparse) in the plots) versus the baseline ODL dictionary learning approach (denoted ODL (sparse)) in a setting where the biology domain is processed first, and then one has to switch to the mathematics domain. We use 2750 samples from each of the domains for training and the same number for testing. The evaluation results are shown in Fig. 4.
For the first domain (biology), both methods perform very similarly (i.e., remember the old data equally well), while on the second, more recent domain, our NODL algorithm clearly outperforms its competitor. Moreover, as mentioned above, non-sparse (dense) dictionaries are not suited for modeling highly sparse data such as our NLP data: in Fig. 4, both random dense dictionaries (random-D) and the dense dictionaries learned with ODL (denoted ODL (dense)) do poorly in the biology and mathematics domains.

However, the reconstruction accuracy, as measured by the Pearson correlation, was not too high overall, i.e. the problem turned out to be more challenging than encoding image data. It gave us an intuition about the structure of sparse data that may be contributing to the improvements due to neurogenesis. Note that a word co-occurrence matrix built from different domains, such as biology and mathematics, tends to have an approximately block-diagonal structure, where words from the same domain co-occur much more frequently than they co-occur with words from the other domain. Pushing this type of structure to the extreme, we next studied a simulated sparse dataset where the samples from the two domains are not only sparse, but have completely non-overlapping supports, i.e. the data matrix is block-diagonal (see Fig. 7(c) in the Appendix).

[Figure 4: Reconstruction accuracy for the sparse NLP data. Panels: (a) 1st domain (Biology); (b) 2nd domain (Mathematics); (c) Biology; (d) Math.]
[Figure 5: Reconstruction accuracy for the sparse synthetic data. Panels: (a) Pearson, first domain; (b) Pearson, second domain; (c) D with ODL; (d) D with NODL (ours).]

Synthetic Sparse Data

We generated a synthetic sparse dataset of dimension 1024, with only 50 nonzeros in each sample. Moreover, we ensured that the data in the two domains had non-overlapping supports (i.e., non-intersecting sets of non-zero coordinates), by always selecting the nonzeros in the first domain from the first 512 dimensions, while only using the last 512 dimensions for the second domain (see Fig. 7(c) in the Appendix). For the evaluation on the synthetic data, we use a total of 200 samples each for training and testing (100 samples for each of the two domains), and smaller batches for online training, containing 20 samples each (instead of the 200 samples used earlier for the image and language data).
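For reference, here is a minimal generator for this two-domain synthetic data (dimension 1024, 50 non-zeros per sample, disjoint half-space supports); the function name and the uniform magnitudes of the non-zeros are our own choices, as the paper does not specify the value distribution:

```python
import numpy as np

def two_domain_sparse_data(n_per_domain=100, m=1024, nnz=50, seed=0):
    """Two sparse domains with disjoint supports: the first uses only
    dimensions [0, m/2), the second only [m/2, m)."""
    rng = np.random.default_rng(seed)

    def domain(lo, hi):
        X = np.zeros((n_per_domain, m))
        for row in X:
            idx = rng.choice(np.arange(lo, hi), size=nnz, replace=False)
            row[idx] = rng.random(nnz)
        return X

    return domain(0, m // 2), domain(m // 2, m)
```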
Since the data is sparse, we adjust the sparsity of the dictionary elements accordingly (50 nonzeros in an element; for the code sparsity, we present the results with 50 nonzeros as well). In Fig. 5, we see the reconstruction accuracy for the first- and second-domain data. For the first domain, the baseline ODL method (ODL (sparse) in the plots) and our NODL (NODL (sparse)) perform equally well. On the other hand, for the second domain, the ODL algorithm's performance degrades significantly compared to the first domain. This is because the data from the second domain have non-overlapping support w.r.t. the data from the first domain. Our method is able to perform very well on the second domain (almost as well as on the first domain). It is further interesting to analyze the case of the random non-sparse dictionary (random-D), which even performs better than the baseline ODL method on the second domain. This is because random dictionary elements remain non-sparse in all the dimensions, thereby doing an average job in both domains. Along the same lines, ODL (dense) performs better than ODL (sparse) in the second domain, though the performance of non-sparse dictionaries should degrade significantly with an increase in the sparsity of the data, as we saw above for the NLP data. Clearly, our NODL (sparse) gives consistently better reconstruction accuracy than the other methods, across the two domains.

In Fig. 5(c) and Fig. 5(d), we see the sparsity structure of the dictionary elements learned using the baseline ODL method and our NODL method, respectively. From these plots, we get better insight into why the baseline method does not work: it keeps the same sparsity structure as it used for the data from the first domain. Our NODL adapts to the second-domain data because of its ability to add new dictionary elements that are randomly initialized with non-zero support in all the dimensions.

Next, in Sec. 5, we discuss our intuitions on why NODL performs better than the ODL algorithm under certain conditions.

5 WHEN NEUROGENESIS CAN HELP, AND WHY

In Sec. 4, we observed that our NODL method outperforms the ODL algorithm in two general settings, both involving sparse dictionary elements: (i) non-sparse data, such as real-life images, and (ii) sparse data with (almost) non-overlapping supports. In this section, we attempt to analyze what contributes to the success of our approach in these settings, starting with the latter one.

Sparse data with non-overlapping supports, sparse dictionary

As discussed above, in this scenario the data from both the first and the second domain are sparse, and their supports (non-zero dimensions) are non-overlapping, as shown in Fig. 7(c). Note that, when training a dictionary using the fixed-size, sparse-dictionary ODL method, we observe only a minor adaptation to the second domain after training on the first domain, as shown in Fig. 5(c).

Our empirical observations are supported by the theoretical result summarized in Lemma 1 below. Namely, we prove that when using the ODL algorithm in the above scenario, the dictionary trained on the first domain cannot adapt to the second domain. (The minor adaptation, i.e., the few nonzeros observed in our results in Fig. 5(c), occurs only due to implementation details involving the normalization of sparse dictionary elements when computing the codes in the dictionary – the normalization introduces non-zeros of small magnitude in all dimensions; see the Appendix for experimental results with no normalization of the elements, conforming to Lemma 1.)

Lemma 1. Let x_1, x_2, ..., x_{t-1} ∈ R^m be a set of samples from the first domain, with non-zeros (support) in the set of dimensions P ⊂ M = {1, ..., m}, and let x_t, x_{t+1}, ..., x_n ∈ R^m be a set of samples from the second domain, with non-zeros (support) in dimensions Q ⊂ M, such that P ∩ Q = ∅ and |P| = |Q| = l. Let d_1, d_2, ..., d_k ∈ R^m denote the dictionary elements learned by the ODL algorithm, with the sparsity constraint of at most l nonzeros in each element⁸, on the data from the first domain, x_1, ..., x_{t-1}. Then (1) those elements have non-zero support in P only, and (2) after learning from the second-domain data, the support (nonzero dimensions) of the correspondingly updated dictionary elements will remain in P.

⁸ l corresponds to β_d in Alg. 1.

Proof Sketch. Let us consider processing the data from the first domain. At the first iteration, a sample x_1 is received, its code α_1 is computed, and the matrices A and B are updated, as shown in Alg. 1 (non-highlighted part); next, the dictionary update step is performed, which optimizes
$D^{(1)} = \arg\min_{D \in \mathcal{C}} \frac{1}{2}\,\mathrm{Tr}(D^T D A) - \mathrm{Tr}(D^T B) + \sum_j \lambda_j \|d_j\|_1. \qquad (6)$

Since the support of $x_1$ is limited to $P$, we can show that the optimal dictionary $D$ must also have all of its columns/elements supported in $P$. Indeed, assume the contrary: let $d_j(i) \neq 0$ for some dictionary element/column $j$ and some $i \notin P$. Then it is easy to see that setting $d_j(i)$ to zero reduces both the sum-squared error and the $l_1$-norm in (6), yielding another dictionary that achieves a lower overall objective; this contradicts the assumption that $D$ was optimal. Thus, the dictionary update step must produce a dictionary in which all columns have their support in $P$. By induction, the same statement holds for the dictionary obtained after processing all samples from the first domain. Next, the samples from the second domain start arriving; those samples belong to a different subspace, spanning the dimensions in the support set $Q$, which does not intersect $P$. Thus, with the current dictionary, the encoding $\alpha_t$ of the first sample $x_t$ from the second domain (i.e., the solution of the LASSO problem in step 4 of Alg. 1) will be the zero vector. Therefore, the matrices $A$ and $B$ remain unchanged during the update in step 11, and thus the support of each $b_j$ and, consequently, of $u_j$ and of the updated dictionary elements $d_j$ will remain in $P$. By induction, every dictionary update in response to a new sample from the second domain preserves the support of the dictionary elements, so the final dictionary elements will also have their support only in $P$.
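The lemma is also easy to check numerically. Below is a small self-contained demonstration (ours, using scikit-learn's Lasso rather than the authors' code): when the dictionary's support lies in $P$ and a sample's support lies in $Q$, we have $D^T x = 0$, so the zero code satisfies the LASSO optimality condition for any positive $l_1$ weight, and the memories $A$ and $B$ are never updated.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
m, k, l = 64, 10, 8   # data dimension, dictionary size, support size

# Dictionary "trained" on the first domain: supports confined to P = {0..31}
D = np.zeros((m, k))
for j in range(k):
    D[rng.choice(32, l, replace=False), j] = rng.randn(l)
D /= np.maximum(1.0, np.linalg.norm(D, axis=0))   # enforce ||d_j||_2 <= 1

# A sample from the second domain: support confined to Q = {32..63}
x = np.zeros(m)
x[32 + rng.choice(32, l, replace=False)] = rng.randn(l)

# LASSO encoding (step 4 of Alg. 1): the code is exactly zero
code = Lasso(alpha=0.1, fit_intercept=False).fit(D, x).coef_
print(np.abs(code).max())   # prints 0.0, so A, B, and hence D never change
```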
Non-sparse data, sparse dictionary

We will now discuss an intuitive explanation behind the success of the neurogenetic approach in this scenario, leaving a formal theoretical analysis as a direction for future work.

[Figure 6: Visualization of the sparse dictionary and the matrix $A$ learned on the first imaging domain (Oxford images), using the baseline ODL method and our method. Panels: (a) $A$ with the ODL method (dense elements); (b) $A$ with the ODL method (sparse elements); (c) $A$ with our method (sparse elements); (d) $D$ with the ODL method (sparse elements).]

When learning sparse dictionaries on non-sparse data, such as natural images, we observed that many dictionary elements have non-overlapping supports with respect to each other; see, for example, Fig. 6(d), where each column corresponds to a 10000-dimensional dictionary element whose nonzero dimensions are shown in black. Apparently, the non-zero dimensions of an element tend to cluster spatially, i.e., to form a patch in an image. The non-overlapping supports of the dictionary elements result in a specific structure of the matrix $A$. As shown in Fig. 6(b), for the ODL approach the resulting matrix $A$ includes many off-diagonal nonzero entries of large absolute value (along with high values on the diagonal). Note that, by definition, $A$ is an empirical covariance of the code vectors, and a nonzero value of $a_{jk}$ implies that the $j$-th and the $k$-th dictionary elements were used jointly to explain the same data sample(s). Thus, the dense matrix structure with many non-zero off-diagonal entries, shown in Fig. 6(b), implies that, when the dictionary elements are sparse, they are often used jointly to reconstruct the data. On the other hand, in the case of non-sparse dictionary elements, the matrix $A$ has an almost diagonally-dominant structure, i.e., only a few dictionary elements are used effectively in the reconstruction of each data sample, even with non-sparse codes (see the Appendix for details).

Note that in the dictionary update expression $u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}$ in (3), when the values $a_{jk}/a_{jj}$ are large for multiple $k$, the $j$-th dictionary element becomes tightly coupled with the other dictionary elements, which reduces its adaptability to new, non-stationary data. In our algorithm, the values $a_{jk}/a_{jj}$ remain high if both elements $j$ and $k$ have a similar "age"; however, those values are much lower if one of the elements was introduced by neurogenesis much more recently than the other. In Fig. 6(c), the upper-left block on the diagonal, representing the oldest elements (added during initialization), is not diagonally dominant (see the sub-matrices of $A$ for NODL in Fig. 14 in the Appendix). The lower-right block, corresponding to the most recently added elements, may have a similar structure (though it is not visible, due to the relatively low magnitudes of the new elements; see the Appendix). Overall, our interpretation is that the old elements are tied to each other, whereas the new elements may also be tied to each other, but less strongly, and are not tied to the old elements, yielding a block-diagonal structure of $A$ for the neurogenetic approach, where the blocks correspond to dictionary elements adapted to particular domains. In other words, neurogenesis allows for adaptation to a new domain without forgetting the old one.

6 CONCLUSIONS

In this work, we proposed a novel algorithm, Neurogenetic Online Dictionary Learning (NODL), for learning representations in non-stationary environments. Our algorithm builds a dictionary of elements by learning from an online stream of data, while also adapting the dictionary structure (the number of elements/hidden units and their connectivity) via a continuous birth (addition) and death (deletion) of dictionary elements, inspired by the adult neurogenesis process in the hippocampus, which is known to be associated with better adaptation of the adult brain to changing environments. Moreover, introducing sparsity in the dictionary elements allows for adaptation of the hidden-unit connectivity and for further performance improvements.

Our extensive empirical evaluation on both real-world and synthetic data demonstrated that the interplay between the birth and the death of dictionary elements allows for more adaptive dictionary learning, better suited to non-stationary environments than both of its counterparts: the fixed-size online method of Mairal et al. (2009) (no addition and no deletion) and the online version of the group-sparse coding method of Bengio et al. (2009) (deletion only). Furthermore, we identified, both empirically and theoretically, several specific conditions on the method's and the data's properties (involving the sparsity of the elements, the codes, and the data) under which our method has a significant advantage over standard, fixed-size online dictionary learning. Overall, we conclude that neurogenetic dictionary learning typically performs as well as, and often much better than, its competitors. In future work, we plan to explore a non-linear extension of the dictionary model, as well as a stacked autoencoder consisting of multiple layers.
SkDONYuVx
Simple interesting modified online dictionary learning
7: Good paper, accept
The authors propose a simple modification of online dictionary learning: inspired by neurogenesis, they add steps of atom addition and atom deletion in order to extend the online dictionary learning algorithm of Mairal et al. Such extensions help adapt the dictionary to changing properties of the data. The online adaptation is very interesting, even if it is quite simple. The overall algorithm is quite reasonable, but not always described in sufficient detail: for example, the thresholds or conditions for neuronal birth or death are not supported by a strong analysis, even if the resulting algorithm seems to perform well in quite extensive experiments. The overall idea is nevertheless interesting (even if not completely new), and the paper is generally well written and pretty easy to follow. The analysis is however quite minimal: it could have been interesting to study the evolving properties of the dictionary, to analyse its accuracy in following the changes in the data, etc. Still: this is a nice work!
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyecJGP5ge
ICLR.cc/2017/conference
2017
NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD
["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"]
In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.
["Unsupervised Learning", "Computer vision", "Transfer Learning", "Optimization", "Applications"]
ABSTRACT

In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of the model's architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the "neuronal birth" is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, with higher error causing an increase in the birth rate. "Neuronal death" is implemented by imposing $l_1/l_2$-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden-unit connectivity adaptation is facilitated by introducing sparsity in the dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (non-adaptive) online sparse coding of Mairal et al. (2009) in the presence of non-stationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.

1 INTRODUCTION

The ability to adapt to a changing environment is essential for successful functioning in both natural and artificial intelligent systems. In human brains, adaptation is achieved via neuroplasticity, which takes different forms, including synaptic plasticity, i.e., changing connectivity strength among neurons, and neurogenesis, i.e., the birth and maturation of new neurons (accompanied by the death of some new or old neurons). In particular, adult neurogenesis (Kempermann, 2006) (i.e., neurogenesis in the adult brain) in the dentate gyrus of the hippocampus is associated with improved cognitive functions such as pattern separation (Sahay et al., 2011), and is often implicated as a "candidate mechanism for the specific dynamic and flexible aspects of learning" (Stuchlik, 2014).

In the machine-learning context, synaptic plasticity is analogous to parameter tuning (e.g., learning neural-net weights), while neurogenesis can be viewed as online model selection via the addition (and deletion) of hidden units in specific hidden-variable models used for representation learning (where hidden variables represent extracted features), from linear and nonlinear component analysis methods such as PCA, ICA, sparse coding (dictionary learning), and nonlinear autoencoders, to deep neural nets and general hidden-factor probabilistic models. However, optimal model selection in large-scale hidden-variable models (e.g., adjusting the number of layers, hidden units, and their connectivity) is intractable due to the enormous size of the search space. Growing a model gradually can be a more feasible alternative; after all, every real brain's "architecture" development process starts with a single cell.
Furthermore, the process of adapting the model's architecture to dynamically changing environments is necessary for achieving lifelong, continual learning. Finally, an online approach to dynamically expanding and contracting the model's architecture can serve as a potentially more effective alternative to standard off-line model selection (e.g., MDL-based off-line sparse coding (Ramirez & Sapiro, 2012)), as well as to the currently popular network compression (distillation) approaches (Hinton et al., 2015; Srivastava et al., 2014; Ba & Caruana, 2014; Bucilu et al., 2006), where a very large-scale architecture, such as a deep neural network with millions of parameters, must first be selected in ad-hoc ways and trained on large amounts of data, only to be compressed later into a more compact and simpler model with similarly good performance; we hypothesize that adaptive growth and reduction of the network architecture is a viable alternative to the distillation approach, although developing such an alternative remains a topic for future research.

In this paper, we focus on dictionary learning, a.k.a. sparse coding (Olshausen & Field, 1997; Kreutz-Delgado et al., 2003; Aharon et al., 2006; Lee et al., 2006): a representation-learning approach which finds a set of basis vectors (atoms, or dictionary elements) and representations (encodings) of the input samples as sparse linear combinations of those elements.¹ More specifically, our approach builds upon the computationally efficient online dictionary-learning method of Mairal et al. (2009), where the data samples are processed sequentially, one at a time (or in small batches). Online approaches are particularly important in large-scale applications with millions of potential training samples, where off-line learning can be infeasible; furthermore, online approaches are a natural choice for building systems capable of continual, lifelong learning.

Herein, we propose a novel online dictionary-learning approach inspired by adult neurogenesis, which extends the state-of-art method of Mairal et al. (2009) to non-stationary environments by incorporating online model adaptation, i.e., the addition and deletion of dictionary elements (i.e., hidden units) in response to the dynamically changing properties of the input data.² More specifically, at each iteration of online learning (i.e., for every batch of data samples), we add a group of random dictionary elements (modeling neuronal birth), where the group size depends on the current representation error, i.e., the mismatch between the new input samples and their approximation based on the current dictionary: higher error triggers more neurogenesis. Neuronal death, which involves removing "useless" dictionary elements, is implemented as an $l_1/l_2$ group-sparsity regularization; this step is essential in neurogenesis-inspired learning, since it reins in a potentially uncontrolled growth of the dictionary and helps to avoid overfitting (note that neuronal death is also a natural part of the adult neurogenesis process, where neuronal survival depends on multiple factors, including the complexity of the learning environment (Kempermann, 2006)). Moreover, we introduce sparsity in the dictionary elements, which reflects sparse connectivity between hidden units/neurons and their inputs; this is a more biologically plausible assumption than the fully-connected architecture of standard dictionary learning, and it also works better in our experiments.
Thus, adaptation in our model involves not only the addition/deletion of elements, but also the adaptation of their connectivity.

We demonstrate on both simulated data and on two real-life datasets (natural images and language processing) that, in the presence of non-stationary input, our approach can significantly outperform the non-adaptive, fixed-dictionary-size online method of Mairal et al. (2009). Moreover, we identify certain data properties and parameter settings associated with such improvements. Finally, we demonstrate that the novel approach not only improves the representation accuracy, but can also boost the classification accuracy based on the extracted features.

Note that, although the group-sparsity constraint enforcing the deletion of some dictionary elements was introduced earlier in the group-sparse coding method of Bengio et al. (2009), it was only implemented and tested in the off-line rather than the online setting, and, most importantly, it was not accompanied by neurogenesis. On the other hand, while some prior work has considered online node addition in hidden-variable models, and specifically in neural networks, from cascade correlations (Fahlman & Lebiere, 1989) to the recent work by Draelos et al. (2016a;b), no model pruning was incorporated in those approaches in order to balance the model expansion.

¹ Note that the corresponding neural-network interpretation of the sparse coding framework is a (single-hidden-layer) linear autoencoder with sparsity constraints: the hidden units are associated with dictionary elements, each element is represented by the weight vector of the unit's outgoing links in the output layer, and the sparse vector of hidden-unit activations corresponds to the encoding of an input.
² An early version of our neurogenetic online dictionary-learning approach was presented as a poster at the 2011 Society for Neuroscience meeting (Rish et al., 2011), although it has not previously appeared as a peer-reviewed publication.
Overall, we are not aware of any prior work that proposes and systematically evaluates, empirically and theoretically, a dynamic process involving both the addition and the deletion of hidden units in the online model-selection setting, either in sparse coding or in a neural-network setting.

To summarize, the main contributions of this paper are as follows:
- We propose a novel online model-selection approach to dictionary learning,³ inspired by the adult neurogenesis phenomenon; our method significantly outperforms the state-of-art baseline, especially in non-stationary settings.
- We perform an extensive empirical evaluation, on both synthetic and real data, in order to identify the conditions under which the proposed adaptive approach is most beneficial, both for data reconstruction and for classification based on the extracted features; we conclude that these conditions include a combination of sparse dictionary elements (and thus a more biologically plausible sparse network connectivity, as opposed to fully connected units), accompanied by sufficiently dense codes.
- Furthermore, we provide an intuitive discussion, as well as a theoretical analysis, of certain combinations of input data properties and algorithm parameters for which the proposed approach is most beneficial.
- From the neuroscientific perspective, we propose a computational model which supports earlier empirical observations indicating that adult neurogenesis is particularly beneficial in changing environments, and that a certain amount of neuronal death, accompanying the neuronal birth, is an important component of an efficient neurogenesis process.
- Overall, to the best of our knowledge, we are the first to perform an in-depth evaluation of the interplay between the birth and death of hidden units in the context of online model selection in representation learning, and, more specifically, in online dictionary learning.

This paper is organized as follows. In Sec. 2, we summarize the state-of-art non-adaptive (fixed-size) online dictionary-learning method of Mairal et al. (2009). Thereafter, in Sec. 3, we describe our adaptive online dictionary-learning algorithm. In Sec. 4, we present our empirical results on both synthetic and real datasets, including image and language data. Next, in Sec. 5, we provide a theoretical, as well as an intuitive, analysis of the settings that benefit most from our approach. Finally, we conclude with a summary of our contributions in Sec. 6. The implementation details of the algorithms and additional experimental results are described in the Appendix.

2 BACKGROUND ON DICTIONARY LEARNING

Traditional off-line dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Lee et al., 2006) aims at finding a dictionary $D \in \mathbb{R}^{m \times k}$ which allows for an accurate representation of a training data set $X = \{x_1, \dots, x_n \in \mathbb{R}^m\}$, where each sample $x_i$ is approximated by a linear combination $x_i \approx D\alpha_i$ of the columns of $D$, called dictionary elements $\{d_1, \dots, d_k \in \mathbb{R}^m\}$. Here $\alpha_i$ is the encoding (code vector, or simply code) of $x_i$ in the dictionary. Dictionary learning is also referred to as sparse coding, since it is assumed that the code vectors are sparse, i.e., have a relatively small number of nonzeros; the problem is formulated as minimizing the objective

$f_n(D) = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \lambda_c \|\alpha_i\|_1 \right), \qquad (1)$

where the first term is the mean-square-error loss incurred by approximating the input samples by their representations in the dictionary, and the second term is the $l_1$-regularization which enforces the codes to be sparse.
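For concreteness, the objective in eq. (1) can be evaluated directly once the codes are computed. The sketch below is our own illustration using scikit-learn's Lasso for the per-sample code (note that sklearn's `alpha` matches $\lambda_c$ only up to the solver's internal $1/m$ scaling of the quadratic term):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coding_objective(X, D, lam_c):
    """f_n(D) from eq. (1): average reconstruction loss plus l1 code penalty."""
    m, n = X.shape
    total = 0.0
    for i in range(n):
        # sklearn minimizes (1/(2m))||x - D a||^2 + alpha ||a||_1, hence alpha = lam_c / m
        a = Lasso(alpha=lam_c / m, fit_intercept=False, max_iter=5000).fit(D, X[:, i]).coef_
        total += 0.5 * np.sum((X[:, i] - D @ a) ** 2) + lam_c * np.abs(a).sum()
    return total / n
```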
The joint minimization of $f_n(D)$ with respect to the dictionary and the codes is non-convex; thus, a common approach is alternating minimization, involving the convex subproblems of finding the optimal codes for a fixed dictionary, and vice versa.

³ The Matlab code is available at https://github.com/sgarg87/neurogenesis_inspired_dictionary_learning.

However, classical dictionary learning does not scale to very large datasets; moreover, it is not immediately applicable to online learning from a continuous stream of data. The online dictionary learning (ODL) method proposed by Mairal et al. (2009) overcomes both of these limitations and serves as a basis for our proposed approach, presented in Alg. 1 in the next section. While the highlighted lines in Alg. 1 represent our extension of ODL, the non-highlighted ones are common to both approaches and are discussed first. The algorithms start with some dictionary $D^{(0)}$, e.g., a randomly initialized one (other approaches include using some of the inputs as dictionary elements (Mairal et al., 2010; Bengio et al., 2009)). At each iteration $t$, both online approaches consider the next input sample $x_t$ (more generally, a batch of samples), as in step 3 of Alg. 1, and compute its sparse code $\alpha_t$ by solving the LASSO (Tibshirani, 1996) problem (step 4 of Alg. 1) with respect to the current dictionary. In Alg. 1, we simply write $D$ instead of $D^{(t)}$ to simplify the notation. Next, the standard ODL algorithm computes the dictionary update, $D^{(t)}$, by optimizing the surrogate objective function $\hat{f}_t(D)$, which is defined just as the original objective in eq. (1), for $n = t$, but with one important difference: unlike the original objective, where each code $\alpha_i$ for sample $x_i$ is computed with respect to the same dictionary $D$, the surrogate function includes the codes $\alpha_1, \alpha_2, \dots, \alpha_t$ computed at the previous iterations, using the dictionaries $D^{(0)}, \dots, D^{(t-1)}$, respectively; in other words, it does not recompute the codes of previously seen samples after each dictionary update. This speeds up the learning without worsening the (asymptotic) performance, since the surrogate objective converges to the original one in (1) under certain assumptions, including data stationarity (Mairal et al., 2009). Note that, in order to prevent the dictionary entries from growing arbitrarily large, Mairal et al. (2009; 2010) impose a norm constraint, i.e., they keep the columns of $D$ within the convex set $\mathcal{C} = \{D \in \mathbb{R}^{m \times k} \text{ s.t. } \forall j,\ d_j^T d_j \leq 1\}$. The dictionary update step then computes $D^{(t)} = \arg\min_{D \in \mathcal{C}} \hat{f}_t(D)$, ignoring the $l_1$-regularizer over the codes, which are fixed at this step, as

$\arg\min_{D \in \mathcal{C}} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 \;=\; \arg\min_{D \in \mathcal{C}} \frac{1}{2} \mathrm{Tr}(D^T D A) - \mathrm{Tr}(D^T B), \qquad (2)$

where $A = \sum_{i=1}^{t} \alpha_i \alpha_i^T$ and $B = \sum_{i=1}^{t} x_i \alpha_i^T$ are the "bookkeeping" matrices (we also call them the "memories" of the model), compactly representing the input samples and the encoding history. At each iteration, once the new input sample $x_t$ is encoded, the matrices are updated as $A \leftarrow A + \alpha_t \alpha_t^T$ and $B \leftarrow B + x_t \alpha_t^T$ (see step 11 of Alg. 1). In (Mairal et al., 2009; 2010), block coordinate descent is used to optimize the convex objective in eq. (2); it iterates over the dictionary elements in a fixed sequence, optimizing each while keeping the others fixed, as shown in eq. (3) (essentially, steps 14 and 17 in Alg. 1; the only difference is that our approach transforms $u_j$ into $w_j$ in order to impose an additional regularizer before computing step 17), until convergence.
$u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}, \qquad d_j \leftarrow \frac{u_j}{\max(1, \|u_j\|_2)} \qquad (3)$

Herein, when the off-diagonal entries $a_{jk}$ of $A$ are as large as the diagonal entries $a_{jj}$, the dictionary elements get "tied" to each other, playing complementary roles in the dictionary and thereby constraining each other's updates.

It is important to note that, for the experimental settings where we consider dictionary elements to be sparse in our algorithm NODL (discussed next in Sec. 3), we actually use as the baseline a modified version of the fixed-size ODL which allows for sparse dictionary elements, i.e., which includes the sparsification step 15 in Alg. 1, thus optimizing the following objective in the dictionary update step instead of the one in eq. (2):

$\arg\min_{D \in \mathcal{C}} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \sum_j \lambda_j \|d_j\|_1. \qquad (4)$

From now on, ODL refers to this extended version of the fixed-size method of Mairal et al. (2009) wherever we have sparsity in the dictionary elements (otherwise, the standard method of Mairal et al. (2009) is the baseline); in our experiments, the dictionary sparsity of the baseline and of the proposed method (discussed in the next section) is matched. Note that Mairal et al. (2010) mention that the convergence guarantees for ODL hold even with the sparsity constraints on the dictionary elements.

3 OUR APPROACH: NEUROGENETIC ONLINE DICTIONARY LEARNING (NODL)

Our objective is to extend the state-of-art online dictionary learning, designed for stationary input distributions, to a more adaptive framework capable of handling non-stationary data effectively, learning to represent new types of data without forgetting how to represent the old ones. Towards this end, we propose a novel algorithm, called Neurogenetic Online Dictionary Learning (see Alg. 1), which can flexibly extend and reduce a dictionary in response to changes in the input distribution, and possibly to the inherent representation complexity of the data. The main changes, as compared to the non-adaptive, fixed-dictionary-size algorithm of Mairal et al. (2009), are highlighted in Alg. 1; the two parts involve (1) neurogenesis, i.e., the addition of dictionary elements (hidden units, or "neurons"), and (2) the death of old and/or new elements which are "less useful" than the other elements for the task of data reconstruction.

At each iteration of Alg. 1, the next batch of samples is received and the corresponding codes in the dictionary are computed; next, we add $k_n$ new dictionary elements sampled at random from $\mathbb{R}^m$ (i.e., $k_n$ random linear projections of the input sample). The choice of the parameter $k_n$ is important; one approach is to tune it (e.g., by cross-validation), while another is to adjust it dynamically based on the dictionary's performance: e.g., if the environment is changing, the old dictionary may not be able to represent the new input well, leading to a decline in the representation accuracy, which triggers neurogenesis. Herein, we use as the performance measure the Pearson correlation between a new sample and its representation in the current dictionary, $r(x_t, D^{(t-1)}\alpha_t)$, denoted $pc(x_t, D^{(t-1)}, \alpha_t)$ (for a batch of data, the average of $pc(\cdot)$ over the batch is taken). If it drops below a certain pre-specified threshold $\gamma$ (where $0 \leq \gamma \leq 1$), the neurogenesis is triggered (step 5 in Alg. 1).
The number $k_n$ of new dictionary elements is proportional to the error $1 - pc(\cdot)$, so that worse performance triggers more neurogenesis, and vice versa; the maximum number of new elements is bounded by $c_k$ (step 6 in Alg. 1). We refer to this approach as conditional neurogenesis, as it involves the conditional birth of new elements. Next, $k_n$ random elements are generated and added to the current dictionary (step 7), and the memory matrices $A$ and $B$ are updated accordingly to account for the larger dictionary (step 8). Finally, the sparse code is recomputed for $x_t$ (or for all the samples in the current batch) with respect to the extended dictionary (step 9).

The next step is the dictionary update which, like the standard online dictionary learning, uses block-coordinate descent. However, the objective function includes additional regularization terms, as compared to (2):

$D^{(t)} = \arg\min_{D \in \mathcal{C}} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \gamma_g \sum_j \|d_j\|_2 + \sum_j \lambda_j \|d_j\|_1. \qquad (5)$

The first term is the standard reconstruction error, as before. The second term, the $l_1/l_2$-regularization, promotes group sparsity over the dictionary entries, where each group corresponds to a column, i.e., to a dictionary element. The group-sparsity regularizer (Yuan & Lin, 2006) causes some columns of $D$ to be set to zero (the columns less useful for accurate data representation), thus effectively eliminating the corresponding dictionary elements from the dictionary ("killing" the corresponding hidden units). As mentioned previously, Bengio et al. (2009) used the $l_1/l_2$-regularizer in dictionary learning, though not in the online setting and without neurogenesis.

Finally, the third term imposes $l_1$-regularization on the dictionary elements, thus promoting a sparse dictionary in addition to the sparse codes. Introducing sparsity in dictionary elements, corresponding to sparse connectivity of hidden units in the neural-network representation of a dictionary, is motivated both by biological plausibility (neuronal connectivity tends to be rather sparse in multiple brain networks) and by the computational advantages this extra regularization can provide, as we observe later in the experiments (Sec. 4).

As in the original algorithm of Mairal et al. (2009), the above objective is optimized by block-coordinate descent, where each block of variables corresponds to a dictionary element, i.e., to a column of $D$; the loop in steps 12-19 of Alg. 1 iterates until convergence, defined by the magnitude of the change between two successive versions of the dictionary falling below some threshold. For each column update, the first and the last steps (steps 14 and 17) are the same as in the original method of Mairal et al. (2009), while the two intermediate steps (steps 15 and 16) implement the additional regularization.
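Before turning to how steps 15 and 16 are implemented, the conditional-birth part of the iteration (steps 5-8 above) can be summarized in a few lines of NumPy. This is our own sketch, not the authors' released Matlab implementation; in particular, `encode` stands for an assumed LASSO encoder playing the role of step 4:

```python
import numpy as np

def conditional_neurogenesis(x_batch, D, A, B, encode, gamma=0.9, c_k=50, rng=None):
    """Steps 5-8 of Alg. 1: add k_n random elements if the mean Pearson
    correlation between samples and their reconstructions drops below gamma."""
    rng = rng or np.random.RandomState(0)
    codes = encode(x_batch, D)                          # step 4 (assumed helper)
    recon = D @ codes
    pc = np.mean([np.corrcoef(x, r)[0, 1] for x, r in zip(x_batch.T, recon.T)])
    if pc <= gamma:
        k_n = int(round((1.0 - pc) * c_k))              # worse fit -> more births
        D = np.hstack([D, rng.randn(D.shape[0], k_n)])  # step 7: random new elements
        A = np.pad(A, ((0, k_n), (0, k_n)))             # step 8: A <- [[A, 0], [0, 0]]
        B = np.pad(B, ((0, 0), (0, k_n)))               #         B <- [B, 0]
    return D, A, B
```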
Both steps 15 and 16 (sparsity and group-sparsity regularization) are implemented using the standard proximal operators, as described in Jenatton et al. (2011). Note that we actually take as input the desired numbers of non-zeros and determine the corresponding sparsity parameters $\lambda_c$ and $\lambda_j$ using a binary search procedure (see Appendix).

Algorithm 1 Neurogenetic Online Dictionary Learning (NODL)
Require: data stream $x_1, x_2, \dots, x_n \in \mathbb{R}^m$; initial dictionary $D \in \mathbb{R}^{m \times k}$; conditional neurogenesis threshold $\gamma$; max number of new elements added per data batch $c_k$; group-sparsity regularization parameter $\gamma_g$; number of non-zeros in a dictionary element $\beta_d$; number of non-zeros in a code $\beta_c$.
1: Initialize: $A \leftarrow 0$, $B \leftarrow 0$   % reset the "memory" (a single sample per batch is assumed, for simpler exposition)
2: for $t = 1$ to $n$ do
3:   Input $x_t$   % representing the $t$-th batch of data
     % Sparse coding of the data:
4:   $\alpha_t = \arg\min_{\alpha \in \mathbb{R}^k} \frac{1}{2}\|x_t - D\alpha\|_2^2 + \lambda_c \|\alpha\|_1$   % $\lambda_c$ tuned to yield $\beta_c$ non-zeros in $\alpha_t$
     % Conditional neurogenesis: if the accuracy is below threshold, add more elements (no more than the number of samples in a batch):
5:   if $pc(x_t, D, \alpha_t) \leq \gamma$ then
6:     $k_n = (1 - pc(x_t, D, \alpha_t))\, c_k$   % the number of neuronal births
7:     $D_n \leftarrow \mathrm{initializeRand}(k_n)$;  $D \leftarrow [D\ D_n]$
8:     $A \leftarrow \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix}$;  $B \leftarrow [B\ 0]$;  $k \leftarrow k + k_n$
     % Repeat the sparse coding, now including the new dictionary elements:
9:     $\alpha_t = \arg\min_{\alpha \in \mathbb{R}^k} \frac{1}{2}\|x_t - D\alpha\|_2^2 + \lambda_c \|\alpha\|_1$
10:  end if   % end of neurogenesis
     % "Memory" update:
11:  $A \leftarrow A + \alpha_t \alpha_t^T$;  $B \leftarrow B + x_t \alpha_t^T$
     % Dictionary update by block-coordinate descent with $l_1/l_2$ group sparsity:
12:  repeat
13:    for $j = 1$ to $k$ do
14:      $u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}$
         % Sparsifying the elements (optional):
15:      $v_j \leftarrow \mathrm{Prox}_{\lambda_j \|\cdot\|_1}(u_j) = \mathrm{sgn}(u_j)(|u_j| - \lambda_j)_+$   % $\lambda_j$ tuned to yield $\beta_d$ non-zeros in $v_j$
         % Killing useless elements with $l_1/l_2$ group sparsity:
16:      $w_j \leftarrow v_j \left(1 - \frac{\gamma_g}{\|v_j\|_2}\right)_+$
17:      $d_j \leftarrow \frac{w_j}{\max(1, \|w_j\|_2)}$
18:    end for
19:  until convergence
20: end for
21: return $D$

Overall, the key feature of our algorithm is the interplay of both the (conditional) birth and the (group-sparsity) death of dictionary elements in an online setting.
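For reference, a single pass of the inner loop over one column can be written compactly. This is our own NumPy sketch of steps 14-17 under the conventions above (soft-thresholding as the prox of the $l_1$ term in step 15, the group-sparsity shrinkage in step 16), with `lam_j` and `gamma_g` assumed given rather than found by the binary search:

```python
import numpy as np

def update_column(j, D, A, B, lam_j, gamma_g):
    """One block-coordinate update of dictionary column j (steps 14-17 of Alg. 1)."""
    # Step 14: unregularized column update (assumes A[j, j] != 0)
    u = (B[:, j] - D @ A[:, j] + D[:, j] * A[j, j]) / A[j, j]
    # Step 15: soft-thresholding (prox of lam_j * ||.||_1) sparsifies the element
    v = np.sign(u) * np.maximum(np.abs(u) - lam_j, 0.0)
    # Step 16: group-sparsity shrinkage; the whole column dies if ||v||_2 <= gamma_g
    norm_v = np.linalg.norm(v)
    w = v * max(0.0, 1.0 - gamma_g / norm_v) if norm_v > 0 else v
    # Step 17: project back onto the unit l2 ball
    D[:, j] = w / max(1.0, np.linalg.norm(w))
    return D
```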
3.1 DISCUSSION OF IMPORTANT ALGORITHMIC DETAILS

A rationale behind sparsity of dictionary elements. We focus here on sparse dictionary elements, which, in network terms, correspond to sparse connectivity between hidden units and their inputs; one reason for this choice is that sparse connectivity appears to be a more biologically plausible assumption than the fully-connected architecture implied by a dense dictionary, in many brain areas and specifically between the dentate gyrus and CA3. The other reason relates to computational advantages.

Note that Mairal et al. (2009) state that the convergence guarantees for the original ODL algorithm would also hold in the case of sparse dictionary elements. However, no empirical evaluation is provided for this case, and we are not aware of any previous work on sparse coding with an extensive empirical evaluation of such a setting. The prior focus on dense rather than sparse dictionary elements is perhaps more natural when the input consists of a large number of relatively small image patches, so that each element also represents a small patch. In our work, however, the dictionary is learned on full images, and thus a nonzero pattern in a sparse dictionary element corresponds to a small patch within a larger image, with multiple sparse elements (patches) covering the image. Thus, rather than explicitly representing an image as a set of patches and then learning a dictionary of dense elements for the accurate representation of such patches, a dictionary of full-image-size but sparse dictionary elements can be used to implicitly represent an image as a linear combination of those elements, with possible overlap of non-zero pixels between elements; the non-zero pixels of a sparse dictionary element are learned automatically. The computational advantages of using sparse dictionaries are demonstrated in our experimental results (Sec. 4), where classifiers learned on top of representations extracted with sparse dictionaries yield smaller errors.

The memory matrix A and its properties. The matrix $A$ keeps, in a sense, the "memory" of the encodings $\alpha_t$ of the previous data samples, as it accumulates the sum of the $\alpha_t \alpha_t^T$ matrices over the iterations $t$. It turns out that the matrix $A$ can have a significant effect on dictionary learning in both the ODL and the NODL algorithms. As pointed out in (Mairal et al., 2009), the quadratic surrogate function in (2) is strictly convex with a lower-bounded Hessian $A$, ensuring convergence to a solution. From a practical standpoint, however, when the matrix $A$ has a high condition number (the ratio of the largest to the smallest singular value in its singular value decomposition), despite its lower-bounded eigenvalues, the adaptation of the dictionary elements using the standard ODL algorithm can be difficult, as we see in our experiments. This effect is more pronounced when the dictionary elements are sparse, since the condition number of $A$ then becomes high due to the complementary roles the sparse dictionary elements play in the reconstruction process (compare the matrices $A$ obtained with dense and with sparse elements in Fig. 6(a) and 6(b), respectively). In such scenarios, the submatrix of $A$ corresponding to the new elements added to the dictionary by our NODL algorithm can have a better condition number, leading to an improved adaptation of the dictionary.

Code sparsity. Code sparsity is controlled by the parameter $\beta_c$, the number of nonzeros, which determines the corresponding regularization weight $\lambda_c$ in step 4 of Alg. 1; note that $\lambda_c$ is determined via binary search for each input sample separately, as shown in Algorithm 2 (in the Appendix), and thus may vary slightly across instances for a fixed $\beta_c$.

Selecting an appropriate level of code sparsity depends on the choice of the other parameters, such as the input batch size, the sparsity of the dictionary elements, the extent of non-stationarity and the complexity of the data, and so on. When the dictionary elements are themselves sparse, denser codes may be more appropriate, since each sparse dictionary element represents only a relatively small subset of the image pixels, so a large number of such subsets, covering the whole image, may be needed for an accurate input representation.

Interestingly, using very sparse codes in combination with non-sparse dictionary elements in the standard ODL approach can sometimes lead to the creation of "dead" (zero $l_2$-norm) elements in the dictionary, especially if the input batch size is small. This is avoided by our NODL algorithm, since such dead elements are implicitly removed via group sparsity at the dictionary update step, along with the "weak" (very small $l_2$-norm) elements. Also, very high code sparsity combined with dense dictionary elements can lead to a significant decrease in the reconstruction accuracy, for both ODL and our NODL, when the online data stream is non-stationary. Such shortcomings were not encountered in (Mairal et al., 2009; 2010), where only stationary data streams were studied, in both the theoretical and the empirical results. On the other hand, high sparsity of the dictionary elements does not seem to cause a degradation in the reconstruction accuracy, as long as the codes are not too sparse.
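The binary-search procedure itself is deferred to Algorithm 2 in the Appendix, so the following is only a plausible reconstruction (ours) of how the $l_1$ weight can be tuned to hit a target number of non-zeros; the bracket endpoints, iteration count, and the use of scikit-learn's Lasso are our assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def tune_lambda(x, D, target_nnz, lam_lo=1e-6, lam_hi=10.0, iters=30):
    """Binary search for the l1 weight yielding ~target_nnz non-zeros in the code."""
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        code = Lasso(alpha=lam, fit_intercept=False, max_iter=2000).fit(D, x).coef_
        nnz = np.count_nonzero(code)
        if nnz > target_nnz:      # code too dense -> increase the penalty
            lam_lo = lam
        elif nnz < target_nnz:    # code too sparse -> decrease the penalty
            lam_hi = lam
        else:
            break
    return lam, code
```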
The choice and tuning of the metric for conditional neuronal birth. In the "conditional birth" approach described above, the number of new elements $k_n$ is determined based on the performance of the current dictionary, using the Pearson correlation between the actual and the reconstructed data for the current batch. This is, of course, just one particular way of measuring data non-stationarity and the need for adaptation, but we consider it a reasonable heuristic. A low reconstruction error indicates that the old dictionary is still capable of representing the new data, so less adaptation may be needed, while a high error indicates that the data distribution might have changed, triggering neurogenesis in order to better adapt to the new environment. We choose the Pearson correlation as the measure of reconstruction accuracy since its value is easily interpretable and always lies in the range $[0, 1]$ (unlike, for example, the mean-square error), which simplifies the tuning of the threshold parameter $\gamma$. Clearly, one can also try other interpretable metrics, such as, for example, the Spearman correlation.

Tuning parameters: group sparsity $\gamma_g$ and others. The group-sparsity regularization parameter $\gamma_g$ controls the amount of removal ("death") of elements in NODL: in step 16 of Alg. 1, all elements with $l_2$-norm below $\gamma_g$ (i.e., "weak" elements) are set to zero ("killed"). Since the dictionary elements are normalized to have $l_2$-norm at most one, we only need to consider $\gamma_g \in [0, 1]$. (Note that the step of killing dictionary elements precedes the normalization step in the algorithm; thus, the tuning of $\gamma_g$ is affected by the normalization of the elements from the previous iteration.) Increasing the sparsity of the dictionary elements, i.e., decreasing $\beta_d$ (the number of nonzeros in a dictionary element), may require a corresponding reduction of $\gamma_g$, while an increase in the input dimensionality $m$ may require an increase in $\gamma_g$. Tuning the rest of the parameters is relatively easy. Clearly, the batch size should be kept relatively small and, ideally, should not exceed the size of the "window of stationarity" in the data (however, the frequency of the input distribution changes may itself need to be estimated from the data, in which case the batch size would need to be tuned adaptively, which is outside the scope of this paper). Mairal et al. (2009) suggest using a batch size of 256 in their experiments, obtaining similar performance with the values 128 and 512. As for the maximum number $c_k$ of new elements added at each iteration, it is reasonable to keep it smaller than the batch size.

4 EXPERIMENTS

We now evaluate the proposed approach, NODL, empirically against ODL, the standard (non-adaptive) online dictionary learning of Mairal et al. (2009). Moreover, in order to evaluate separately the effects of only adding or only deleting dictionary elements, we also evaluate two restricted versions of our method: NODL+, which involves addition but no deletion (equivalent to NODL with no group sparsity, i.e., $\gamma_g = 0$), and NODL-, which, vice versa, involves deletion but no addition (equivalent to NODL with the number of new elements $c_k = 0$).
The above algorithms are evaluated in a non-stationary setting, where a sequence of training samples from one environment (the first domain) is followed by another sequence from a different environment (the second domain), in order to test their ability to adapt to new environments without "forgetting" the previous ones.

4.1 REAL-LIFE IMAGES

Our first domain includes images of Oxford buildings⁴ (an urban environment), while the second uses a combination of images from the Flowers⁵ and Animals⁶ image databases (a natural environment); examples of both types of images are shown in Fig. 1(a) and 1(b). We converted the original color images to black-and-white and compressed them to smaller sizes, 32x32 and 100x100. Note that, unlike (Mairal et al., 2009), we used full images rather than image patches as our inputs.

[Figure 1: The image data sets for the evaluation of the online dictionary learning algorithms. Panels: (a) Urban: Oxford Buildings; (b) Nature: Flowers and Animals.]
[Figure 2: Reconstruction accuracy of NODL and ODL on 32x32 images (sparse dictionary). Panels: (a) learned dictionary size; (b) 1st domain (Oxford); (c) 2nd domain (Flowers).]
[Figure 3: Reconstruction accuracy of NODL and ODL on 100x100 images with sparse dictionary elements (50 non-zeros) and non-sparse codes. Panels: (a) 1st domain (Oxford); (b) 2nd domain (Flowers); (c) classification error.]

⁴ http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/index.html
⁵ http://www.robots.ox.ac.uk/~vgg/data/flowers/102/
⁶ http://www.robots.ox.ac.uk/~vgg/data/pets/

We selected 5700 images for training and another 5700 for testing; each subset contained 1900 images of each type (i.e., Oxford, Flowers, Animals). In the training phase, as mentioned above, each online dictionary learning algorithm receives a sequence of 1900 samples from the first, urban domain (Oxford), and then a sequence of 3800 samples from the second, natural domain (1900 Flowers and 1900 Animals, permuted randomly). At each iteration, a batch of 200 images is received as input. (For comparison, Mairal et al. (2009) used a batch size of 256, though with image patches rather than full images.) The following parameters are used by our algorithm: Pearson correlation threshold $\gamma = 0.9$; group-sparsity parameter $\gamma_g = 0.03$ and $\gamma_g = 0.07$ for the 32x32 and 100x100 images, respectively; and an upper bound of $c_k = 50$ on the number of new dictionary elements per iteration. (We observed that the results are only mildly sensitive to the specified parameter values.)

Once the training phase is completed, the resulting dictionary is evaluated on test images from both the first (urban) and the second (natural) domain; for the second domain, separate evaluations are performed for flowers and animals. First, we evaluate the reconstruction ability of the resulting dictionary $D$, comparing the actual inputs $x$ with their approximations $\hat{x} = D\alpha$, using the mean square error (MSE), the Pearson correlation, and the Spearman correlation. We present the results for the Pearson correlation between the actual and reconstructed inputs, since all three metrics show consistent patterns (for completeness, the MSE results are shown in the Appendix). Moreover, we evaluate the dictionaries in a binary classification setting (e.g., flowers vs. animals), using as features the codes of the test samples in a given dictionary.
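To make this evaluation protocol concrete, the reconstruction metric can be computed as in the sketch below (ours; `encode` again denotes an assumed LASSO encoder, as in step 4 of Alg. 1):

```python
import numpy as np

def reconstruction_accuracy(X_test, D, encode):
    """Mean Pearson correlation between test samples and their reconstructions D @ alpha."""
    codes = encode(X_test, D)            # assumed LASSO encoder (step 4 of Alg. 1)
    recon = D @ codes
    corrs = [np.corrcoef(x, r)[0, 1] for x, r in zip(X_test.T, recon.T)]
    return float(np.mean(corrs))
```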
Finally, we explored a wide range of sparsity parameters for both the codes and the dictionary elements. Our key observations are: (1) the proposed method frequently outperforms (or is at least as good as) its competitors, on both the new data (adaptation) and the old data (memory); (2) it is most beneficial when the dictionary elements are sparse; (3) vice versa, when the dictionary elements are dense, the neurogenetic approach matches the baseline, fixed-size dictionary learning. We now discuss the results in detail.

Sparse Dictionary Elements

In Fig. 2, we present the results for sparse dictionaries, where each column (an element of the dictionary) has 5 nonzeros out of the 1024 dimensions; the codes are relatively dense, with at most 200 nonzeros out of $k$ (the number of dictionary elements), with $k$ ranging from 5 to 1000 (i.e., the codes are not sparse for $k \leq 200$). Due to space limitations, our results for a wider range of dictionary and code sparsity levels are given in the Appendix (Sec. B.2, Fig. 12). In Fig. 2(a), we compare the dictionary sizes for the different methods: the final dictionary size after completing the training phase (y-axis) is plotted against the initial dictionary size (x-axis). Obviously, the baseline (fixed-size) ODL method (magenta plot) keeps the size constant, the deletion-only NODL- approach reduces the initial size (red plot), and the addition-only NODL+ increases it (light-blue plot). However, the interplay between addition and deletion in our NODL method (dark blue) produces a more interesting behavior: it tends to adjust the representation complexity towards a certain balanced range, i.e., very small initial dictionaries are expanded, while very large ones are, vice versa, reduced.

Our main results demonstrating the advantages of the proposed NODL method are shown in Fig. 2(b) and Fig. 2(c), for the "old" (Oxford) and the "new" (Flowers) domain, respectively. (Very similar results are obtained for Animals as well; see the Appendix.) The x-axis shows the final dictionary size, and the y-axis the reconstruction accuracy achieved by the trained dictionary on the test samples, measured by the Pearson correlation between the actual and the reconstructed data. NODL clearly outperforms the fixed-size ODL, especially for smaller dictionary sizes; remarkably, this happens in both domains, i.e., besides improved adaptation to the new data, NODL is also better at preserving the "memories" of the old data, without increasing the representation complexity, i.e., for the same dictionary size.

Interestingly, deletion alone does not suffice, as the deletion-only version, NODL-, is inferior to our NODL method. On the other hand, the addition-only method, NODL+, is as accurate as NODL, but tends to increase the dictionary size too much. The interplay between the addition and deletion processes in our NODL seems to achieve the best of both worlds: superior performance while keeping the dictionary size under control, in a narrower range (400 to 650 elements), expanding small dictionaries as necessary while compressing large ones. (In our experiments, we also tracked which dictionary elements get deleted by our method; generally, both old and newly added elements get deleted, depending on the specific settings.)

We now focus on comparing the two main methods, the baseline ODL and the proposed NODL. The advantages of our approach become even more pronounced for larger input sizes, e.g., 100x100 images, in similar sparse-dictionary, dense-code settings. (We keep the dictionary elements at the same sparsity rate, 50 nonzeros out of 10,000 dimensions, and simply use completely non-sparse codes.)
In Fig. 3(a) and Fig. 3(b), we see that NODL considerably outperforms ODL on both the first domain (Oxford) and part of the second domain (Flowers); the results for Animals are very similar and are given in the Appendix (Fig. 10). In Appendix Sec. B.6, Fig. 17 shows examples of actual animal images and the corresponding reconstructions by the fixed-size ODL and our NODL methods (not included here due to space restrictions). A better reconstruction quality of our method can be observed there (e.g., a more visible dog shape and more details, such as the dog's legs, as opposed to the collection of clusters produced by ODL; note, however, that printer resolution may reduce the visible difference, so viewing the images in the online version of this paper is recommended).

Moreover, NODL can also be beneficial in classification settings. Given a dictionary, i.e., a sparse linear autoencoder trained in an unsupervised setting, we use the codes (i.e., the feature vectors) computed on the test data from the second domain (Animals and Flowers) and evaluate multiple classifiers learned on those features to discriminate between the two classes. In Fig. 3(c), we show the logistic regression results using 10-fold cross-validation; similar results for several other classifiers are presented in the Appendix (Fig. 10). Note that we also perform filter-based feature-subset selection, using each feature's statistical significance, as measured by its p-value, as the ranking function, and selecting the subsets of the top $k$ features, increasing $k$ from 1 to the total number of features (the code length, i.e., the number of dictionary elements). The x-axis in Fig. 3(c) shows the value of $k$, while the y-axis plots the classification error rate for the features derived by each method. We can see that our NODL method (blue) yields lower errors than the baseline ODL (magenta) for relatively small feature subsets, although the difference is negligible for the full feature set. Overall, this suggests that our NODL approach achieves a better reconstruction of the input data, without extra overfitting in the classification setting, since it generalizes at least as well as, and often better than, the baseline ODL method.

Non-sparse dictionary elements

When exploring a wide range of sparsity settings (see the Appendix), we observed quite different results for non-sparse dictionaries than those presented above. Fig. 8(b) (in the Appendix, due to space constraints) summarizes the results for a particular setting of fully dense dictionaries (no zero entries) but sparse codes (50 non-zeros out of up to 600 dictionary elements; the codes are still dense when the dictionary size is below 50). In this setting, unlike the previous one, we do not observe any significant improvement in accuracy due to the neurogenetic approach, in either reconstruction or classification; both methods perform practically the same. (Also, note a somewhat surprising phenomenon: after a certain point, i.e.,
about 50 elements, the reconstruction accuracy of both methods actually declines, rather than improves, with increasing dictionary size.) It is interesting to note, however, that the overall classification errors of both methods are much higher in this setting (from 0.4 to 0.52) than in the sparse-dictionary setting (from 0.22 to 0.36). Even using non-sparse codes in the non-sparse dictionary setting still yields inferior results compared to sparse dictionaries (see the results in the Appendix).

In summary, on the real-life image datasets considered here, our NODL approach is often superior (and never inferior) to the standard ODL method; moreover, there is consistent evidence that our approach is most beneficial in sparse dictionary settings.

4.2 SPARSE ORTHOGONAL INPUTS: NLP AND SYNTHETIC DATA

So far, we have explored conditions on the methods' properties (e.g., sparse versus dense dictionaries, as well as code sparsity/density) that can be beneficial for the neurogenetic approach. Our further question is: what kind of specific data properties would best justify neurogenetic over traditional, fixed-size dictionary learning? As it turns out, the fixed-size ODL approach has difficulty adapting to a new domain in non-stationary settings when the data in both domains are sparse and, across the domains, the supports (i.e., the sets of non-zero coordinates) are almost non-overlapping (i.e., the datasets are nearly orthogonal). This type of data property is related to the natural language processing problem considered below. Furthermore, pushing this type of structure to the extreme, we used simulations to better understand the behavior of our method. Herein we focused, again, on sparse dictionary elements, as a basis well suited for representing sparse data. Moreover, our empirical results confirm that using dense dictionary elements does not yield good reconstruction of sparse data, as expected.

Sparse Natural Language Processing Problem

We consider a very sparse word co-occurrence matrix (on average, about 14 non-zeros in a column of size 12,883), using text from two different domains, biology and mathematics, with a total vocabulary size of approximately 12,883 words. The full matrix was split in two for illustration purposes, as shown in Fig. 4(c) and 4(d), where the math terms correspond to the first block of columns and the biology terms to the second one (though it might be somewhat hard to see in the picture, the average number of nonzeros per row/column is indeed about 14).

We use the sparse columns (or rows) of the matrix, indexed by the vocabulary words, as our input data for learning a dictionary of sparse elements (25 non-zeros) with sparse codes (38 non-zeros). The corresponding word codes in the learned dictionary can later be used as word embeddings, or word vectors, in various NLP tasks such as information extraction, semantic parsing, and others (Yogatama et al., 2015; Faruqui et al., 2015; Sun et al., 2016). (Note that many non-domain-specific words were removed from the vocabulary to obtain the final size of 12,883.) Herein, we evaluate our NODL method (NODL (sparse) in the plots) against the baseline ODL approach (ODL (sparse)) in a setting where the biology domain is processed first and one then has to switch to the mathematics domain. We use 2750 samples from each domain for training and the same number for testing. The evaluation results are shown in Fig. 4.
For the first domain (biology), both methods perform very similarly (i.e., remember the old data equally well), while for the second, more recent domain, our NODL algorithm clearly outperforms its competitor. Moreover, as mentioned above, non-sparse (dense) dictionaries are not suited for modeling highly sparse data such as our NLP data: in Fig. 4, both the random dense dictionaries (random-D) and the dense dictionaries learned with ODL (i.e., ODL (dense)) do poorly in both the biology and the mathematics domains.

However, the reconstruction accuracy, as measured by the Pearson correlation, was not very high overall, i.e., this problem turned out to be more challenging than encoding image data. It gave us an intuition about the structure of sparse data that may be contributing to the improvements due to neurogenesis. Note that a word co-occurrence matrix built from different domains, such as biology and mathematics, tends to have an approximately block-diagonal structure, where words from the same domain co-occur more frequently with each other than with words from the other domain. Pushing this type of structure to the extreme, we next studied a simulated sparse dataset in which the samples from the two domains are not only sparse, but have completely non-overlapping supports, i.e., the data matrix is block-diagonal (see Fig. 7(c) in the Appendix).

[Figure 4: Reconstruction accuracy for the sparse NLP data. Panels: (a) 1st domain (Biology); (b) 2nd domain (Mathematics); (c) Biology; (d) Math.]
[Figure 5: Reconstruction accuracy for the sparse synthetic data. Panels: (a) Pearson, first domain; (b) Pearson, second domain; (c) D, ODL; (d) D, NODL (ours).]

Synthetic Sparse Data

We generated a synthetic sparse dataset with 1024 dimensions and only 50 nonzeros in each sample. Moreover, we ensured that the data in the two domains had non-overlapping supports (i.e., non-intersecting sets of non-zero coordinates) by always selecting the nonzeros of the first domain from the first 512 dimensions, while using only the last 512 dimensions for the second domain (see Fig. 7(c) in the Appendix). For the evaluation on the synthetic data, we use a total of 200 samples each for training and for testing (100 samples from each of the two domains), and smaller batches for online training, containing 20 samples each (instead of the 200 samples used earlier for the image and language data).

Since the data is sparse, we adjust the sparsity of the dictionary elements accordingly (50 nonzeros per element; for the code sparsity, we also present results with 50 nonzeros). In Fig. 5, we show the reconstruction accuracy for the first- and second-domain data. For the first domain, the baseline ODL method (ODL (sparse) in the plots) and our NODL (NODL (sparse)) perform equally well. For the second domain, on the other hand, the performance of the ODL algorithm degrades significantly compared to the first domain, because the data from the second domain have non-overlapping support w.r.t. the data from the first domain; our method performs very well on the second domain (almost as well as on the first). It is also interesting to analyze the case of a random non-sparse dictionary (random-D), which even outperforms the baseline ODL method on the second domain: random dictionary elements remain non-sparse in all dimensions, and thereby do an average job in both domains. Along the same lines, ODL (dense) performs better than ODL (sparse) on the second domain.
In Fig. 5(c) and Fig. 5(d), we show the sparsity structure of the dictionary elements learned by the baseline ODL method and by our NODL method, respectively. These plots give better insight into why the baseline method does not work: it keeps the same sparsity structure that it used for the data from the first domain. Our NODL adapts to the second-domain data because of its ability to add new dictionary elements, which are randomly initialized with non-zero support in all dimensions.

Next, in Sec. 5, we discuss our intuitions on why NODL performs better than the ODL algorithm under certain conditions.

5 WHEN NEUROGENESIS CAN HELP, AND WHY

In Sec. 4, we observed that our NODL method outperforms the ODL algorithm in two general settings, both involving sparse dictionary elements: (i) non-sparse data, such as real-life images, and (ii) sparse data with (almost) non-overlapping supports. In this section, we attempt to analyze what contributes to the success of our approach in these settings, starting with the last one.

Sparse data with non-overlapping supports, sparse dictionary

As discussed above, in this scenario the data from both the first and the second domain are sparse, and their supports (non-zero dimensions) are non-overlapping, as shown in Fig. 7(c). Note that, when training a dictionary using the fixed-size, sparse-dictionary ODL method, we observe only a minor adaptation to the second domain after training on the first domain, as shown in Fig. 5(c). Our empirical observations are supported by the theoretical result summarized in Lemma 1 below. Namely, we prove that when the ODL algorithm is used in the above scenario, the dictionary trained on the first domain cannot adapt to the second domain. (The minor adaptation, i.e., the few nonzeros observed in our results in Fig. 5(c), occurs only due to an implementation detail involving normalization of the sparse dictionary elements when computing codes: the normalization introduces non-zeros of small magnitude in all dimensions. See the Appendix for experiment results with no normalization of the elements, conforming to Lemma 1.)

Lemma 1. Let $x_1, x_2, \ldots, x_{t-1} \in \mathbb{R}^m$ be a set of samples from the first domain, with non-zeros (support) in the set of dimensions $P \subseteq M = \{1, \ldots, m\}$, and let $x_t, x_{t+1}, \ldots, x_n \in \mathbb{R}^m$ be a set of samples from the second domain, with non-zeros (support) in dimensions $Q \subseteq M$, such that $P \cap Q = \emptyset$ and $|P| = |Q| = l$. Let $d_1, d_2, \ldots, d_k \in \mathbb{R}^m$ denote the dictionary elements learned by the ODL algorithm, with the sparsity constraint of at most $l$ nonzeros in each element ($l$ corresponds to $\beta_d$ in Alg. 1), on the data from the first domain, $x_1, \ldots, x_{t-1}$. Then (1) those elements have non-zero support in $P$ only, and (2) after learning from the second-domain data, the support (nonzero dimensions) of the corresponding updated dictionary elements will remain in $P$.

Proof Sketch. Let us consider processing the data from the first domain. At the first iteration, a sample $x_1$ is received, its code $\alpha_1$ is computed, and the matrices $A$ and $B$ are updated, as shown in Alg. 1 (the non-highlighted part).
Next, the dictionary update step is performed, which optimizes

$$D^{(1)} = \arg\min_{D \in \mathcal{C}} \frac{1}{2}\,\mathrm{Tr}(D^T D A) - \mathrm{Tr}(D^T B) + \sum_j \lambda_j \|d_j\|_1. \quad (6)$$

Since the support of $x_1$ is limited to $P$, we can show that the optimal dictionary $D$ must also have all of its columns/elements supported in $P$. Indeed, assume the contrary: let $d_j(i) \neq 0$ for some dictionary element/column $j$, where $i \notin P$. Then it is easy to see that setting $d_j(i)$ to zero reduces both the sum-squared error and the $l_1$-norm in (6), yielding another dictionary that achieves a lower overall objective; this contradicts the assumption that $D$ was optimal. Thus, the dictionary update step must produce a dictionary where all columns have their support in $P$. By induction, this statement also holds for the dictionary obtained after processing all samples from the first domain. Next, the samples from the second domain start arriving; note that those samples belong to a different subspace, spanning the dimensions within the support set $Q$, which does not intersect $P$. Thus, using the current dictionary, the encoding $\alpha_t$ of the first sample $x_t$ from the second domain (i.e., the solution of the LASSO problem in step 4 of Alg. 1) will be a zero vector. Therefore, the matrices $A$ and $B$ remain unchanged during the update in step 11, and thus the support of each $b_j$ and, consequently, of each $u_j$ and each updated dictionary element $d_j$ will remain in $P$. By induction, every dictionary update in response to a new sample from the second domain preserves the support of the dictionary elements, and thus the final dictionary elements will also have their support only in $P$.

Non-sparse data, sparse dictionary

We will now discuss an intuitive explanation behind the success of the neurogenetic approach in this scenario, leaving a formal theoretical analysis as a direction for future work.

[Figure 6: Visualization of the sparse dictionary and the matrix $A$ learned on the first imaging domain (Oxford images), using the baseline ODL method and our method. Panels: (a) $A$ with the ODL method (dense elements), (b) $A$ with the ODL method (sparse elements), (c) $A$ with our method (sparse elements), (d) $D$ with the ODL method (sparse elements).]

When learning sparse dictionaries on non-sparse data such as natural images, we observed that many dictionary elements have non-overlapping supports with respect to each other; see, for example, Fig. 6(d), where each column corresponds to a 10,000-dimensional dictionary element with its nonzero dimensions shown in black. Apparently, the non-zero dimensions of an element tend to cluster spatially, i.e., to form a patch in an image. The non-overlapping supports of the dictionary elements result in a specific structure of the matrix $A$. As shown in Fig. 6(b), for the ODL approach the resulting matrix $A$ includes many off-diagonal nonzero entries of large absolute value (along with high values on the diagonal). Note that, by definition, $A$ is an empirical covariance of the code vectors, and it is easy to see that a nonzero value of $a_{jk}$ implies that the $j$-th and the $k$-th dictionary elements were used jointly to explain the same data sample(s). Thus, the dense matrix structure with many non-zero off-diagonal entries, shown in Fig. 6(b), implies that, when the dictionary elements are sparse, they will often be used jointly to reconstruct the data. On the other hand, in the case of non-sparse dictionary elements, the matrix $A$ has an almost diagonally-dominant structure, i.e.,
only a few dictionary elements are effectively used in the reconstruction of each data sample, even with non-sparse codes (see the Appendix for details).

Note that in the dictionary update expression $u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}$ in (3), when the ratios $a_{jk}/a_{jj}$ are large for multiple $k$, the $j$-th dictionary element becomes tightly coupled with other dictionary elements, which reduces its adaptability to new, non-stationary data. In our algorithm, the ratios $a_{jk}/a_{jj}$ remain high if the elements $j$ and $k$ have a similar "age"; however, those values are much lower if one of the elements was introduced by neurogenesis much more recently than the other. In Fig. 6(c), the upper-left block on the diagonal, representing the oldest elements (added during initialization), is not diagonally dominant (see the sub-matrices of $A$ with NODL in Fig. 14 in the Appendix). The lower-right block, corresponding to the most recently added new elements, may also have a similar structure (though it is not visible due to the relatively low magnitudes of the new elements; see the Appendix). Overall, our interpretation is that the old elements are tied to each other, whereas the new elements may also be tied to each other, but less strongly, and are not tied to the old elements, yielding a block-diagonal structure of $A$ in the case of the neurogenetic approach, where the blocks correspond to dictionary elements adapted to particular domains. In other words, neurogenesis allows for adaptation to a new domain without forgetting the old one.

6 CONCLUSIONS

In this work, we proposed a novel algorithm, Neurogenetic Online Dictionary Learning (NODL), for the problem of learning representations in non-stationary environments. Our algorithm builds a dictionary of elements by learning from an online stream of data while also adapting the dictionary structure (the number of elements/hidden units and their connectivity) via the continuous birth (addition) and death (deletion) of dictionary elements, inspired by the adult neurogenesis process in the hippocampus, which is known to be associated with better adaptation of an adult brain to changing environments. Moreover, introducing sparsity in the dictionary elements allows for adaptation of the hidden-unit connectivity and further performance improvements.

Our extensive empirical evaluation on both real-world and synthetic data demonstrated that the interplay between the birth and the death of dictionary elements allows for a more adaptive dictionary learning, better suited for non-stationary environments than both of its counterparts: the fixed-size online method of Mairal et al. (2009) (no addition and no deletion), and the online version of the group-sparse coding method by Bengio et al. (2009) (deletion only). Furthermore, we evaluated, both empirically and theoretically, several specific conditions on the method's and the data's properties (involving the sparsity of elements, codes, and data) under which our method has a significant advantage over standard, fixed-size online dictionary learning. Overall, we conclude that neurogenetic dictionary learning typically performs as well as, and often much better than, its competitors. In future work, we plan to explore a non-linear extension of the dictionary model, as well as a stacked autoencoder consisting of multiple layers.
H1BU-VuEg
Interesting idea related to biology, good experimental validation, but more work is probably needed
5: Marginally below acceptance threshold
The paper is interesting: it relates findings from neuroscience and biology to a method for sparse coding that is adaptive and able to automatically generate (or even delete) codes as new data arrive from a nonstationary distribution. I have a few points to make:
1. The algorithm could be discussed more, to give a more solid view of the contribution. The technique is not novel in spirit: codes are added when they are needed, and removed when they don't do much.
2. Is there a way to relate the organization of the data to the behavior of this method? In this paper, buildings are shown first, and natural images (which are less structured, more difficult) later. Is this just a way to perform curriculum learning? What happens when the data simply changes in structure, with no apparent movement from simple to more complex (e.g., from flowers, to birds, to fish, to leaves, to trees, etc.)? In a way, it makes sense to see an improvement when the training data has such a structure, by going from something artificial and simpler to a more complex, less structured domain.
The paper is interesting, the idea useful, with some interesting insights. I am not sure it is ready for publication yet.
3: The reviewer is fairly confident that the evaluation is correct
HyecJGP5ge
ICLR.cc/2017/conference
2017
NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD
["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"]
In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.
["Unsupervised Learning", "Computer vision", "Transfer Learning", "Optimization", "Applications"]
ABSTRACT

In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of the model's architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the "neuronal birth" is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. "Neuronal death" is implemented by imposing $l_1/l_2$-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden-unit connectivity adaptation is facilitated by introducing sparsity in the dictionary elements. Our empirical evaluation on several real-life datasets (images and language), as well as on synthetic data, demonstrates that the proposed approach can considerably outperform the state-of-the-art fixed-size (non-adaptive) online sparse coding of Mairal et al. (2009) in the presence of non-stationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.

1 INTRODUCTION

The ability to adapt to a changing environment is essential for successful functioning in both natural and artificial intelligent systems. In human brains, adaptation is achieved via neuroplasticity, which takes different forms, including synaptic plasticity, i.e., changing the connectivity strength among neurons, and neurogenesis, i.e., the birth and maturation of new neurons (accompanied by the death of some new or old neurons). In particular, adult neurogenesis (Kempermann, 2006) (i.e., neurogenesis in the adult brain) in the dentate gyrus of the hippocampus is associated with improved cognitive functions such as pattern separation (Sahay et al., 2011), and is often implicated as a "candidate mechanism for the specific dynamic and flexible aspects of learning" (Stuchlik, 2014).

In the machine-learning context, synaptic plasticity is analogous to parameter tuning (e.g., learning neural-net weights), while neurogenesis can be viewed as online model selection via the addition (and deletion) of hidden units in hidden-variable models used for representation learning (where the hidden variables represent extracted features), ranging from linear and nonlinear component analysis methods such as PCA, ICA, sparse coding (dictionary learning), and nonlinear autoencoders, to deep neural nets and general hidden-factor probabilistic models. However, optimal model selection in large-scale hidden-variable models (e.g., adjusting the number of layers, hidden units, and their connectivity) is intractable due to the enormous size of the search space. Growing a model gradually can be a more feasible alternative; after all, every real brain's "architecture" development process starts with a single cell.
Furthermore, the process of adapting the model's architecture to dynamically changing environments is necessary for achieving lifelong, continual learning. Finally, an online approach to dynamically expanding and contracting a model's architecture can serve as a potentially more effective alternative to standard off-line model selection (e.g., MDL-based off-line sparse coding (Ramirez & Sapiro, 2012)), as well as to the currently popular network compression (distillation) approaches (Hinton et al., 2015; Srivastava et al., 2014; Ba & Caruana, 2014; Bucilu et al., 2006), where a very large-scale architecture, such as a deep neural network with millions of parameters, must first be selected in ad hoc ways and trained on large amounts of data, only to be compressed later into a more compact and simpler model with similarly good performance; we hypothesize that adaptive growth and reduction of the network architecture is a viable alternative to the distillation approach, although developing such an alternative remains a topic for further research.

In this paper, we focus on dictionary learning, a.k.a. sparse coding (Olshausen & Field, 1997; Kreutz-Delgado et al., 2003; Aharon et al., 2006; Lee et al., 2006), a representation-learning approach which finds a set of basis vectors (atoms, or dictionary elements) and representations (encodings) of the input samples as sparse linear combinations of those elements. (In neural-network terms, this sparse coding framework corresponds to a single-hidden-layer linear autoencoder with sparsity constraints: the hidden units are associated with dictionary elements, each element is represented by the weight vector of a unit's outgoing links in the output layer, and the sparse vector of hidden-unit activations corresponds to the encoding of an input.) More specifically, our approach builds upon the computationally efficient online dictionary-learning method of Mairal et al. (2009), where the data samples are processed sequentially, one at a time (or in small batches). Online approaches are particularly important in large-scale applications with millions of potential training samples, where off-line learning can be infeasible; furthermore, online approaches are a natural choice for building systems capable of continual, lifelong learning.

Herein, we propose a novel online dictionary learning approach inspired by adult neurogenesis, which extends the state-of-the-art method of Mairal et al. (2009) to nonstationary environments by incorporating online model adaptation, i.e., the addition and deletion of dictionary elements (hidden units) in response to the dynamically changing properties of the input data. (An early version of our neurogenetic online dictionary learning approach was presented as a poster at the 2011 Society for Neuroscience meeting (Rish et al., 2011), although it has not previously appeared as a peer-reviewed publication.) More specifically, at each iteration of online learning (i.e., for every batch of data samples), we add a group of random dictionary elements (modeling neuronal birth), where the group size depends on the current representation error, i.e., the mismatch between the new input samples and their approximation based on the current dictionary: higher error triggers more neurogenesis. The neuronal death, which involves removing "useless" dictionary elements, is implemented as an $l_1/l_2$ group-sparsity regularization; this step is essential in neurogenesis-inspired learning, since it reins in a potentially uncontrolled growth of the dictionary and helps to avoid overfitting (note that neuronal death is also a natural part of the adult neurogenesis process, where neuronal survival depends on multiple factors, including the complexity of the learning environment (Kempermann, 2006)). Moreover, we introduce sparsity in the dictionary elements, which reflects sparse connectivity between hidden units/neurons and their inputs; this is a more biologically plausible assumption than the fully-connected architecture of standard dictionary learning, and it also works better in our experiments.
Thus, adaptation in our model involves not only the addition/deletion of elements, but the adaptation of their connectivity as well.

We demonstrate on both simulated data and on two real-life datasets (natural images and language processing) that, in the presence of a non-stationary input, our approach can significantly outperform the non-adaptive, fixed-dictionary-size online method of Mairal et al. (2009). Moreover, we identify certain data properties and parameter settings associated with such improvements. Finally, we demonstrate that the novel approach not only improves the representation accuracy, but can also boost the classification accuracy based on the extracted features.

Note that, although the group-sparsity constraint enforcing the deletion of some dictionary elements was introduced earlier in the group-sparse coding method of Bengio et al. (2009), it was only implemented and tested in the off-line rather than the online setting, and, most importantly, it was not accompanied by neurogenesis. On the other hand, while some prior work considered online node addition in hidden-variable models, and specifically in neural networks, from cascade correlations (Fahlman & Lebiere, 1989) to the recent work by Draelos et al. (2016a;b), no model pruning was incorporated in those approaches to balance the model expansion.
Overall, we are not aware of any prior work that proposes and systematically evaluates, empirically and theoretically, a dynamic process involving both the addition and the deletion of hidden units in the online model-selection setting, either in sparse coding or in a neural-network setting.

To summarize, the main contributions of this paper are as follows:
- We propose a novel online model-selection approach to dictionary learning, inspired by the adult neurogenesis phenomenon; our method significantly outperforms the state-of-the-art baseline, especially in non-stationary settings. (The Matlab code is available at https://github.com/sgarg87/neurogenesis_inspired_dictionary_learning.)
- We perform an extensive empirical evaluation, on both synthetic and real data, in order to identify the conditions under which the proposed adaptive approach is most beneficial, both for data reconstruction and for classification based on the extracted features; we conclude that these conditions include a combination of sparse dictionary elements (and thus a more biologically plausible sparse network connectivity, as opposed to fully connected units), accompanied by sufficiently dense codes.
- Furthermore, we provide an intuitive discussion, as well as a theoretical analysis, of certain combinations of the input data properties and the algorithm's parameters for which the proposed approach is most beneficial.
- From the neuroscientific perspective, we propose a computational model which supports earlier empirical observations indicating that adult neurogenesis is particularly beneficial in changing environments, and that a certain amount of neuronal death, which accompanies the neuronal birth, is an important component of an efficient neurogenesis process.
- Overall, to the best of our knowledge, we are the first to perform an in-depth evaluation of the interplay between the birth and the death of hidden units in the context of online model selection in representation learning, and, more specifically, in online dictionary learning.

This paper is organized as follows. In Sec. 2, we summarize the state-of-the-art non-adaptive (fixed-size) online dictionary learning method of Mairal et al. (2009). In Sec. 3, we describe our adaptive online dictionary learning algorithm. In Sec. 4, we present our empirical results on both synthetic and real datasets, including image and language data. Next, in Sec. 5, we provide a theoretical, as well as an intuitive, analysis of the settings which can benefit most from our approach. Finally, we conclude with a summary of our contributions in Sec. 6. The implementation details of the algorithms and additional experimental results are described in the Appendix.

2 BACKGROUND ON DICTIONARY LEARNING

Traditional off-line dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Lee et al., 2006) aims at finding a dictionary $D \in \mathbb{R}^{m \times k}$ which allows for an accurate representation of a training data set $X = \{x_1, \ldots, x_n \in \mathbb{R}^m\}$, where each sample $x_i$ is approximated by a linear combination $x_i \approx D\alpha_i$ of the columns of $D$, called the dictionary elements $\{d_1, \ldots, d_k \in \mathbb{R}^m\}$. Here $\alpha_i$ is the encoding (code vector, or simply code) of $x_i$ in the dictionary. Dictionary learning is also referred to as sparse coding, since the code vectors are assumed to be sparse, i.e., to have a relatively small number of nonzeros; the problem is formulated as minimizing the objective

$$f_n(D) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \lambda_c \|\alpha_i\|_1, \quad (1)$$

where the first term is the mean-squared-error loss incurred by approximating the input samples with their representations in the dictionary, and the second term is the $l_1$-regularization which enforces the codes to be sparse.
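To connect eq. (1) to code, the sketch below evaluates the objective and solves the inner sparse-coding (LASSO) subproblem with a generic iterative soft-thresholding (ISTA) loop. This is a standard solver of our own choosing, not the authors' implementation; the iteration count and step size are illustrative.

```python
# Minimal sketch: the sparse-coding subproblem of eq. (1), solved by ISTA.
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def sparse_code(x, D, lam, n_iter=200):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

def objective(X, D, codes, lam):
    """Empirical objective f_n(D) from eq. (1); X: (m, n), codes: (k, n)."""
    residual = X - D @ codes
    return (0.5 * (residual ** 2).sum(axis=0)
            + lam * np.abs(codes).sum(axis=0)).mean()
```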
The joint minimization of $f_n(D)$ with respect to the dictionary and the codes is non-convex; thus, a common approach is alternating minimization over convex subproblems: finding the optimal codes while the dictionary is fixed, and vice versa.

However, classical dictionary learning does not scale to very large datasets; moreover, it is not immediately applicable to online learning from a continuous stream of data. The online dictionary learning (ODL) method proposed by Mairal et al. (2009) overcomes both of these limitations, and serves as the basis for our proposed approach, presented in Alg. 1 in the next section. While the highlighted lines in Alg. 1 represent our extension of ODL, the non-highlighted ones are common to both approaches, and are discussed first. The algorithms start with some dictionary $D^{(0)}$, e.g., a randomly initialized one (other approaches include using some of the inputs as dictionary elements (Mairal et al., 2010; Bengio et al., 2009)). At each iteration $t$, both online approaches consider the next input sample $x_t$ (more generally, a batch of samples), as in step 3 of Alg. 1, and compute its sparse code $\alpha_t$ by solving the LASSO (Tibshirani, 1996) problem (step 4 in Alg. 1) with respect to the current dictionary. In Alg. 1, we simply write $D$ instead of $D^{(t)}$ to simplify the notation. Next, the standard ODL algorithm computes the dictionary update, $D^{(t)}$, by optimizing the surrogate objective $\hat{f}_t(D)$, which is defined just as the original objective in eq. (1), for $n = t$, but with one important difference: unlike the original objective, where each code $\alpha_i$ for sample $x_i$ is computed with respect to the same dictionary $D$, the surrogate function includes the codes $\alpha_1, \alpha_2, \ldots, \alpha_t$ computed at the previous iterations, using the dictionaries $D^{(0)}, \ldots, D^{(t-1)}$, respectively; in other words, it does not recompute the codes of previously seen samples after each dictionary update. This speeds up the learning without worsening the (asymptotic) performance, since the surrogate objective converges to the original one in (1) under certain assumptions, including data stationarity (Mairal et al., 2009). Note that, in order to prevent the dictionary entries from growing arbitrarily large, Mairal et al. (2009; 2010) impose a norm constraint, i.e., keep the columns of $D$ within the convex set $\mathcal{C} = \{D \in \mathbb{R}^{m \times k} \;\text{s.t.}\; \forall j,\; d_j^T d_j \le 1\}$. The dictionary update step then computes $D^{(t)} = \arg\min_{D \in \mathcal{C}} \hat{f}_t(D)$, ignoring the $l_1$-regularizer over the codes, which are fixed at this step, as

$$\arg\min_{D \in \mathcal{C}} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 = \arg\min_{D \in \mathcal{C}} \frac{1}{2}\,\mathrm{Tr}(D^T D A) - \mathrm{Tr}(D^T B), \quad (2)$$

where $A = \sum_{i=1}^{t} \alpha_i \alpha_i^T$ and $B = \sum_{i=1}^{t} x_i \alpha_i^T$ are the "bookkeeping" matrices (we also call them the "memories" of the model), compactly representing the input samples and the encoding history. At each iteration, once the new input sample $x_t$ is encoded, the matrices are updated as $A \leftarrow A + \alpha_t \alpha_t^T$ and $B \leftarrow B + x_t \alpha_t^T$ (see step 11 of Alg. 1). In (Mairal et al., 2009; 2010), block-coordinate descent is used to optimize the convex objective in eq. (2); it iterates over the dictionary elements in a fixed sequence, optimizing each while keeping the others fixed, as shown in eq. (3), until convergence
(essentially, steps 14 and 17 in Alg. 1; the only difference is that our approach transforms $u_j$ into $w_j$ in order to impose additional regularization before computing step 17):

$$u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}, \qquad d_j \leftarrow \frac{u_j}{\max(1, \|u_j\|_2)}. \quad (3)$$

Herein, when the off-diagonal entries $a_{jk}$ of $A$ are as large as the diagonal entries $a_{jj}$, the dictionary elements get "tied" to each other, playing complementary roles in the dictionary and thereby constraining each other's updates.

It is important to note that, for the experimental settings where we consider dictionary elements to be sparse in our algorithm NODL (discussed next in Sec. 3), we actually use as the baseline a modified version of the fixed-size ODL that allows for sparse dictionary elements, i.e., one which includes the sparsification step 15 of Alg. 1, and thus optimizes the following objective in the dictionary update step instead of the one in eq. (2):

$$\arg\min_{D \in \mathcal{C}} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \sum_j \lambda_j \|d_j\|_1. \quad (4)$$

From now on, ODL will refer to this extended version of the fixed-size method of Mairal et al. (2009) wherever we have sparsity in the dictionary elements (otherwise, the standard method of Mairal et al. (2009) is the baseline); in our experiments, the dictionary sparsity of the baseline and of the proposed method (discussed in the next section) is matched. Note that Mairal et al. (2010) mention that the convergence guarantees for ODL hold even with the sparsity constraints on the dictionary elements.

3 OUR APPROACH: NEUROGENETIC ONLINE DICTIONARY LEARNING (NODL)

Our objective is to extend the state-of-the-art online dictionary learning, designed for stationary input distributions, to a more adaptive framework capable of handling nonstationary data effectively, and of learning to represent new types of data without forgetting how to represent the old ones. Towards this end, we propose a novel algorithm, called Neurogenetic Online Dictionary Learning (see Alg. 1), which can flexibly extend and reduce a dictionary in response to changes in the input distribution, and possibly to the inherent representation complexity of the data. The main changes, as compared to the non-adaptive, fixed-dictionary-size algorithm of Mairal et al. (2009), are highlighted in Alg. 1; the two parts involve (1) neurogenesis, i.e., the addition of dictionary elements (hidden units, or "neurons"), and (2) the death of old and/or new elements which are "less useful" than the other elements for the task of data reconstruction.

At each iteration of Alg. 1, the next batch of samples is received and the corresponding codes in the dictionary are computed; next, we add $k_n$ new dictionary elements sampled at random from $\mathbb{R}^m$ (i.e., $k_n$ random linear projections of the input sample). The choice of the parameter $k_n$ is important; one approach is to tune it (e.g., by cross-validation), while another is to adjust it dynamically based on the dictionary's performance: e.g., if the environment is changing, the old dictionary may not be able to represent the new input well, leading to a decline in representation accuracy, which triggers neurogenesis. Herein, we use as the performance measure the Pearson correlation between a new sample and its representation in the current dictionary, $r(x_t, D^{(t-1)}\alpha_t)$, denoted $pc(x_t, D^{(t-1)}, \alpha_t)$ (for a batch of data, the average of $pc(\cdot)$ is taken). If it drops below a pre-specified threshold $\gamma$ (where $0 \le \gamma \le 1$), neurogenesis is triggered (step 5 in Alg. 1).
The number $k_n$ of new dictionary elements is proportional to the error $1 - pc(\cdot)$, so that worse performance triggers more neurogenesis, and vice versa; the maximum number of new elements is bounded by $c_k$ (step 6 in Alg. 1). We refer to this approach as conditional neurogenesis, as it involves the conditional birth of new elements. Next, $k_n$ random elements are generated and added to the current dictionary (step 7), and the memory matrices $A$ and $B$ are updated accordingly, to account for the larger dictionary (step 8). Finally, the sparse code is recomputed for $x_t$ (or for all the samples in the current batch) with respect to the extended dictionary (step 9).
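As a concrete illustration, the conditional-birth steps (5-9) might look as follows in NumPy. This is a minimal sketch under our own assumptions: the Gaussian initialization and unit normalization of the newborn elements, the rounding of $k_n$, and the helper names are ours, not the authors' released code.

```python
# Minimal sketch of conditional neurogenesis (steps 5-8 of Alg. 1).
import numpy as np

def maybe_add_elements(X, D, codes, A, B, gamma=0.9, c_k=50, rng=None):
    rng = rng or np.random.default_rng()
    R = D @ codes                                  # batch reconstructions
    pc = np.mean([np.corrcoef(X[:, i], R[:, i])[0, 1]
                  for i in range(X.shape[1])])     # mean Pearson correlation
    if pc >= gamma:                                # dictionary still fits
        return D, A, B
    k_n = int(np.ceil((1.0 - pc) * c_k))           # worse fit -> more births
    D_new = rng.standard_normal((D.shape[0], k_n))
    D_new /= np.linalg.norm(D_new, axis=0)         # unit-norm newborn elements
    D = np.hstack([D, D_new])
    # Grow the "memory" matrices with zero blocks for the newborn elements.
    k_n_pad = ((0, k_n), (0, k_n))
    A = np.pad(A, k_n_pad)                         # zero-padded, per step 8
    B = np.pad(B, ((0, 0), (0, k_n)))
    return D, A, B
```

After this step, the caller would re-run the sparse-coding routine (step 9) with the extended dictionary before updating the memories in step 11.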
The next step is the dictionary update, which, as in standard online dictionary learning, uses block-coordinate descent. However, the objective function includes additional regularization terms, as compared to (2):

$$D^{(t)} = \arg\min_{D \in \mathcal{C}} \frac{1}{t} \sum_{i=1}^{t} \frac{1}{2} \|x_i - D\alpha_i\|_2^2 + \lambda_g \sum_j \|d_j\|_2 + \sum_j \lambda_j \|d_j\|_1. \quad (5)$$

The first term is the standard reconstruction error, as before. The second term, $l_1/l_2$-regularization, promotes group sparsity over the dictionary entries, where each group corresponds to a column, i.e., a dictionary element. The group-sparsity regularizer (Yuan & Lin, 2006) causes some columns of $D$ (those less useful for accurate data representation) to be set to zero, thus effectively eliminating the corresponding dictionary elements from the dictionary ("killing" the corresponding hidden units). As mentioned previously, Bengio et al. (2009) used the $l_1/l_2$-regularizer in dictionary learning, though not in the online setting, and without neurogenesis. Finally, the third term imposes $l_1$-regularization on the dictionary elements, thus promoting a sparse dictionary in addition to the sparse codes. Introducing sparsity in the dictionary elements, corresponding to sparse connectivity of the hidden units in the neural-net representation of a dictionary, is motivated both by biological plausibility (neuronal connectivity tends to be rather sparse in multiple brain networks) and by the computational advantages this extra regularization can provide, as we observe later in the experiments section (Sec. 4).

As in the original algorithm of Mairal et al. (2009), the above objective is optimized by block-coordinate descent, where each block of variables corresponds to a dictionary element, i.e., a column of $D$; the loop in steps 12-19 of Alg. 1 iterates until convergence, defined by the magnitude of the change between two successive versions of the dictionary falling below some threshold. For each column update, the first and the last steps (steps 14 and 17) are the same as in the original method of Mairal et al. (2009), while the two intermediate steps (steps 15 and 16) implement the additional regularization; both are realized using the standard proximal operators, as described in Jenatton et al. (2011). Note that we actually use as input the desired numbers of non-zeros, and determine the corresponding sparsity parameters $\lambda_c$ and $\lambda_j$ via a binary search procedure (see the Appendix). Overall, the key feature of our algorithm is the interplay of both the (conditional) birth and the (group-sparsity) death of dictionary elements in an online setting.

Algorithm 1: Neurogenetic Online Dictionary Learning (NODL)
Require: data stream $x_1, x_2, \ldots, x_n \in \mathbb{R}^m$; initial dictionary $D \in \mathbb{R}^{m \times k}$; conditional neurogenesis threshold $\gamma$; max number of new elements added per data batch $c_k$; group-sparsity regularization parameter $\lambda_g$; number of non-zeros in a dictionary element $\beta_d$; number of non-zeros in a code $\beta_c$.
1: Initialize: $A \leftarrow 0$, $B \leftarrow 0$  % reset the "memory" (a single sample per batch is assumed, for simpler exposition)
2: for $t = 1$ to $n$ do
3:   Input $x_t$  % representing the $t$-th batch of data
     % Sparse coding of the data:
4:   $\alpha_t = \arg\min_{\alpha \in \mathbb{R}^k} \frac{1}{2}\|x_t - D\alpha\|_2^2 + \lambda_c\|\alpha\|_1$  % $\lambda_c$ tuned to give $\beta_c$ non-zeros in $\alpha_t$
     % Conditional neurogenesis: if accuracy is below threshold, add more elements (no more than the number of samples in a batch):
5:   if $pc(x_t, D, \alpha_t) \le \gamma$ then
6:     $k_n = (1 - pc(x_t, D, \alpha_t))\, c_k$  % the number of neuronal births
7:     $D_n \leftarrow \text{initializeRand}(k_n)$, $D \leftarrow [D \;\; D_n]$
8:     $A \leftarrow \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}$, $B \leftarrow [B \;\; 0]$, $k \leftarrow k + k_n$
       % Repeat the sparse coding, now including the new dictionary elements:
9:     $\alpha_t = \arg\min_{\alpha \in \mathbb{R}^k} \frac{1}{2}\|x_t - D\alpha\|_2^2 + \lambda_c\|\alpha\|_1$
10:  end if  % end of neurogenesis
     % "Memory" update:
11:  $A \leftarrow A + \alpha_t \alpha_t^T$, $B \leftarrow B + x_t \alpha_t^T$
     % Dictionary update by block-coordinate descent with $l_1/l_2$ group sparsity:
12:  repeat
13:    for $j = 1$ to $k$ do
14:      $u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}$
         % Sparsifying the elements (optional):
15:      $v_j \leftarrow \mathrm{Prox}_{\lambda_j \|\cdot\|_1}(u_j) = \mathrm{sgn}(u_j)(|u_j| - \lambda_j)_+$  % $\lambda_j$ tuned to give $\beta_d$ non-zeros in $v_j$
         % Killing useless elements with $l_1/l_2$ group sparsity:
16:      $w_j \leftarrow v_j \left(1 - \frac{\lambda_g}{\|v_j\|_2}\right)_+$
17:      $d_j \leftarrow \frac{w_j}{\max(1, \|w_j\|_2)}$
18:    end for
19:  until convergence
20: end for
21: return $D$

3.1 DISCUSSION OF IMPORTANT ALGORITHMIC DETAILS

A rationale behind the sparsity of dictionary elements. We focus here on sparse dictionary elements, which, in network terms, correspond to sparse connectivity between the hidden units and their inputs; one reason for this choice is that sparse connectivity appears to be a more biologically plausible assumption than the fully-connected architecture implied by a dense dictionary, in many brain areas, and specifically between the dentate gyrus and CA3. The other reason relates to computational advantages.

Note that Mairal et al. (2009) state that the convergence guarantees for the original ODL algorithm would also hold in the case of sparse dictionary elements. However, no empirical evaluation is provided for this case; furthermore, we are not aware of any previous work on sparse coding that includes an extensive empirical evaluation of such a setting. The prior focus on dense rather than sparse dictionary elements is perhaps more natural when the input consists of a large number of relatively small image patches, so that each element also represents a small patch. In our work, however, the dictionary is learned on full images, and thus a nonzero pattern in a sparse dictionary element corresponds to a small patch within a larger image, with multiple sparse elements (patches) covering the image. Thus, rather than explicitly representing an image as a set of patches and then learning a dictionary of dense elements to represent such patches accurately, a dictionary of full-image-size but sparse dictionary elements can be used to implicitly represent an image as a linear combination of those elements, with possible overlap of the non-zero pixels between elements; the non-zero pixels of a sparse dictionary element are learned automatically. The computational advantages of using sparse dictionaries are demonstrated in our experimental results (Sec. 4), where classifiers learned on top of representations extracted with sparse dictionaries yield smaller errors.
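For concreteness, one pass of the column update (steps 14-17 of Alg. 1 above), with the two proximal steps written out, can be sketched as follows. Variable names mirror the algorithm; the small `eps` guard against a zero $a_{jj}$ is our addition, and this reflects our reading of the steps rather than the authors' code.

```python
# Minimal sketch of one block-coordinate column update (steps 14-17, Alg. 1).
import numpy as np

def update_column(j, D, A, B, lam_j, lam_g, eps=1e-12):
    # Step 14: standard ODL block-coordinate update of column j.
    u = (B[:, j] - D @ A[:, j] + D[:, j] * A[j, j]) / (A[j, j] + eps)
    # Step 15: l1 proximal operator (soft-thresholding) sparsifies the element.
    v = np.sign(u) * np.maximum(np.abs(u) - lam_j, 0.0)
    # Step 16: l1/l2 group-sparsity proximal operator; elements with
    # ||v||_2 <= lam_g shrink to zero ("neuronal death").
    norm_v = np.linalg.norm(v)
    w = v * max(0.0, 1.0 - lam_g / norm_v) if norm_v > 0 else v
    # Step 17: project back onto the unit l2 ball.
    D[:, j] = w / max(1.0, np.linalg.norm(w))
    return D
```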
The memory matrix $A$ and its properties. The matrix $A$ keeps a "memory" of the encodings $\alpha_t$ of the previous data samples, in the sense that it accumulates the sum of the matrices $\alpha_t \alpha_t^T$ over the iterations $t$. It turns out that $A$ can have a significant effect on dictionary learning in both the ODL and the NODL algorithms. As pointed out in (Mairal et al., 2009), the quadratic surrogate function in (2) is strictly convex with a lower-bounded Hessian $A$, ensuring convergence to a solution. From a practical standpoint, however, when the matrix $A$ has a high condition number (the ratio of the largest to the smallest singular value in its singular value decomposition), despite its lower-bounded eigenvalues, the adaptation of the dictionary elements using the standard ODL algorithm can be difficult, as we see in our experiments. Specifically, when the dictionary elements are sparse, this effect is more pronounced, since the condition number of $A$ becomes high due to the complementary roles of the sparse dictionary elements in the reconstruction process (compare $A$ obtained with dense elements and with sparse elements in Fig. 6(a) and Fig. 6(b), respectively). In such scenarios, the submatrix of $A$ corresponding to the new elements added to the dictionary by our NODL algorithm can have a better condition number, leading to an improved adaptation of the dictionary.

Code sparsity. Code sparsity is controlled by the parameter $\beta_c$, the desired number of nonzeros, which determines the corresponding regularization weight $\lambda_c$ in step 4 of Alg. 1; note that $\lambda_c$ is determined via binary search for each input sample separately, as shown in Algorithm 2 (in the Appendix), and may thus vary slightly across instances for a fixed $\beta_c$.

Selecting an appropriate level of code sparsity depends on the choice of the other parameters, such as the input batch size, the sparsity of the dictionary elements, the extent of non-stationarity, the complexity of the data, and so on. When the dictionary elements are themselves sparse, denser codes may be more appropriate, since each sparse dictionary element represents only a relatively small subset of the image pixels, and thus a large number of those subsets, covering the whole image, may be needed for an accurate representation of the input.

Interestingly, using very sparse codes in combination with non-sparse dictionary elements in the standard ODL approach can sometimes lead to the creation of "dead" (zero $l_2$-norm) elements in the dictionary, especially if the input batch size is small. This is avoided by our NODL algorithm, since such dead elements are implicitly removed via group sparsity at the dictionary update step, along with the "weak" (very small $l_2$-norm) elements. Also, very high code sparsity in combination with dense dictionary elements can lead to a significant decrease in the reconstruction accuracy, for both ODL and our NODL, when the online data stream is non-stationary. Such shortcomings were not encountered in (Mairal et al., 2009; 2010), where only stationary data streams were studied, in both the theoretical and the empirical results. On the other hand, high sparsity in the dictionary elements does not seem to cause a degradation in the reconstruction accuracy, as long as the codes are not too sparse.
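The binary search mapping a desired number of non-zeros onto a regularization weight can be sketched as follows; the bounds, step count, and stopping rule are our guesses, since the paper defers the details to Algorithm 2 in the Appendix.

```python
# Minimal sketch: find lambda_c so that the code of x has ~beta_c non-zeros.
import numpy as np

def lambda_for_sparsity(x, D, beta_c, solver, lo=0.0, hi=None, n_steps=30):
    """Binary-search the LASSO weight to hit a target non-zero count."""
    if hi is None:
        hi = np.abs(D.T @ x).max()   # above this weight, the code is all zeros
    lam = hi
    for _ in range(n_steps):
        lam = 0.5 * (lo + hi)
        nnz = np.count_nonzero(solver(x, D, lam))
        if nnz > beta_c:             # code too dense -> increase the penalty
            lo = lam
        else:                        # code too sparse -> decrease the penalty
            hi = lam
    return lam

# `solver` can be any LASSO routine, e.g. the ISTA sketch shown earlier:
# lam = lambda_for_sparsity(x, D, beta_c=50, solver=sparse_code)
```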
The choice and tuning of the metric for the conditional neuronal birth. In the "conditional birth" approach described above, the number of new elements $k_n$ is determined based on the performance of the current dictionary, using the Pearson correlation between the actual and the reconstructed data for the current batch. This is, of course, just one particular way of measuring data nonstationarity and the need for adaptation, but we consider it a reasonable heuristic. A low reconstruction error indicates that the old dictionary is still capable of representing the new data, so that less adaptation might be needed, while a high error indicates that the data distribution might have changed, and triggers neurogenesis in order to better adapt to the new environment. We choose the Pearson correlation as the measure of reconstruction accuracy since its value is easily interpretable and always lies in the range $[0, 1]$ (unlike, for example, the mean-squared error), which simplifies the tuning of the threshold parameter $\gamma$. Clearly, one could also try other interpretable metrics, such as, for example, the Spearman correlation.

Tuning the parameters: group sparsity $\lambda_g$ and others. The group-sparsity regularization parameter $\lambda_g$ controls the amount of removal ("death") of elements in NODL: in step 16 of Alg. 1, all elements with $l_2$-norm below $\lambda_g$ (i.e., "weak" elements) are set to zero ("killed"). Since the dictionary elements are normalized to have $l_2$-norm at most one, we only need to consider $\lambda_g \in [0, 1]$. (Note that the step of killing dictionary elements precedes the normalization step in the algorithm; thus, the tuning of $\lambda_g$ is affected by the normalization of the elements from the previous iteration.) Note also that increasing the sparsity of the dictionary elements, i.e., decreasing $\beta_d$ (the number of nonzeros in a dictionary element), may require a corresponding reduction of $\lambda_g$, while an increase in the input dimensionality $m$ may require an increase in $\lambda_g$. Tuning the rest of the parameters is relatively easy. Clearly, the batch size should be kept relatively small and, ideally, should not exceed the size of the "window of stationarity" in the data (however, the frequency of the input distribution changes may then also need to be estimated from the data, so that the batch size is tuned adaptively, which is outside the scope of this paper). Mairal et al. (2009) suggest using a batch size of 256 in their experiments, obtaining similar performance with the values 128 and 512. As to the maximum number of new elements $c_k$ added at each iteration, it is reasonable to keep it smaller than the batch size.

4 EXPERIMENTS

We now evaluate the proposed approach, NODL, empirically against ODL, the standard (non-adaptive) online dictionary learning of Mairal et al. (2009). Moreover, in order to evaluate separately the effects of only adding, or only deleting, dictionary elements, we also evaluate two restricted versions of our method: NODL+, which involves only addition but no deletion (equivalent to NODL with no group sparsity, i.e., $\lambda_g = 0$), and NODL-, which, vice versa, involves deletion only but no addition (equivalent to NODL with the number of new elements $c_k = 0$).
The above algorithms are evaluated in a non-stationary setting, where a sequence of training samples from one environment (the first domain) is followed by another sequence from a different environment (the second domain), in order to test their ability to adapt to new environments without "forgetting" the previous ones.

4.1 REAL-LIFE IMAGES

Our first domain includes the images of Oxford buildings (urban environment; http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/index.html), while the second uses a combination of images from the Flowers (http://www.robots.ox.ac.uk/~vgg/data/flowers/102/) and Animals (http://www.robots.ox.ac.uk/~vgg/data/pets/) image databases (natural environment); examples of both types of images are shown in Fig. 1(a) and 1(b). We converted the original color images into black-and-white format and compressed them to smaller sizes, 32x32 and 100x100. Note that, unlike (Mairal et al., 2009), we used full images rather than image patches as our inputs.

[Figure 1: The image data sets for the evaluation of the online dictionary learning algorithms. Panels: (a) urban: Oxford Buildings, (b) nature: Flowers and Animals.]

We selected 5700 images for training and another 5700 for testing; each subset contained 1900 images of each type (i.e., Oxford, Flowers, Animals). In the training phase, as mentioned above,
Finally, we explored a wide range of sparsity parametersfor both the codes and the dictionary elements.Our key observations are that: (1) the proposed method frequently often outperforms (or is at leastas good as) its competitors, on both the new data (adaptation) and the old ones (memory); (2) it ismost beneficial when dictionary elements are sparse; (3) vice versa, when dictionary elements aredense, neurogenetic approach matches the baseline, fixed-size dictionary learning. We now discussthe results in detail.Sparse Dictionary ElementsIn Fig. 2, we present the results for sparse dictionaries, where each column (an element in thedictionary) has 5 nonzeros out of the 1024 dimensions; the codes are relatively dense, with at most200 nonzeros out of k(the number of dictionary elements), and kranging from 5 to 1000 (i.e. thecodes are not sparse for k200). Due to space limitations, we put in the Appendix (Sec. B.2)our results on a wider range of values for the dictionary and code sparsity (Fig. 12). In Fig. 2(a),we compare the dictionary size for different methods: the final dictionary size after completing thetraining phase (y-axis) is plotted against the initial dictionary size (x-axis). Obviously, the baseline(fixed-size) ODL method (magenta plot) keeps the size constant, deletion-only NODL- approachreduces the initial size (red plot), and addition-only NODL+ increases the size (light-blue plot).9Under review as a conference paper at ICLR 2017However, the interplay between the addition and deletion in our NODL method (dark-blue) producesa more interesting behavior: it tends to adjust the representation complexity towards certain balancedrange, i.e. very small initial dictionaries are expanded, while very large ones are, vice versa, reduced.Our main results demonstrating the advantages of the proposed NODL method are shown next inFig. 2(b) and Fig. 2(c), for the “old” (Oxford) and “new” (Flowers) environment (domain), respec-tively. (Very similar result are shown for Animals as well, in the Appendix). The x-axis shows thefinal dictionary size, and the y-axis is the reconstruction accuracy achieved by the trained dictionaryon the test samples, measured by Pearson correlation between the actual and reconstructed data.NODL clearly outperforms the fixed-size ODL, especially on smaller dictionary sizes; remarkably,this happens on both domains, i.e. besides improved adaptation to the new data, NODL is also betterat preserving the “memories” of the old data, without increasing the representation complexity, i.e.for the same dictionary size .Interestingly, just deletion would not suffice, as deletion-only version, NODL-, is inferior to ourNODL method. On the other hand, addition-only, or NODL+, method is as accurate as NODL, buttends to increase the dictionary size too much. The interplay between the addition and deletion pro-cesses in our NODL seems to achieve the best of the two worlds, achieving superior performancewhile keeping the dictionary size under control, in a narrower range (400 to 650 elements), expand-ing, as necessary, small dictionaries, while compressing large ones7.We will now focus on comparing the two main methods, the baseline ODL and the proposed NODLmethod. The advantages of our approach become even more pronounced on larger input sizes, e.g.100x100 images, in similar sparse-dictionary, dense-code settings. (We keep the dictionary elementsat the same sparsity rate, 50 nonzeros out of 10,000 dimensions, and just use completely non-sparsecodes). In Fig. 3(a) and Fig. 
In Fig. 3(a) and Fig. 3(b), we see that NODL considerably outperforms ODL on both the first domain (Oxford) and the (part of the) second domain shown (Flowers); the results for Animals are very similar and are given in the Appendix, Fig. 10. In Appendix Sec. B.6, Fig. 17 depicts examples of actual animal images and the corresponding reconstructions by the fixed-size ODL and by our NODL method (not included here due to space restrictions). A better reconstruction quality of our method can be observed (e.g., a more visible dog shape, and more details such as the dog's legs, as opposed to the collection of clusters produced by the ODL method; note, however, that printer resolution may reduce the visible difference, and viewing the images in the online version of this paper is recommended).

Moreover, NODL can also be beneficial in classification settings. Given a dictionary, i.e., a sparse linear autoencoder trained in an unsupervised setting, we use the codes (i.e., feature vectors) computed on the test data from the second domain (Animals and Flowers) and evaluate multiple classifiers learned on those features to discriminate between the two classes. In Fig. 3(c), we show the logistic regression results using 10-fold cross-validation; similar results for several other classifiers are presented in the Appendix, Fig. 10. Note that we also perform filter-based feature-subset selection, using each feature's statistical significance, as measured by its p-value, as the ranking function, and selecting subsets of the top k features, increasing k from 1 to the total number of features (the code length, i.e., the number of dictionary elements). The x-axis in Fig. 3(c) shows the value of k, while the y-axis plots the classification error rate for the features derived by each method. We can see that our NODL method (blue) yields lower errors than the baseline ODL (magenta) for relatively small feature subsets, although the difference is negligible for the full feature set. Overall, this suggests that our NODL approach achieves a better reconstruction of the input data without extra overfitting in the classification setting, since it generalizes at least as well as, and often better than, the baseline ODL method.

Non-sparse dictionary elements

When exploring a wide range of sparsity settings (see the Appendix), we observed quite different results for non-sparse dictionaries. Fig. 8(b) (in the Appendix, due to space constraints) summarizes the results for a particular setting of fully dense dictionaries (no zero entries) but sparse codes (50 non-zeros out of up to 600 dictionary elements; the codes are still dense when the dictionary size is below 50). In this setting, unlike the previous one, we do not observe any significant improvement in accuracy due to the neurogenetic approach, in either reconstruction or classification; both methods perform practically the same. (Note also a somewhat surprising phenomenon: after a certain point, i.e.,
about 50 elements, the reconstruction accuracy of both methods actually declines rather than improves with increasing dictionary size.) It is interesting to note, however, that the overall classification errors of both methods are much higher in this setting (from 0.4 to 0.52) than in the sparse-dictionary setting (from 0.22 to 0.36). Even using non-sparse codes in the non-sparse-dictionary setting still yields inferior results compared to sparse dictionaries (see the results in the Appendix).

In summary, on the real-life image datasets considered herein, our NODL approach is often superior (and never inferior) to the standard ODL method; also, there is consistent evidence that our approach is most beneficial in sparse-dictionary settings.

4.2 SPARSE ORTHOGONAL INPUTS: NLP AND SYNTHETIC DATA

So far, we have explored some conditions on method properties (e.g., sparse versus dense dictionaries, as well as code sparsity/density) which can be beneficial for the neurogenetic approach. Our further question is: what kind of specific data properties would best justify neurogenetic versus traditional, fixed-size dictionary learning? As it turns out, the fixed-size ODL approach has difficulties adapting to a new domain in nonstationary settings when the data in both domains are sparse and, across the domains, the supports (i.e., the sets of non-zero coordinates) are almost non-overlapping (i.e., the datasets are nearly orthogonal). This type of data property arises in the natural language processing problem considered below. Furthermore, pushing this type of structure to the extreme, we used simulations to better understand the behavior of our method. Herein, we focused, again, on sparse dictionary elements, as a well-suited basis for representing sparse data. Moreover, our empirical results confirm that using dense dictionary elements does not yield good reconstruction of sparse data, as expected.

Sparse Natural Language Processing Problem

We consider a very sparse word co-occurrence matrix (on average, about 14 non-zeros in a column of size 12,883), built from text in two different domains, biology and mathematics, with a total vocabulary size of 12,883 words. The full matrix was split in two for illustration purposes and is shown in Fig. 4(c) and 4(d), where the math terms correspond to the first block of columns and the biology terms to the second one (though it might be somewhat hard to see in the picture, the average number of nonzeros per row/column is indeed about 14). We use the sparse columns (or rows) of the matrix, indexed by the vocabulary words, as our input data to learn a dictionary of sparse elements (25 non-zeros) with sparse codes (38 non-zeros). The corresponding word codes in the learned dictionary can later be used as word embeddings, or word vectors, in various NLP tasks such as information extraction, semantic parsing, and others (Yogatama et al., 2015; Faruqui et al., 2015; Sun et al., 2016). (Note that many non-domain-specific words were removed from the vocabulary to obtain the final size of 12,883.) Herein, we evaluate our NODL method (denoted NODL (sparse) in the plots) versus the baseline ODL approach (denoted ODL (sparse)) in a setting where the biology domain is processed first and one then has to switch to the mathematics domain. We use 2750 samples from each domain for training and the same number for testing. The evaluation results are shown in Fig. 4.
For the first domain (biology), both methods perform very similarly (i.e., they remember the old data equally well), while for the second, more recent domain, our NODL algorithm clearly outperforms its competitor. Moreover, as we mention above, non-sparse (dense) dictionaries are not suited for modeling highly sparse data such as our NLP data: in Fig. 4, both random dense dictionaries (random-D) and the dense dictionaries learned with ODL (i.e., ODL (dense)) do poorly in the biology and mathematics domains.
However, the reconstruction accuracy as measured by Pearson correlation was not very high overall, i.e., the problem turned out to be more challenging than encoding image data. It gave us an intuition about the structure of sparse data that may be contributing to the improvements due to neurogenesis. Note that a word co-occurrence matrix built from different domains such as biology and mathematics tends to have an approximately block-diagonal structure, where words from the same domain occur together more frequently than they co-occur with words from the other domain. Pushing this type of structure to the extreme, we next studied a simulated sparse dataset where the samples from the two different domains are not only sparse, but have completely non-overlapping supports, i.e., the data matrix is block-diagonal (see Fig. 7(c) in the Appendix).
[Figure 4: Reconstruction accuracy for the sparse NLP data. Panels: (a) first domain (Biology); (b) second domain (Mathematics); (c) co-occurrence matrix, Biology; (d) co-occurrence matrix, Math.]
[Figure 5: Reconstruction accuracy for the sparse synthetic data. Panels: (a) Pearson, first domain; (b) Pearson, second domain; (c) D with ODL; (d) D with NODL (ours).]
Synthetic Sparse Data
We generated a synthetic sparse dataset with 1024 dimensions and only 50 non-zeros in each sample. Moreover, we ensured that the data in the two domains had non-overlapping supports (i.e., non-intersecting sets of non-zero coordinates) by always selecting the non-zeros in the first domain from the first 512 dimensions, while only using the last 512 dimensions for the second domain (see Fig. 7(c) in the Appendix; a generation sketch is shown below). For the evaluation on the synthetic data, we use a total of 200 samples each for training and testing (100 samples for each of the two domains), and smaller batches for online training, containing 20 samples each (instead of the 200 samples used earlier for the image and language data).
Since the data is sparse, we adjust the sparsity of the dictionary elements accordingly (50 non-zeros per element; for the code sparsity, we present results with 50 non-zeros as well). In Fig. 5, we show the reconstruction accuracy for the first- and second-domain data. For the first domain, the baseline ODL method (i.e., ODL (sparse) in the plots) and our NODL (i.e., NODL (sparse)) perform equally well. For the second domain, on the other hand, the ODL algorithm's performance degrades significantly compared to the first domain. This is because the data from the second domain have non-overlapping support w.r.t. the data from the first domain. Our method is able to perform very well on the second domain (almost as well as on the first). It is further interesting to analyze the case of the random non-sparse dictionary (random-D), which even performs better than the baseline ODL method on the second domain. This is because random dictionary elements remain non-sparse in all dimensions, thereby doing an average job in both domains. Along the same lines, ODL (dense) performs better than ODL (sparse) in the second domain. However, the performance of non-sparse dictionaries should degrade significantly with an increase in the sparsity of the data, as we saw above for the NLP data. Clearly, our NODL (sparse) gives consistently better reconstruction accuracy than the other methods across the two domains.
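For concreteness, here is a minimal numpy sketch of the data-generation procedure described above (our own illustration; the function and variable names are hypothetical):

```python
import numpy as np

def make_domain(n_samples, dim=1024, nnz=50, support=None, seed=0):
    """Sparse samples whose non-zeros fall only inside `support`."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_samples, dim))
    for i in range(n_samples):
        idx = rng.choice(support, size=nnz, replace=False)
        X[i, idx] = rng.standard_normal(nnz)
    return X

# First domain uses dims 0..511, second uses 512..1023 (disjoint supports),
# so the stacked data matrix is block-diagonal, as in Fig. 7(c).
X1 = make_domain(100, support=np.arange(0, 512), seed=1)
X2 = make_domain(100, support=np.arange(512, 1024), seed=2)
assert not (np.any(X1[:, 512:]) or np.any(X2[:, :512]))
```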
In Fig. 5(c) and Fig. 5(d), we show the sparsity structure of the dictionary elements learned using the baseline ODL method and our NODL method, respectively. These plots give better insight into why the baseline method does not work: it keeps the same sparsity structure as it used for the data from the first domain. Our NODL adapts to the second-domain data because of its ability to add new dictionary elements, which are randomly initialized with non-zero support in all dimensions.
Next, in Sec. 5, we discuss our intuitions on why NODL performs better than the ODL algorithm under certain conditions.
5 WHEN NEUROGENESIS CAN HELP, AND WHY
In Sec. 4, we observed that our NODL method outperforms the ODL algorithm in two general settings, both involving sparse dictionary elements: (i) non-sparse data such as real-life images, and (ii) sparse data with (almost) non-overlapping supports. In this section, we attempt to analyze what contributes to the success of our approach in these settings, starting with the last one.
Sparse data with non-overlapping supports, sparse dictionary
As discussed above, in this scenario the data from both the first and the second domain are sparse, and their supports (non-zero dimensions) are non-overlapping, as shown in Fig. 7(c). Note that, when training a dictionary using the fixed-size, sparse-dictionary ODL method, we observe only a minor adaptation to the second domain after training on the first domain, as shown in Fig. 5(c). Our empirical observations are supported by the theoretical result summarized in Lemma 1 below. Namely, we prove that when using the ODL algorithm in the above scenario, a dictionary trained on the first domain cannot adapt to the second domain. (The minor adaptation, i.e., the few non-zeros observed in our results in Fig. 5(c), occurs only due to implementation details involving the normalization of sparse dictionary elements when computing the codes; the normalization introduces non-zeros of small magnitude in all dimensions. See the Appendix for experimental results without normalization of the elements, which conform to Lemma 1.)
Lemma 1. Let $x_1, x_2, \ldots, x_{t-1} \in \mathbb{R}^m$ be a set of samples from the first domain, with non-zeros (support) in the set of dimensions $P \subseteq M = \{1, \ldots, m\}$, and let $x_t, x_{t+1}, \ldots, x_n \in \mathbb{R}^m$ be a set of samples from the second domain, with non-zeros (support) in dimensions $Q \subseteq M$, such that $P \cap Q = \emptyset$ and $|P| = |Q| = l$. Let us denote by $d_1, d_2, \ldots, d_k \in \mathbb{R}^m$ the dictionary elements learned by the ODL algorithm, with the sparsity constraint of at most $l$ non-zeros in each element (here $l$ corresponds to $d$ in Alg. 1), on the data from the first domain, $x_1, \ldots, x_{t-1}$. Then (1) those elements have non-zero support in $P$ only, and (2) after learning from the second-domain data, the support (non-zero dimensions) of the correspondingly updated dictionary elements will remain in $P$.
Proof Sketch. Let us consider processing the data from the first domain. At the first iteration, a sample $x_1$ is received, its code $\alpha_1$ is computed, and the matrices $A$ and $B$ are updated, as shown in
Alg. 1 (non-highlighted part); next, the dictionary update step is performed, which optimizes
$$D^{(1)} = \arg\min_{D \in \mathcal{C}} \; \frac{1}{2}\,\mathrm{Tr}(D^T D A) - \mathrm{Tr}(D^T B) + \sum_j \lambda_j \|d_j\|_1. \quad (6)$$
Since the support of $x_1$ is limited to $P$, we can show that the optimal dictionary $D$ must also have all columns/elements with support in $P$. Indeed, assuming the contrary, let $d_j(i) \neq 0$ for some dictionary element/column $j$, where $i \notin P$. But then it is easy to see that setting $d_j(i)$ to zero reduces both the sum-squared error and the $\ell_1$-norm in (6), yielding another dictionary that achieves a lower overall objective; this contradicts our assumption that $D$ was optimal. Thus, the dictionary update step must produce a dictionary whose columns all have their support in $P$. By induction, this statement remains true for the dictionary obtained after processing all samples from the first domain. Next, the samples from the second domain start arriving; note that those samples belong to a different subspace, spanning the dimensions within the support set $Q$, which does not intersect with $P$. Thus, using the current dictionary, the encoding $\alpha_t$ of the first sample $x_t$ from the second domain (i.e., the solution of the LASSO problem in step 4 of Alg. 1) will be a zero vector. Therefore, the matrices $A$ and $B$ remain unchanged during the update in step 11, and thus the support of each $b_j$, and, consequently, of $u_j$ and the updated dictionary elements $d_j$, will remain in $P$. By induction, every dictionary update in response to a new sample from the second domain will preserve the support of the dictionary elements, and thus the final dictionary elements will also have their support only in $P$.
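The key step of this argument, that second-domain samples receive exactly zero codes, is easy to check numerically. Below is a minimal sketch (our own illustration, not code from the paper), assuming numpy and scikit-learn: because the dictionary columns are supported inside P and the sample inside the disjoint set Q, we have D^T x = 0, so the LASSO subgradient condition is satisfied at zero and the code is the zero vector, meaning the running statistics A and B of Alg. 1 would never change.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, k, l = 1024, 20, 50
P, Q = np.arange(512), np.arange(512, 1024)   # disjoint supports

# Dictionary with all element supports inside P (column-normalized).
D = np.zeros((m, k))
for j in range(k):
    D[rng.choice(P, size=l, replace=False), j] = rng.standard_normal(l)
D /= np.linalg.norm(D, axis=0)

# A second-domain sample, supported inside Q.
x = np.zeros(m)
x[rng.choice(Q, size=l, replace=False)] = rng.standard_normal(l)

# LASSO code of x in D: since D.T @ x == 0, the solution is exactly zero.
alpha = Lasso(alpha=0.1, fit_intercept=False).fit(D, x).coef_
print(np.abs(alpha).max())   # prints 0.0
```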
Non-sparse data, sparse dictionary
We will now discuss an intuitive explanation behind the success of the neurogenetic approach in this scenario, leaving a formal theoretical analysis as a direction for future work. When learning sparse dictionaries on non-sparse data such as natural images, we observed that many dictionary elements have non-overlapping supports with respect to each other; see, for example, Fig. 6(d), where each column corresponds to a 10000-dimensional dictionary element with its non-zero dimensions shown in black. Apparently, the non-zero dimensions of an element tend to cluster spatially, i.e., to form a patch in an image. The non-overlapping supports of the dictionary elements result in a specific structure of the matrix $A$. As shown in Fig. 6(b), for the ODL approach, the resulting matrix $A$ includes many off-diagonal non-zero elements of large absolute value (along with high values on the diagonal). Note that, by definition, $A$ is an empirical covariance of the code vectors, and it is easy to see that a non-zero value of $a_{jk}$ implies that the $j$-th and the $k$-th dictionary elements were used jointly to explain the same data sample(s). Thus, the dense matrix structure with many non-zero off-diagonal elements, shown in Fig. 6(b), implies that, when the dictionary elements are sparse, they will often be used jointly to reconstruct the data. In the case of non-sparse dictionary elements, on the other hand, the matrix $A$ has an almost diagonally-dominant structure, i.e., only a few dictionary elements are used effectively in the reconstruction of each data sample, even with non-sparse codes (see the Appendix for details).
[Figure 6: Visualization of the sparse dictionary and the matrix A learned on the first imaging domain (Oxford images), using the baseline ODL method and our method. Panels: (a) A with the ODL method (dense elements); (b) A with the ODL method (sparse elements); (c) A with our method (sparse elements); (d) D with the ODL method (sparse elements).]
Note that in the dictionary update expression $u_j \leftarrow \frac{b_j - \sum_{k \neq j} d_k a_{jk}}{a_{jj}}$ in (3), when the values $a_{jk}/a_{jj}$ are large for multiple $k$, the $j$-th dictionary element becomes tightly coupled with other dictionary elements, which reduces its adaptability to new, non-stationary data. In our algorithm, the values $a_{jk}/a_{jj}$ remain high if both elements $j$ and $k$ have a similar "age"; however, those values are much lower if one of the elements was introduced by neurogenesis much more recently than the other one. In Fig. 6(c), the upper-left block on the diagonal, representing the oldest elements (added during initialization), is not diagonally dominant (see the sub-matrices of $A$ with NODL in Fig. 14 in the Appendix). The lower-right block, corresponding to the most recently added new elements, may also have a similar structure (though it is not visible due to the relatively low magnitudes of the new elements; see the Appendix). Overall, our interpretation is that the old elements are tied to each other, whereas the new elements may also be tied to each other, but less strongly, and are not tied to the old elements, yielding a block-diagonal structure of $A$ in the case of the neurogenetic approach, where the blocks correspond to dictionary elements adapted to particular domains. In other words, neurogenesis allows for adaptation to a new domain without forgetting the old one.
6 CONCLUSIONS
In this work, we proposed a novel algorithm, Neurogenetic Online Dictionary Learning (NODL), for the problem of learning representations in non-stationary environments. Our algorithm builds a dictionary of elements by learning from an online stream of data while also adapting the dictionary structure (the number of elements/hidden units and their connectivity) via the continuous birth (addition) and death (deletion) of dictionary elements, inspired by the adult neurogenesis process in the hippocampus, which is known to be associated with better adaptation of the adult brain to changing environments. Moreover, introducing sparsity in the dictionary elements allows for adaptation of the hidden-unit connectivity and further performance improvements.
Our extensive empirical evaluation on both real-world and synthetic data demonstrated that the interplay between the birth and death of dictionary elements allows for more adaptive dictionary learning, better suited to non-stationary environments than both of its counterparts: the fixed-size online method of Mairal et al. (2009) (no addition and no deletion) and the online version of the group-sparse coding method by Bengio et al. (2009) (deletion only). Furthermore, we evaluated, both empirically and theoretically, several specific conditions on both the method's and the data's properties (involving the sparsity of elements, codes, and data) under which our method has a significant advantage over the standard, fixed-size online dictionary learning. Overall, we can conclude that neurogenetic dictionary learning typically performs as well as, and often much better than, its competitors. In future work, we plan to explore a non-linear extension of the dictionary model, as well as a stacked autoencoder consisting of multiple layers.
Syk3UmQEe
review
5: Marginally below acceptance threshold
I'd like to thank the authors for their detailed response and clarifications. This work proposes new training scheme for online sparse dictionary learning. The model assumes a non-stationary flow of the incoming data. The goal (and the challenge) is to learn a model in an online manner in a way that is capable of adjusting to the new incoming data without forgetting how to represent previously seen data. The proposed approach deals with this problem by incorporating a mechanism for adding or deleting atoms in the dictionary. This procedure is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus. The paper has two main innovations over the baseline approach (Mairal et al): (i) “neuronal birth” which represents an adaptive way of increasing the number of atoms in the dictionary (ii) "neuronal death", which corresponds to removing “useless” dictionary atoms. Neural death is implemented by including an group-sparsity regularization to the dictionary atoms themselves (the group corresponds to a column of the dictionary). This promotes to shrink to zero atoms that are not very useful, keeping controlled the increase of the dictionary size. I believe that the strong side of the paper is its connections with the adult neurogenesis phenomenon, which is, in my opinion a very nice feature. The paper is very well written and easy to follow. On the other hand, the overall technique is not very novel. Although not exactly equivalent, similar ideas have been explored. While the neural death is implemente elegantly with a sparsity-promoting regularization term, the neural birth is performed by relying on heuristics that measure how well the dictionary can represent new incoming data. Which depending on the "level" of non-stationarity in the incoming data (or presence of outliers) could be difficult to set. Still, having adaptive dictionary size is very interesting. The authors could also cite some references in model selection literature. In particular, some ideas such as MDL have been used for automatically selecting the dictionary size (I believe this work does not address the online setting, but still its a relevant reference to have). For instance, Ramirez, Ignacio, and Guillermo Sapiro. "An MDL framework for sparse coding and dictionary learning." IEEE Transactions on Signal Processing 60.6 (2012): 2913-2927.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJk01vogl
ICLR.cc/2017/conference
2017
Adversarial examples for generative models
["Jernej Kos", "Ian Fischer", "Dawn Song"]
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
["Computer vision", "Unsupervised Learning"]
ABSTRACT
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
1 INTRODUCTION
Adversarial examples have been shown to exist for a variety of deep learning architectures. (Adversarial examples are even easier to produce against most other machine learning architectures, as shown in Papernot et al. (2016), but we are focused on deep networks.) They are small perturbations of the original inputs, often barely visible to a human observer, but carefully crafted to misguide the network into producing incorrect outputs. Seminal work by Szegedy et al. (2013) and Goodfellow et al. (2014), as well as much recent work, has shown that adversarial examples are abundant and finding them is easy.
Most previous work focuses on the application of adversarial examples to the task of classification, where the deep network assigns classes to input images. The attack adds small adversarial perturbations to the original input image. These perturbations cause the network to change its classification of the input, from the correct class to some other incorrect class (possibly chosen by the attacker). Critically, the perturbed input must still be recognizable to a human observer as belonging to the original input class. (Random noise images and "fooling" images (Nguyen et al., 2014) do not belong to this strict definition of an adversarial input, although they do highlight other limitations of current classifiers.)
Deep generative models, such as Kingma & Welling (2013), learn to generate a variety of outputs, ranging from handwritten digits to faces (Kulkarni et al., 2015), realistic scenes (Oord et al., 2016), videos (Kalchbrenner et al., 2016), 3D objects (Dosovitskiy et al., 2016), and audio (van den Oord et al., 2016). These models learn an approximation of the input data distribution in different ways, and then sample from this distribution to generate previously unseen but plausible outputs.
To the best of our knowledge, no prior work has explored using adversarial inputs to attack generative models. There are two main requirements for such work: describing a plausible scenario in which an attacker might want to attack a generative model; and designing and demonstrating an attack that succeeds against generative models. We address both of these requirements in this work.
One of the most basic applications of generative models is input reconstruction. Given an input image, the model first encodes it into a lower-dimensional latent representation, and then uses that representation to generate a reconstruction of the original input image. Since the latent representation usually has far fewer dimensions than the original input, it can be used as a form of compression. The latent representation can also be used to remove some types of noise from inputs, even when the network has not been explicitly trained for denoising, due to the lower dimensionality of the latent representation restricting what information the trained network is able to represent. Many generative models also allow manipulation of the generated output by sampling different latent values or modifying individual dimensions of the latent vectors without needing to pass through the encoding step.
These properties of input reconstruction generative networks suggest a variety of different attacks that would be enabled by effective adversaries against generative networks. Any attack that targets the compression bottleneck of the latent representation can exploit natural security vulnerabilities in applications built to use that latent representation. Specifically, if the person doing the encoding step is separated from the person doing the decoding step, the attacker may be able to cause the encoding party to believe they have encoded a particular message for the decoding party, but in reality they have encoded a different message of the attacker's choosing. We explore this idea in more detail as it applies to the application of compressing images using a VAE or VAE-GAN architecture.
2 RELATED WORK AND BACKGROUND
This work focuses on adversaries for variational autoencoders (VAEs, proposed in Kingma & Welling (2013)) and VAE-GANs (VAEs composed with a generative adversarial network, proposed in Larsen et al. (2015)).
2.1 RELATED WORK ON ADVERSARIES
Many adversarial attacks on classification models have been described in existing literature (Goodfellow et al., 2014; Szegedy et al., 2013). These attacks can be untargeted, where the adversary's goal is to cause any misclassification, or the least likely misclassification (Goodfellow et al., 2014; Kurakin et al., 2016); or they can be targeted, where the attacker desires a specific misclassification. Moosavi-Dezfooli et al. (2016) gives a recent example of a strong targeted adversarial attack. Some adversarial attacks allow for a threat model where the adversary does not have access to the target model (Szegedy et al., 2013; Papernot et al., 2016), but commonly it is assumed that the attacker does have that access, in an online or offline setting (Goodfellow et al., 2014; Kurakin et al., 2016). (See Papernot et al. (2015) for an overview of different adversarial threat models.)
Given a classifier $f(x): x \in \mathcal{X} \to y \in \mathcal{Y}$ and original inputs $x \in \mathcal{X}$, the problem of generating untargeted adversarial examples can be expressed as the optimization $\arg\min_{x^*} L(x, x^*) \;\; \mathrm{s.t.} \;\; f(x^*) \neq f(x)$, where $L(\cdot)$ is a chosen distance measure between examples from the input space (e.g., the $L_2$ norm). Similarly, generating a targeted adversarial attack on a classifier can be expressed as $\arg\min_{x^*} L(x, x^*) \;\; \mathrm{s.t.} \;\; f(x^*) = y_t$, where $y_t \in \mathcal{Y}$ is some target label chosen by the attacker.
These optimization problems can often be solved with optimizers like L-BFGS or Adam (Kingma & Ba, 2015), as done in Szegedy et al. (2013) and Carlini & Wagner (2016). They can also be approximated with single-step gradient-based techniques like fast gradient sign (Goodfellow et al., 2014), fast gradient $L_2$ (Huang et al., 2015), or fast least likely class (Kurakin et al., 2016); or they can be approximated with iterative variants of those and other gradient-based techniques (Kurakin et al., 2016; Moosavi-Dezfooli et al., 2016).
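As a concrete example of the single-step family just mentioned, here is a minimal PyTorch sketch of the fast gradient sign method (our own illustration; `model` and `loss_fn` are hypothetical stand-ins, and the clamp assumes inputs in [0, 1]):

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """Fast gradient sign: one gradient step in the sign direction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # untargeted: increase the true-class loss
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```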
An interesting variation of this type of attack can be found in Sabour et al. (2015). In that work, they attack the hidden state of the target network directly by taking an input image $x$ and a target image $x_t$ and searching for a perturbed variant of $x$ that generates a hidden state at layer $l$ of the target network similar to the hidden state generated by $x_t$ at the same layer. This approach can also be applied directly to attacking the latent vector of a generative model.
A variant of this attack has also been applied to VAE models in the concurrent work of Tabacof et al. (2016) (made public shortly after we published our early drafts), which uses the KL divergence between the latent representations of the source and target images to generate the adversarial example. However, in their paper the authors mention that they tried attacking the output directly and that this only managed to make the reconstructions blurrier. While they do not explain the exact experimental setting, the attack sounds similar to our $L_{VAE}$ attack, which we find very successful. Also, their paper does not consider the more advanced VAE-GAN models or more complex datasets like CelebA.
2.2 BACKGROUND ON VAES AND VAE-GANS
The general architecture of a variational autoencoder consists of three components, as shown in Figure 8. The encoder $f_{enc}(x)$ is a neural network mapping a high-dimensional input representation $x$ into a lower-dimensional (compressed) latent representation $z$. All possible values of $z$ form a latent space. Similar values in the latent space should produce similar outputs from the decoder in a well-trained VAE. And finally, the decoder/generator $f_{dec}(z)$ is a neural network mapping the compressed latent representation back to a high-dimensional output $\hat{x}$. Composing these networks allows basic input reconstruction, $\hat{x} = f_{dec}(f_{enc}(x))$. This composed architecture is used during training to backpropagate errors from the loss function.
The variational autoencoder's loss function $L_{VAE}$ enables the network to learn a latent representation that approximates the intractable posterior distribution $p(z|x)$:
$$L_{VAE} = D_{KL}[q(z|x) \,\|\, p(z)] - E_q[\log p(x|z)], \quad (1)$$
where $q(z|x)$ is the learned approximation of the posterior distribution $p(z|x)$, $p(z)$ is the prior distribution of the latent representation $z$, and $D_{KL}$ denotes the Kullback-Leibler divergence. $E_q[\log p(x|z)]$ is the reconstruction part of the variational lower bound; in the case of input reconstruction, its negation is the cross-entropy $H[x, \hat{x}]$ between the inputs $x$ and their reconstructions $\hat{x}$. In order to generate $\hat{x}$, the VAE needs to sample $q(z|x)$ and then compute $f_{dec}(z)$.
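To make Eq. (1) concrete, here is a minimal PyTorch sketch of this loss for the common case of a diagonal Gaussian q(z|x) with a standard normal prior and a Bernoulli decoder (our own illustration, not the paper's TensorFlow implementation; `decoder` is a hypothetical module with sigmoid outputs, and the sampling line uses the reparametrization trick described in the next paragraph):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, mu, logvar, decoder):
    # Closed-form KL[q(z|x) || N(0, I)] for a diagonal Gaussian q.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Reparametrized sample z = mu + sigma * eps (see the next paragraph).
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # Reconstruction term: cross-entropy H[x, x_hat].
    rec = F.binary_cross_entropy(decoder(z), x, reduction='sum')
    return kl + rec
```

In a full implementation, the `mu`/`logvar` pair would be produced by the encoder f_enc(x).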
For the VAE to be fully differentiable while sampling from $q(z|x)$, the reparametrization trick (Kingma & Welling, 2013) extracts the random sampling step from the network and turns it into an input, $\varepsilon$. VAEs are often parameterized with Gaussian distributions. In this case, $f_{enc}(x)$ outputs the distribution parameters $\mu$ and $\sigma^2$. That distribution is then sampled by computing $z = \mu + \varepsilon\sqrt{\sigma^2}$, where $\varepsilon \sim N(0, 1)$ is the input random sample, which does not depend on any parameters of $f_{enc}$, and thus does not impact differentiation of the network.
The VAE-GAN architecture of Larsen et al. (2015) has the same $f_{enc}$ and $f_{dec}$ pair as in the VAE. It also adds a discriminator $f_{disc}$ that is used during training, as in standard generative adversarial networks (Goodfellow et al., 2014). The loss function of $f_{dec}$ uses the discriminator loss instead of cross-entropy for estimating the reconstruction error.
3 PROBLEM DEFINITION
We provide a motivating attack scenario for adversaries against generative models, as well as a formal definition of an adversary in the generative setting.
3.1 MOTIVATING ATTACK SCENARIO
To motivate the attacks presented below, we describe the attack scenario depicted in Figure 1. In this scenario, there are two parties, the sender and the receiver, who wish to share images with each other over a computer network. In order to conserve bandwidth, they share a VAE trained on the input distribution of interest, which will allow them to send only latent vectors $z$.
[Figure 1: Depiction of the attack scenario. The VAE is used as a compression scheme to transmit a latent representation of the image from the sender (left) to the receiver (right). The attacker convinces the sender to compress a particular image into its latent vector, which is sent to the receiver, where the decoder reconstructs the latent vector into some other image chosen by the attacker.]
[Figure 2: Results for the L2 optimization latent attack (see Section 4.3) on the VAE-GAN, targeting a specific image from the class 0. Shown are the first 12 non-zero images from the SVHN test data set. The columns are, in order: the original image, the reconstruction of the original image, the adversarial example, the predicted class of the adversarial example, the reconstruction of the adversarial example, the predicted class of the reconstructed adversarial example, the reconstruction of the reconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.]
The attacker's goal is to convince the sender to send an image of the attacker's choosing to the receiver, but the attacker has no direct control over the bytes sent between the two parties. However, the attacker has a copy of the shared VAE. The attacker presents an image $x^*$ to the sender which resembles an image $x$ that the sender wants to share with the receiver. For example, the sender wants to share pictures of kittens with the receiver, so the attacker presents a web page to the sender with a picture of a kitten, which is $x^*$. The sender chooses $x^*$ and sends its corresponding $z$ to the receiver, who reconstructs it. However, because the attacker controlled the chosen image, when the receiver reconstructs it, instead of getting a faithful reproduction $\hat{x}$ of $x$ (e.g., a kitten), the receiver sees some other image of the attacker's choosing, $\hat{x}_{adv}$, which has a different meaning from $x$ (e.g., a request to send money to the attacker's bank account).
There are other attacks of this general form, where the sender and the receiver may be separated by distance, as in this example, or by time, in the case of storing compressed images to disk for later retrieval. In the time-separated attack, the sender and the receiver may be the same person or multiple different people.
In either case, if they are using the insecure channel of the VAE's latent space, the messages they share may be under the control of an attacker. For example, an attacker may be able to fool an automatic surveillance system if the system uses this type of compression to store the video signal before it is processed by other systems. In this case, the subsequent analysis of the video signal could be performed on compromised data showing what the attacker wants to show.
While we do not specifically attack their models, viable compression schemes based on deep neural networks have already been proposed in the literature, showing promising results (Toderici et al., 2015; 2016).
3.2 DEFINING ADVERSARIAL EXAMPLES AGAINST GENERATIVE MODELS
We make the following assumptions about generating adversarial examples on a target generative model, $G_{targ}(x) = f_{dec}(f_{enc}(x))$. $G_{targ}$ is trained on inputs $X$ that can naturally be labeled with semantically meaningful classes $Y$, although there may be no such labels at training time, or the labels may not have been used during training. $G_{targ}$ normally succeeds at generating an output $\hat{x} = G_{targ}(x)$ in class $y$ when presented with an input $x$ from class $y$. In other words, whatever target output class the attacker is interested in, we assume that $G_{targ}$ successfully captures it in the latent representation such that it can generate examples of that class from the decoder. This target output class does not need to be among the most salient classes in the training dataset. For example, on models trained on MNIST, the attacker may not care about generating different target digits (which are the most salient classes); the attacker may prefer to generate the same input digits in a different style (perhaps to aid forgery). We also assume that the attacker has access to $G_{targ}$. Finally, the attacker has access to a set of examples from the same distribution as $X$ that have the target label $y_t$ the attacker wants to generate.
[Figure 3: The VAE-GAN classifier architecture used to generate classifier-based adversarial examples on the VAE-GAN. The VAE-GAN in the dashed box is the target network and is frozen while training the classifier. The path x → f_enc → z → f_class → ŷ is used to generate adversarial examples in z, which can then be reconstructed by f_dec.]
This does not mean that the attacker needs access to the labeled training dataset (which may not exist), or to an appropriate labeled dataset with large numbers of examples labeled for each class $y \in Y$ (which may be hard or expensive to collect). The attacks described here may be successful with only a small amount of data labeled for a single target class of interest.
One way to generate such adversaries is by solving the optimization problem $\arg\min_{x^*} L(x, x^*) \;\; \mathrm{s.t.} \;\; \mathrm{ORACLE}(G_{targ}(x^*)) = y_t$, where ORACLE reliably discriminates between inputs of class $y_t$ and inputs of other classes. In practice, a classifier trained by the attacker may serve as ORACLE. Other types of adversaries from Section 2.1 can also be used to approximate this optimization in natural ways, some of which we describe in Section 4.
If the attacker only needs to generate one successful attack, the problem of determining if an attack is successful can be solved by manually reviewing the $x^*$ and $\hat{x}_{adv}$ pairs and choosing whichever the attacker considers best. However, if the attacker wants to generate many successful attacks, an automated method of evaluating the success of an attack is necessary.
We show in Section 4.5 how to measure the effectiveness of an attack automatically, using a classifier trained on $z = f_{enc}(x)$.
4 ATTACK METHODOLOGY
The attacker would like to construct an adversarially-perturbed input to influence the latent representation in a way that will cause the reconstruction process to reconstruct an output for a different class. We propose three approaches to attacking generative models: a classifier-based attack, where we train a new classifier on top of the latent space $z$ and use that classifier to find adversarial examples in the latent space; an attack using $L_{VAE}$ to target the output directly; and an attack on the latent space, $z$. All three methods are technically applicable to any generative architecture that relies on a learned latent representation $z$. Without loss of generality, we focus on the VAE-GAN architecture.
4.1 CLASSIFIER ATTACK
By adding a classifier $f_{class}$ to the pre-trained generative model (this is similar to the process of semi-supervised learning in Kingma et al. (2014), although the goal is different), we can turn the problem of generating adversaries for generative models back into the previously solved problem of generating adversarial examples for classifiers. This approach allows us to apply all of the existing attacks on classifiers in the literature. However, as discussed below, using this classifier tends to produce lower-quality reconstructions from the adversarial examples than the other two attacks, due to the inaccuracies of the classifier.
Step 1. The weights of the target generative model are frozen, and a new classifier $f_{class}(z) \to \hat{y}$ is trained on top of $f_{enc}$ using a standard classification loss $L_{classifier}$ such as cross-entropy, as shown in Figure 3. This process is independent of how the original model is trained, but it requires a training corpus pulled from approximately the same input distribution as was used to train $G_{targ}$, with ground-truth labels for at least two classes: $y_t$ and $y_{\neg t}$, the negative class.
Step 2. With the trained classifier, the attacker finds adversarial examples $x^*$ using the methods described in Section 4.4.
Using $f_{class}$ to generate adversarial examples does not always result in high-quality reconstructions, as can be seen in the middle column of Figure 5 and in Figure 11. This appears to be due to the fact that $f_{class}$ adds additional noise to the process. For example, $f_{class}$ sometimes confidently misclassifies latent vectors $z$ that represent inputs that are far from the training data distribution, resulting in $f_{dec}$ failing to reconstruct a plausible output from the adversarial example.
4.2 $L_{VAE}$ ATTACK
Our second approach generates adversarial perturbations using the VAE loss function. The attacker chooses two inputs, $x_s$ (the source) and $x_t$ (the target), and uses one of the standard adversarial methods to perturb $x_s$ into $x^*$ such that its reconstruction $\hat{x}^*$ matches the reconstruction of $x_t$, using the methods described in Section 4.4.
The adversary precomputes the reconstruction $\hat{x}_t$ by evaluating $f_{dec}(f_{enc}(x_t))$ once before performing the optimization. In order to use $L_{VAE}$ in an attack, the second term (the reconstruction loss) of $L_{VAE}$ (see Equation 1) is changed so that instead of computing the reconstruction loss between $x$ and $\hat{x}$, the loss is computed between $\hat{x}^*$ and $\hat{x}_t$. This means that during each optimization iteration, the adversary needs to compute $\hat{x}^*$, which requires the full $f_{dec}(f_{enc}(x^*))$ to be evaluated.
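A minimal PyTorch sketch of this attack is shown below. It is our own illustration under the L2 optimization formulation of Section 4.4: for brevity the KL term of Eq. (1) is dropped and only the modified reconstruction term is kept, using binary cross-entropy as in Eq. (1); `f_enc` and `f_dec` are hypothetical encoder/decoder modules, and the clamp to [0, 1] is our assumption about the input range.

```python
import torch
import torch.nn.functional as F

def lvae_attack(f_enc, f_dec, x_s, x_t, lam=1.0, steps=1000, lr=0.1):
    """Perturb x_s so that its reconstruction matches the (precomputed)
    reconstruction of x_t; requires a full reconstruction per step."""
    x_hat_t = f_dec(f_enc(x_t)).detach()       # precomputed target reconstruction
    x_adv = x_s.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = f_dec(f_enc(x_adv))            # full reconstruction each iteration
        loss = lam * (x_adv - x_s).pow(2).sum() \
               + F.binary_cross_entropy(x_hat, x_hat_t, reduction='sum')
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)             # keep a valid image (assumption)
    return x_adv.detach()
```

The per-step call to `f_dec(f_enc(...))` is what makes this attack the slowest of the three, as discussed in Section 5.4.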
4.3 LATENT ATTACK
Our third approach attacks the latent space of the generative model.
Single latent vector target. This attack is similar to the work of Sabour et al. (2015), in which they use a pair of a source image $x_s$ and a target image $x_t$ to generate $x^*$ that induces the target network to produce activations at some hidden layer $l$ similar to those produced by $x_t$, while maintaining similarity between $x_s$ and $x^*$.
For this attack to work on latent generative models, it is sufficient to compute $z_t = f_{enc}(x_t)$ and then use the following loss function to generate adversarial examples from different source images $x_s$, using the methods described in Section 4.4:
$$L_{latent} = L(z_t, f_{enc}(x^*)). \quad (2)$$
$L(\cdot)$ is a distance measure between two vectors. We use the $L_2$ norm, under the assumption that the latent space is approximately Euclidean. We also explored a variation on the single latent vector target attack, which we describe in Section A.1 in the Appendix.
4.4 METHODS FOR SOLVING THE ADVERSARIAL OPTIMIZATION PROBLEM
We can use a number of different methods to generate the adversarial examples. We initially evaluated both the fast gradient sign method (Goodfellow et al., 2014) and an $L_2$ optimization method. As the latter produces much better results, we focus on the $L_2$ optimization method, while including some FGS results in the Appendix. The attack can be used either in targeted mode (where we want a specific class, $y_t$, to be reconstructed) or untargeted mode (where we just want an incorrect class to be reconstructed). In this paper, we focus on the targeted mode of the attacks.
$L_2$ optimization. The optimization-based approach, explored in Szegedy et al. (2013) and Carlini & Wagner (2016), poses the adversarial generation problem as the following optimization problem:
$$\arg\min_{x^*} \; \lambda\, L(x, x^*) + \mathcal{L}(x^*, y_t). \quad (3)$$
As above, $L(\cdot)$ is a distance measure, and $\mathcal{L}$ is one of $L_{classifier}$, $L_{VAE}$, or $L_{latent}$. The constant $\lambda$ is used to balance the two loss contributions. For the $L_{VAE}$ attack, the optimizer must do a full reconstruction at each step; the other two attacks do not need to do reconstructions while the optimizer is running, so they generate adversarial examples much more quickly, as shown in Table 1. A sketch of the latent-attack variant of this optimization follows.
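Here is a minimal PyTorch sketch of Eq. (3) instantiated with the latent loss of Eq. (2). It is our own illustration: `f_enc` is a hypothetical encoder returning the mean latent vector, the defaults mirror the λ = 1.0, 1000 iterations, and learning rate 0.1 reported for MNIST below, and the clamp to [0, 1] is our assumption about the input range.

```python
import torch

def latent_attack(f_enc, x_s, x_t, lam=1.0, steps=1000, lr=0.1):
    """argmin_x* lam * ||x* - x_s||^2 + ||z_t - f_enc(x*)||^2."""
    z_t = f_enc(x_t).detach()                  # target latent, computed once
    x_adv = x_s.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * (x_adv - x_s).pow(2).sum() \
               + (z_t - f_enc(x_adv)).pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)             # keep a valid image (assumption)
    return x_adv.detach()
```

Unlike the L_VAE sketch above, no decoder call appears inside the loop, which is why this attack runs substantially faster.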
4.5 MEASURING ATTACK EFFECTIVENESS
To generate a large number of adversarial examples automatically against a generative model, the attacker needs a way to judge the quality of the adversarial examples. We leverage $f_{class}$ to estimate whether a particular attack was successful. (Note that $f_{class}$ is used here in a different manner than when we use it to generate adversarial examples; however, the network itself is identical, so we don't distinguish between the two uses in the notation.)
Reconstruction feedback loop. The architecture is the same as shown in Figure 3. We use the generative model to reconstruct the attempted adversarial inputs $x^*$ by computing
$$\hat{x}^* = f_{dec}(f_{enc}(x^*)). \quad (4)$$
Then, $f_{class}$ is used to compute
$$\hat{y} = f_{class}(f_{enc}(\hat{x}^*)). \quad (5)$$
The input adversarial examples $x^*$ are not classified directly, but are first fed to the generative model for reconstruction. This reconstruction loop improves the accuracy of the classifier by 60% on average against the adversarial attacks we examined. The predicted label $\hat{y}$ after the reconstruction feedback loop is compared with the attack target $y_t$ to determine if the adversarial example successfully reconstructed to the target class. If the precision and recall of $f_{class}$ are sufficiently high on $y_t$, $f_{class}$ can be used to filter out most of the failed adversarial examples while keeping most of the good ones.
We derive two metrics from the classifier predictions after one reconstruction feedback loop. The first metric is $AS_{ignore-target}$, the attack success rate ignoring targeting, i.e., without requiring the output class of the adversarial example to match the target class:
$$AS_{ignore-target} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}_{\hat{y}_i \neq y_i}, \quad (6)$$
where $N$ is the total number of reconstructed adversarial examples, and $\mathbf{1}_{\hat{y}_i \neq y_i}$ is $1$ when $\hat{y}_i$, the classification of the reconstruction for image $i$, does not equal $y_i$, the ground-truth classification of the original image, and $0$ otherwise. The second metric is $AS_{target}$, the attack success rate including targeting (i.e., requiring the output class of the adversarial example to match the target class), which we define similarly as:
$$AS_{target} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}_{\hat{y}_i = y_t^i}. \quad (7)$$
Both metrics are expected to be higher for more successful attacks. Note that $AS_{target} \leq AS_{ignore-target}$. When computing these metrics, we exclude input examples that have the same ground-truth class as the target class.
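A small numpy restatement of these two metrics (our own sketch; `y_true`, `y_pred`, and `y_target` are per-example label arrays of equal length, and the first step applies the exclusion rule just described):

```python
import numpy as np

def attack_success_rates(y_true, y_pred, y_target):
    """AS_ignore-target (Eq. 6) and AS_target (Eq. 7)."""
    # Exclude examples whose ground-truth class equals the target class.
    keep = y_true != y_target
    y_true, y_pred, y_target = y_true[keep], y_pred[keep], y_target[keep]
    as_ignore_target = float(np.mean(y_pred != y_true))
    as_target = float(np.mean(y_pred == y_target))
    return as_ignore_target, as_target
```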
5 EVALUATION
We evaluate the three attacks on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011) and CelebA (Liu et al., 2015), using the standard training and validation set splits. The VAE and VAE-GAN architectures are implemented in TensorFlow (Abadi et al., 2015). We optimized using Adam with learning rate 0.001 and other parameters set to default values for both the generative model and the classifier. For the VAE, we use two architectures: a simple architecture with a single fully-connected hidden layer with 512 units and ReLU activation function; and a convolutional architecture taken from the original VAE-GAN paper (Larsen et al., 2015), but trained with only the VAE loss. We use the same architecture trained with the additional GAN loss for the VAE-GAN model, as described in that work. For both VAE and VAE-GAN we use a 50-dimensional latent representation on MNIST, a 1024-dimensional latent representation on SVHN, and a 2048-dimensional latent representation on CelebA.
[Figure 4: Results for the L2 optimization latent attack on the VAE-GAN, targeting the mean latent vector for 0. Shown are the first 12 non-zero images from the MNIST test data set. The columns are, in order: the original image, the reconstruction of the original image, the adversarial example, the predicted class of the adversarial example, the reconstruction of the adversarial example, the predicted class of the reconstructed adversarial example, the reconstruction of the reconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.]
In this section we only show results where no sampling from the latent space has been performed; instead, we use the mean vector as the latent representation $z$. As sampling can have an effect on the resulting reconstructions, we evaluated it separately. We show the results with different numbers of samples in Figure 22 in the Appendix. On most examples, the visible change is small, and in general the attack is still successful.
5.1 MNIST
Both the VAE and the VAE-GAN by themselves reconstruct the original inputs well, as shown in Figure 9, although the quality from the VAE-GAN is noticeably better. As a control, we also generate random noise of the same magnitude as used for the adversarial examples (see Figure 13), to show that random noise does not cause the reconstructed noisy images to change in any significant way. Although we ran experiments on both VAEs and VAE-GANs, we only show results for the VAE-GAN, as it generates much higher quality reconstructions than the corresponding VAE.
5.1.1 CLASSIFIER ATTACK
We use a simple classifier architecture to help generate attacks on the VAE and VAE-GAN models. The classifier consists of two fully-connected hidden layers with 512 units each, using the ReLU activation function. The output layer is a 10-dimensional softmax. The input to the classifier is the 50-dimensional latent representation produced by the VAE/VAE-GAN encoder. The classifier achieves 98.05% accuracy on the validation set after training for 100 epochs.
To see whether there are differences between classes, we generate targeted adversarial examples for each MNIST class and present the results per class. For the targeted attacks we used the optimization method with $\lambda = 0.001$, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean $L_2$ norm of the difference between the original images and the generated adversarial examples using the classifier attack is 3.36, while the mean RMSD is 0.120.
The numerical results in Table 2 show that the targeted classifier attack successfully fools the classifier. Classifier accuracy is reduced to 0%, while the matching rate (the ratio between the number of predictions matching the target class and the number of incorrectly classified images) is 100%, which means that all incorrect predictions match the target class. However, what we are interested in (as per the attack definition from Section 3.2) is how the generative model reconstructs the adversarial examples. If we look at the images generated by the VAE-GAN for class 0, shown in Figure 4, the targeted attack is successful on some reconstructed images (e.g., one, four, five, six and nine are reconstructed as zeroes). But even when the classifier accuracy is 0% and the matching rate is 100%, an incorrect classification does not always result in a reconstruction to the target class, which shows that the classifier is fooled by an adversarial example more easily than the generative model.
[Figure 5: Left: representative adversarial examples with a target class of 0 on the first 100 non-zero images from the MNIST validation set, produced using the L2 optimization latent attack (Section 4.3). Middle: VAE-GAN reconstructions from adversarial examples produced using the L2 optimization classifier attack on the same set of 100 validation images (those adversaries are not shown, but are qualitatively similar; see Section 4.1). Right: VAE-GAN reconstructions from the adversarial examples in the left column. Many of the classifier adversarial examples fail to reconstruct as zeros, whereas almost every adversarial example from the latent attack reconstructs as zero.]
Reconstruction feedback loop. The reconstruction feedback loop described in Section 4.5 can be used to measure how well a targeted attack succeeds in making the generative model change the reconstructed classes. Table 4 in the Appendix shows $AS_{ignore-target}$ and $AS_{target}$ for all source and target class pairs. A higher value signifies a more successful attack for that pair of classes. It is interesting to observe that attacking some source/target pairs is much easier than others (e.g.,
the pair (4, 0) vs. (0, 8)) and that the results are not symmetric over source/target pairs. Also, some pairs do well in $AS_{ignore-target}$ but do poorly in $AS_{target}$ (e.g., all source digits when targeting 4). As can be seen in Figure 11, the classifier adversarial examples targeting 4 consistently fail to reconstruct into something easily recognizable as a 4. Most of the reconstructions look like 5, but the adversarial example reconstructions of source 5s instead look like 0 or 3.
5.1.2 $L_{VAE}$ ATTACK
For generating adversarial examples using the $L_{VAE}$ attack, we used the optimization method with $\lambda = 1.0$, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean $L_2$ norm of the difference between the original images and the generated adversarial examples with this approach is 3.68, while the mean RMSD is 0.131.
We show $AS_{ignore-target}$ and $AS_{target}$ of the $L_{VAE}$ attack in Table 5 in the Appendix. Comparing with the numerical evaluation results of the latent attack (below), we can see that both methods achieve similar results on MNIST.
5.1.3 LATENT ATTACK
To generate adversarial examples using the latent attack, we used the optimization method with $\lambda = 1.0$, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean $L_2$ norm of the difference between the original images and the generated adversarial examples using this approach is 2.96, while the mean RMSD is 0.105.
Table 3 shows $AS_{ignore-target}$ and $AS_{target}$ for all source and target class pairs. Comparing with the numerical evaluation results of the classifier attack, we can see that the latent attack performs much better. This result remains true when visually comparing the reconstructed images, shown in Figure 5.
We also tried an untargeted version of the latent attack, where we change Equation 2 to maximize the distance in latent space between the encoding of the original image and the encoding of the adversarial example. In this case the loss we are trying to minimize is unbounded, since the $L_2$ distance can always grow larger, so the attack normally fails to generate a reasonable adversarial example.
[Figure 6: Left: VAE-GAN reconstructions of adversarial examples generated using the L2 optimization LVAE attack (single image target). Right: VAE-GAN reconstructions of adversarial examples generated using the L2 optimization latent attack (single image target). Approximately 85 out of 100 images are convincing zeros for the L2 latent attack, whereas only about 5 out of 100 could be mistaken for zeros with the LVAE attack.]
Additionally, we also experimented with targeting latent representations of specific images from the training set instead of taking the mean, as described in Section 4.3. We show the numerical results in Table 3 and the generated reconstructions in Figure 15 (in the Appendix). It is also interesting to compare the results with $L_{VAE}$ by choosing the same image as the target. Results for $L_{VAE}$ for the same target images as in Table 3 are shown in Table 6 in the Appendix. The results are identical between the two attacks, which is expected as the target image is the same; only the loss function differs between the methods.
5.2 SVHN
The SVHN dataset consists of cropped street number images and is much less clean than MNIST. Due to the way the images have been processed, each image may contain more than one digit; the target digit is roughly in the center.
The VAE-GAN produces high-quality reconstructions of the original images, as shown in Figure 17 in the Appendix.
For the classifier attack, we set $\lambda = 10^{-5}$ after testing a range of values, although we were unable to find an effective value for this attack against SVHN. For the latent and $L_{VAE}$ attacks we set $\lambda = 10$.
In Table 10 we show $AS_{ignore-target}$ and $AS_{target}$ for the $L_2$ optimization latent attack. The evaluation metrics are less strong on SVHN than on MNIST, but it is still straightforward for an attacker to find a successful attack for almost all source/target pairs. Figure 2 supports this evaluation. Visual inspection shows that 11 out of the 12 adversarial examples reconstructed as 0, the target digit. It is worth noting that 2 out of the 12 adversarial examples look like zeros (rows 1 and 11), and two others look like both the original digit and zero, depending on whether the viewer focuses on the light or dark areas of the image (rows 4 and 7). The $L_2$ optimization latent attack achieves much better results than the $L_{VAE}$ attack (see Table 11 and Figure 6) on SVHN, while both attacks work equally well on MNIST.
5.3 CELEBA
The CelebA dataset consists of more than 200,000 cropped faces of celebrities, each annotated with 40 different attributes. For our experiments, we further scale the images to 64x64 and ignore the attribute annotations. VAE-GAN reconstructions of original images after training are shown in Figure 19 in the Appendix.
Since faces don't have natural classes, we only evaluated the latent and $L_{VAE}$ attacks. We tried values of $\lambda$ ranging from 0.1 to 0.75 for both attacks. Figure 20 shows adversarial examples generated
The latent attack consistently gives the bestresults in our experiments, and the classifier attack performs the worst.We also measure the time it takes to generate 1000 adversarial examples using the given attackmethod. TheLVAE attack is by far the slowest of the three, due to the fact that it requires computingfull reconstructions at each step of the optimizer when generating the adversarial examples. Theother two attacks do not need to run the reconstruction step during optimization of the adversarialexamples.6 C ONCLUSIONWe explored generating adversarial examples against generative models such as V AEs and V AE-GANs. These models are also vulnerable to adversaries that convince them to turn inputs intosurprisingly different outputs. We have also motivated why an attacker might want to attack gen-erative models. Our work adds further support to the hypothesis that adversarial examples are ageneral phenomenon for current neural network architectures, given our successful application ofadversarial attacks to popular generative models. In this work, we are helping to lay the foundationsfor understanding how to build more robust networks. Future work will explore defense and robusti-fication in greater depth as well as attacks on generative models trained using natural image datasetssuch as CIFAR-10 and ImageNet.ACKNOWLEDGMENTSThis material is in part based upon work supported by the National Science Foundation under GrantNo. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this11Under review as a conference paper at ICLR 2017material are those of the author(s) and do not necessarily reflect the views of the National ScienceFoundation.
HJ4v6R_Ng
6: Marginally above acceptance threshold
Comments: "This contrasts to adversarial attacks on classifiers, where any inspection of the inputs will reveal the original bytes the adversary supplied, which often have telltale noise" Is this really true? If it were the case, wouldn't it imply that training "against" adversarial examples should easily make a classifier robust to adversarial examples (if they all have a telltale noise)? Pros: -The question of whether adversarial examples exist in generative models, and indeed how the definition of "adversarial example" carries over is an interesting one. -Finding that a certain type of generative model *doesn't have* adversarial examples would be a really significant result, finding that generative models have adversarial examples would also be a worth negative result. -The adversarial examples in figures 5 and 6 seem convincing, though they seem much more overt and noisy than the adversarial examples on MNIST shown in (Szegedy 2014). Is this because it's actually harder to find adversarial examples in these types of generative models? Issues: -Paper is significantly over length at 13 pages. -The beginning of the paper should more clearly motivate its purpose. -Paper has "generative models" in the title but as far as I can tell the whole paper is concerned with autoencoder-type models. This is kind of annoying because if someone wanted to consider adversarial attacks on, say, autoregressive models, they might be unreasonably burdened by having to explain how they're distinct from a paper called "adversarial examples for generative models". -I think that the introduction contains too much background information - it could be tightened.
3: The reviewer is fairly confident that the evaluation is correct
SJk01vogl
ICLR.cc/2017/conference
2017
Adversarial examples for generative models
["Jernej Kos", "Ian Fischer", "Dawn Song"]
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
["Computer vision", "Unsupervised Learning"]
ABSTRACT
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.

1 INTRODUCTION
Adversarial examples have been shown to exist for a variety of deep learning architectures.[1] They are small perturbations of the original inputs, often barely visible to a human observer, but carefully crafted to misguide the network into producing incorrect outputs. Seminal work by Szegedy et al. (2013) and Goodfellow et al. (2014), as well as much recent work, has shown that adversarial examples are abundant and finding them is easy.
Most previous work focuses on the application of adversarial examples to the task of classification, where the deep network assigns classes to input images. The attack adds small adversarial perturbations to the original input image. These perturbations cause the network to change its classification of the input, from the correct class to some other incorrect class (possibly chosen by the attacker). Critically, the perturbed input must still be recognizable to a human observer as belonging to the original input class.[2]
Deep generative models, such as Kingma & Welling (2013), learn to generate a variety of outputs, ranging from handwritten digits to faces (Kulkarni et al., 2015), realistic scenes (Oord et al., 2016), videos (Kalchbrenner et al., 2016), 3D objects (Dosovitskiy et al., 2016), and audio (van den Oord et al., 2016). These models learn an approximation of the input data distribution in different ways, and then sample from this distribution to generate previously unseen but plausible outputs.
To the best of our knowledge, no prior work has explored using adversarial inputs to attack generative models. There are two main requirements for such work: describing a plausible scenario in which an attacker might want to attack a generative model; and designing and demonstrating an attack that succeeds against generative models. We address both of these requirements in this work.
One of the most basic applications of generative models is input reconstruction. Given an input image, the model first encodes it into a lower-dimensional latent representation, and then uses that representation to generate a reconstruction of the original input image.
[1] Adversarial examples are even easier to produce against most other machine learning architectures, as shown in Papernot et al.
(2016), but we are focused on deep networks.
[2] Random noise images and "fooling" images (Nguyen et al., 2014) do not belong to this strict definition of an adversarial input, although they do highlight other limitations of current classifiers.
Since the latent representation usually has far fewer dimensions than the original input, it can be used as a form of compression. The latent representation can also be used to remove some types of noise from inputs, even when the network has not been explicitly trained for denoising, due to the lower dimensionality of the latent representation restricting what information the trained network is able to represent. Many generative models also allow manipulation of the generated output by sampling different latent values or modifying individual dimensions of the latent vectors without needing to pass through the encoding step.
These properties of input reconstruction generative networks suggest a variety of different attacks that would be enabled by effective adversaries against generative networks. Any attack that targets the compression bottleneck of the latent representation can exploit natural security vulnerabilities in applications built to use that latent representation. Specifically, if the person doing the encoding step is separated from the person doing the decoding step, the attacker may be able to cause the encoding party to believe they have encoded a particular message for the decoding party, but in reality they have encoded a different message of the attacker's choosing. We explore this idea in more detail as it applies to the application of compressing images using a VAE or VAE-GAN architecture.

2 RELATED WORK AND BACKGROUND
This work focuses on adversaries for variational autoencoders (VAEs, proposed in Kingma & Welling (2013)) and VAE-GANs (VAEs composed with a generative adversarial network, proposed in Larsen et al. (2015)).

2.1 RELATED WORK ON ADVERSARIES
Many adversarial attacks on classification models have been described in existing literature (Goodfellow et al., 2014; Szegedy et al., 2013). These attacks can be untargeted, where the adversary's goal is to cause any misclassification, or the least likely misclassification (Goodfellow et al., 2014; Kurakin et al., 2016); or they can be targeted, where the attacker desires a specific misclassification. Moosavi-Dezfooli et al. (2016) gives a recent example of a strong targeted adversarial attack. Some adversarial attacks allow for a threat model where the adversary does not have access to the target model (Szegedy et al., 2013; Papernot et al., 2016), but commonly it is assumed that the attacker does have that access, in an online or offline setting (Goodfellow et al., 2014; Kurakin et al., 2016).[3]
Given a classifier f(x): x ∈ X → y ∈ Y and original inputs x ∈ X, the problem of generating untargeted adversarial examples can be expressed as the optimization argmin_{x*} L(x, x*) s.t. f(x*) ≠ f(x), where L(·) is a chosen distance measure between examples from the input space (e.g., the L2 norm). Similarly, generating a targeted adversarial attack on a classifier can be expressed as argmin_{x*} L(x, x*) s.t. f(x*) = y_t, where y_t ∈ Y is some target label chosen by the attacker.
These optimization problems can often be solved with optimizers like L-BFGS or Adam (Kingma & Ba, 2015), as done in Szegedy et al. (2013) and Carlini & Wagner (2016).
They can also be approximated with single-step gradient-based techniques like fast gradient sign (Goodfellow et al., 2014), fast gradient L2 (Huang et al., 2015), or fast least likely class (Kurakin et al., 2016); or they can be approximated with iterative variants of those and other gradient-based techniques (Kurakin et al., 2016; Moosavi-Dezfooli et al., 2016).
An interesting variation of this type of attack can be found in Sabour et al. (2015). In that work, they attack the hidden state of the target network directly by taking an input image x and a target image x_t and searching for a perturbed variant of x that generates a hidden state at layer l of the target network similar to the hidden state at the same layer generated by x_t. This approach can also be applied directly to attacking the latent vector of a generative model.
A variant of this attack has also been applied to VAE models in the concurrent work of Tabacof et al. (2016)[4], which uses the KL divergence between the latent representations of the source and target images to generate the adversarial example. However, in their paper the authors mention that they tried attacking the output directly and that this only managed to make the reconstructions more blurry. While they do not explain the exact experimental setting, the attack sounds similar to our L_VAE attack, which we find very successful. Also, in their paper the authors do not consider the more advanced VAE-GAN models and more complex datasets like CelebA.
[3] See Papernot et al. (2015) for an overview of different adversarial threat models.
[4] This work was made public shortly after we published our early drafts.
Figure 1: Depiction of the attack scenario. The VAE is used as a compression scheme to transmit a latent representation of the image from the sender (left) to the receiver (right). The attacker convinces the sender to compress a particular image into its latent vector, which is sent to the receiver, where the decoder reconstructs the latent vector into some other image chosen by the attacker.

2.2 BACKGROUND ON VAES AND VAE-GANS
The general architecture of a variational autoencoder consists of three components, as shown in Figure 8. The encoder f_enc(x) is a neural network mapping a high-dimensional input representation x into a lower-dimensional (compressed) latent representation z. All possible values of z form a latent space. Similar values in the latent space should produce similar outputs from the decoder in a well-trained VAE. And finally, the decoder/generator f_dec(z) is a neural network mapping the compressed latent representation back to a high-dimensional output x̂. Composing these networks allows basic input reconstruction x̂ = f_dec(f_enc(x)). This composed architecture is used during training to backpropagate errors from the loss function.
The variational autoencoder's loss function L_VAE enables the network to learn a latent representation that approximates the intractable posterior distribution p(z|x):

    L_VAE = -D_KL[q(z|x) || p(z)] + E_q[log p(x|z)]    (1)

q(z|x) is the learned approximation of the posterior distribution p(z|x). p(z) is the prior distribution of the latent representation z. D_KL denotes the Kullback–Leibler divergence. E_q[log p(x|z)] is the variational lower bound, which in the case of input reconstruction is the cross-entropy H[x, x̂] between the inputs x and their reconstructions x̂.
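As a concrete illustration, here is a minimal sketch (not the paper's TensorFlow implementation; the function and variable names are our own) of the minimization objective corresponding to Equation 1 — the KL term plus the reconstruction cross-entropy — for a Gaussian posterior and a Bernoulli decoder:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """Per-example VAE minimization objective for a Gaussian posterior
    q(z|x) = N(mu, sigma^2) and a Bernoulli decoder.

    x, x_hat : flattened images with pixel values in [0, 1]
    mu, log_var : distribution parameters produced by the encoder f_enc(x)
    """
    # Closed-form KL divergence D_KL[N(mu, sigma^2) || N(0, I)].
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    # Reconstruction term: cross-entropy H[x, x_hat].
    eps = 1e-8  # avoids log(0)
    recon = -np.sum(x * np.log(x_hat + eps) + (1 - x) * np.log(1 - x_hat + eps))
    return kl + recon
```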
In order to generate x̂, the VAE needs to sample q(z|x) and then compute f_dec(z). For the VAE to be fully differentiable while sampling from q(z|x), the reparametrization trick (Kingma & Welling, 2013) extracts the random sampling step from the network and turns it into an input, ε. VAEs are often parameterized with Gaussian distributions. In this case, f_enc(x) outputs the distribution parameters μ and σ². That distribution is then sampled by computing z = μ + ε·sqrt(σ²), where ε ∼ N(0, 1) is the input random sample, which does not depend on any parameters of f_enc, and thus does not impact differentiation of the network.
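A minimal sketch of this reparametrized sampling step (illustrative names, not the authors' code):

```python
import numpy as np

def sample_latent(mu, log_var, rng=np.random.default_rng()):
    """Reparametrization trick: z = mu + epsilon * sigma, epsilon ~ N(0, 1).

    The randomness enters as the input epsilon, so the path from the
    encoder outputs (mu, log_var) to z remains differentiable.
    """
    epsilon = rng.standard_normal(mu.shape)
    return mu + epsilon * np.sqrt(np.exp(log_var))
```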
The VAE-GAN architecture of Larsen et al. (2015) has the same f_enc and f_dec pair as in the VAE. It also adds a discriminator f_disc that is used during training, as in standard generative adversarial networks (Goodfellow et al., 2014). The loss function of f_dec uses the discriminator loss instead of cross-entropy for estimating the reconstruction error.

3 PROBLEM DEFINITION
We provide a motivating attack scenario for adversaries against generative models, as well as a formal definition of an adversary in the generative setting.

3.1 MOTIVATING ATTACK SCENARIO
To motivate the attacks presented below, we describe the attack scenario depicted in Figure 1. In this scenario, there are two parties, the sender and the receiver, who wish to share images with each other over a computer network. In order to conserve bandwidth, they share a VAE trained on the input distribution of interest, which will allow them to send only latent vectors z.
Figure 2: Results for the L2 optimization latent attack (see Section 4.3) on the VAE-GAN, targeting a specific image from the class 0. Shown are the first 12 non-zero images from the test SVHN data set. The columns are, in order: the original image, the reconstruction of the original image, the adversarial example, the predicted class of the adversarial example, the reconstruction of the adversarial example, the predicted class of the reconstructed adversarial example, the reconstruction of the reconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.
The attacker's goal is to convince the sender to send an image of the attacker's choosing to the receiver, but the attacker has no direct control over the bytes sent between the two parties. However, the attacker has a copy of the shared VAE. The attacker presents an image x* to the sender which resembles an image x that the sender wants to share with the receiver. For example, the sender wants to share pictures of kittens with the receiver, so the attacker presents a web page to the sender with a picture of a kitten, which is x*. The sender chooses x* and sends its corresponding z to the receiver, who reconstructs it. However, because the attacker controlled the chosen image, when the receiver reconstructs it, instead of getting a faithful reproduction x̂ of x (e.g., a kitten), the receiver sees some other image of the attacker's choosing, x̂_adv, which has a different meaning from x (e.g., a request to send money to the attacker's bank account).
There are other attacks of this general form, where the sender and the receiver may be separated by distance, as in this example, or by time, in the case of storing compressed images to disk for later retrieval. In the time-separated attack, the sender and the receiver may be the same person or multiple different people. In either case, if they are using the insecure channel of the VAE's latent space, the messages they share may be under the control of an attacker. For example, an attacker may be able to fool an automatic surveillance system if the system uses this type of compression to store the video signal before it is processed by other systems. In this case, the subsequent analysis of the video signal could be on compromised data showing what the attacker wants to show.
While we do not specifically attack their models, viable compression schemes based on deep neural networks have already been proposed in the literature, showing promising results (Toderici et al., 2015; 2016).

3.2 DEFINING ADVERSARIAL EXAMPLES AGAINST GENERATIVE MODELS
We make the following assumptions about generating adversarial examples on a target generative model, G_targ(x) = f_dec(f_enc(x)). G_targ is trained on inputs X that can naturally be labeled with semantically meaningful classes Y, although there may be no such labels at training time, or the labels may not have been used during training. G_targ normally succeeds at generating an output x̂ = G_targ(x) in class y when presented with an input x from class y. In other words, whatever target output class the attacker is interested in, we assume that G_targ successfully captures it in the latent representation such that it can generate examples of that class from the decoder. This target output class does not need to be from the most salient classes in the training dataset. For example, on models trained on MNIST, the attacker may not care about generating different target digits (which are the most salient classes). The attacker may prefer to generate the same input digits in a different style (perhaps to aid forgery). We also assume that the attacker has access to G_targ. Finally, the attacker has access to a set of examples from the same distribution as X that have the target label y_t the attacker wants to generate. This does not mean that the attacker needs access to the labeled training dataset (which may not exist), or to an appropriate labeled dataset with large numbers of examples labeled for each class y ∈ Y (which may be hard or expensive to collect). The attacks described here may be successful with only a small amount of data labeled for a single target class of interest.
Figure 3: The VAE-GAN classifier architecture used to generate classifier-based adversarial examples on the VAE-GAN. The VAE-GAN in the dashed box is the target network and is frozen while training the classifier. The path x → f_enc → z → f_class → ŷ is used to generate adversarial examples in z, which can then be reconstructed by f_dec.
One way to generate such adversaries is by solving the optimization problem argmin_{x*} L(x, x*) s.t. ORACLE(G_targ(x*)) = y_t, where ORACLE reliably discriminates between inputs of class y_t and inputs of other classes. In practice, a classifier trained by the attacker may serve as ORACLE. Other types of adversaries from Section 2.1 can also be used to approximate this optimization in natural ways, some of which we describe in Section 4.
If the attacker only needs to generate one successful attack, the problem of determining if an attack is successful can be solved by manually reviewing the x* and x̂_adv pairs and choosing whichever the attacker considers best. However, if the attacker wants to generate many successful attacks, an automated method of evaluating the success of an attack is necessary.
We show in Section 4.5 how to measure the effectiveness of an attack automatically using a classifier trained on z = f_enc(x).

4 ATTACK METHODOLOGY
The attacker would like to construct an adversarially-perturbed input to influence the latent representation in a way that will cause the reconstruction process to reconstruct an output for a different class. We propose three approaches to attacking generative models: a classifier-based attack, where we train a new classifier on top of the latent space z and use that classifier to find adversarial examples in the latent space; an attack using L_VAE to target the output directly; and an attack on the latent space, z. All three methods are technically applicable to any generative architecture that relies on a learned latent representation z. Without loss of generality, we focus on the VAE-GAN architecture.

4.1 CLASSIFIER ATTACK
By adding a classifier f_class to the pre-trained generative model[5], we can turn the problem of generating adversaries for generative models back into the previously solved problem of generating adversarial examples for classifiers. This approach allows us to apply all of the existing attacks on classifiers in the literature. However, as discussed below, using this classifier tends to produce lower-quality reconstructions from the adversarial examples than the other two attacks due to the inaccuracies of the classifier.
Step 1. The weights of the target generative model are frozen, and a new classifier f_class(z) → ŷ is trained on top of f_enc using a standard classification loss L_classifier such as cross-entropy, as shown in Figure 3. This process is independent of how the original model is trained, but it requires a training corpus pulled from approximately the same input distribution as was used to train G_targ, with ground truth labels for at least two classes: y_t and y_~t, the negative class.
Step 2. With the trained classifier, the attacker finds adversarial examples x* using the methods described in Section 4.4.
[5] This is similar to the process of semi-supervised learning in Kingma et al. (2014), although the goal is different.
Using f_class to generate adversarial examples does not always result in high-quality reconstructions, as can be seen in the middle column of Figure 5 and in Figure 11. This appears to be due to the fact that f_class adds additional noise to the process. For example, f_class sometimes confidently misclassifies latent vectors z that represent inputs that are far from the training data distribution, resulting in f_dec failing to reconstruct a plausible output from the adversarial example.

4.2 L_VAE ATTACK
Our second approach generates adversarial perturbations using the VAE loss function. The attacker chooses two inputs, x_s (the source) and x_t (the target), and uses one of the standard adversarial methods to perturb x_s into x* such that its reconstruction x̂* matches the reconstruction of x_t, using the methods described in Section 4.4.
The adversary precomputes the reconstruction x̂_t by evaluating f_dec(f_enc(x_t)) once before performing optimization. In order to use L_VAE in an attack, the second term (the reconstruction loss) of L_VAE (see Equation 1) is changed so that instead of computing the reconstruction loss between x and x̂, the loss is computed between x̂* and x̂_t. This means that during each optimization iteration, the adversary needs to compute x̂*, which requires the full f_dec(f_enc(x*)) to be evaluated.
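A minimal PyTorch-style sketch of this L_VAE attack loop (the paper's implementation is in TensorFlow; `f_enc`, `f_dec`, and `recon_loss` are assumed stand-ins for the target model's encoder, decoder, and chosen reconstruction loss, with `f_enc` returning the mean latent vector):

```python
import torch

def lvae_attack(f_enc, f_dec, recon_loss, x_s, x_t, lam=1.0, steps=1000, lr=0.1):
    """Perturb x_s so that its reconstruction matches the reconstruction of x_t."""
    with torch.no_grad():
        x_t_hat = f_dec(f_enc(x_t))      # target reconstruction, precomputed once
    x_adv = x_s.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = f_dec(f_enc(x_adv))      # full reconstruction every iteration
        loss = torch.norm(x_adv - x_s) + lam * recon_loss(x_hat, x_t_hat)
        loss.backward()
        opt.step()
    return x_adv.detach()
```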
4.3 LATENT ATTACK
Our third approach attacks the latent space of the generative model.
Single latent vector target. This attack is similar to the work of Sabour et al. (2015), in which they use a pair of source image x_s and target image x_t to generate x* that induces the target network to produce similar activations at some hidden layer l as are produced by x_t, while maintaining similarity between x_s and x*.
For this attack to work on latent generative models, it is sufficient to compute z_t = f_enc(x_t) and then use the following loss function to generate adversarial examples from different source images x_s, using the methods described in Section 4.4:

    L_latent = L(z_t, f_enc(x*))    (2)

L(·) is a distance measure between two vectors. We use the L2 norm, under the assumption that the latent space is approximately Euclidean.
We also explored a variation on the single latent vector target attack, which we describe in Section A.1 in the Appendix.

4.4 METHODS FOR SOLVING THE ADVERSARIAL OPTIMIZATION PROBLEM
We can use a number of different methods to generate the adversarial examples. We initially evaluated both the fast gradient sign method (Goodfellow et al., 2014) and an L2 optimization method. As the latter produces much better results we focus on the L2 optimization method, while we include some FGS results in the Appendix. The attack can be used either in targeted mode (where we want a specific class, y_t, to be reconstructed) or untargeted mode (where we just want an incorrect class to be reconstructed). In this paper, we focus on the targeted mode of the attacks.
L2 optimization. The optimization-based approach, explored in Szegedy et al. (2013) and Carlini & Wagner (2016), poses the adversarial generation problem as the following optimization problem:

    argmin_{x*} L(x, x*) + λ·L_adv(x*, y_t)    (3)

As above, L(·) is a distance measure, and L_adv is one of L_classifier, L_VAE, or L_latent. The constant λ is used to balance the two loss contributions. For the L_VAE attack, the optimizer must do a full reconstruction at each step of the optimizer. The other two attacks do not need to do reconstructions while the optimizer is running, so they generate adversarial examples much more quickly, as shown in Table 1.
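A sketch of Equations 2 and 3 combined for the latent attack, under the same assumptions as the earlier sketch (and using the λ = 1.0, 1000 iterations, and 0.1 learning rate reported for MNIST below); note that no decoder pass is needed inside the loop, which is why this attack is fast:

```python
import torch

def latent_attack(f_enc, x_s, x_t, lam=1.0, steps=1000, lr=0.1):
    """L2 optimization latent attack: push f_enc(x*) toward z_t
    while keeping x* close to the source image x_s."""
    with torch.no_grad():
        z_t = f_enc(x_t)                 # target latent vector
    x_adv = x_s.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(x_adv - x_s) + lam * torch.norm(z_t - f_enc(x_adv))
        loss.backward()
        opt.step()
    return x_adv.detach()
```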
4.5 MEASURING ATTACK EFFECTIVENESS
To generate a large number of adversarial examples automatically against a generative model, the attacker needs a way to judge the quality of the adversarial examples. We leverage f_class to estimate whether a particular attack was successful.[6]
Reconstruction feedback loop. The architecture is the same as shown in Figure 3. We use the generative model to reconstruct the attempted adversarial inputs x* by computing:

    x̂* = f_dec(f_enc(x*))    (4)

Then, f_class is used to compute:

    ŷ = f_class(f_enc(x̂*))    (5)

The input adversarial examples x* are not classified directly, but are first fed to the generative model for reconstruction. This reconstruction loop improves the accuracy of the classifier by 60% on average against the adversarial attacks we examined. The predicted label ŷ after the reconstruction feedback loop is compared with the attack target y_t to determine if the adversarial example successfully reconstructed to the target class. If the precision and recall of f_class are sufficiently high on y_t, f_class can be used to filter out most of the failed adversarial examples while keeping most of the good ones.
We derive two metrics from classifier predictions after one reconstruction feedback loop. The first metric is AS_ignore-target, the attack success rate ignoring targeting, i.e., without requiring the output class of the adversarial example to match the target class:

    AS_ignore-target = (1/N) * sum_{i=1}^{N} 1[ŷ_i ≠ y_i]    (6)

N is the total number of reconstructed adversarial examples; 1[ŷ_i ≠ y_i] is 1 when ŷ_i, the classification of the reconstruction for image i, does not equal y_i, the ground truth classification of the original image, and 0 otherwise. The second metric is AS_target, the attack success rate including targeting (i.e., requiring the output class of the adversarial example to match the target class), which we define similarly as:

    AS_target = (1/N) * sum_{i=1}^{N} 1[ŷ_i = y_t]    (7)

Both metrics are expected to be higher for more successful attacks. Note that AS_target ≤ AS_ignore-target. When computing these metrics, we exclude input examples that have the same ground truth class as the target class.
[6] Note that f_class here is being used in a different manner than when we use it to generate adversarial examples. However, the network itself is identical, so we don't distinguish between the two uses in the notation.
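A sketch of the feedback loop and the two metrics (Equations 4-7); the model functions are assumed stand-ins as before, with `f_class` returning predicted labels:

```python
import numpy as np

def attack_success_rates(f_enc, f_dec, f_class, x_adv, y_true, y_target):
    """Reconstruction feedback loop (Eqs. 4-5) followed by the two
    attack success rates (Eqs. 6-7). Examples whose ground truth class
    already equals the target class are excluded, as in the paper."""
    y_hat = f_class(f_enc(f_dec(f_enc(x_adv))))   # Eqs. 4 and 5
    y_hat, y_true = np.asarray(y_hat), np.asarray(y_true)
    keep = y_true != y_target                     # exclusion rule
    as_ignore = np.mean(y_hat[keep] != y_true[keep])   # Eq. 6
    as_target = np.mean(y_hat[keep] == y_target)       # Eq. 7
    return as_ignore, as_target
```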
5 EVALUATION
We evaluate the three attacks on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011) and CelebA (Liu et al., 2015), using the standard training and validation set splits. The VAE and VAE-GAN architectures are implemented in TensorFlow (Abadi et al., 2015). We optimized using Adam with learning rate 0.001 and other parameters set to default values for both the generative model and the classifier. For the VAE, we use two architectures: a simple architecture with a single fully-connected hidden layer with 512 units and ReLU activation function; and a convolutional architecture taken from the original VAE-GAN paper (Larsen et al., 2015) (but trained with only the VAE loss). We use the same architecture trained with the additional GAN loss for the VAE-GAN model, as described in that work. For both VAE and VAE-GAN we use a 50-dimensional latent representation on MNIST, a 1024-dimensional latent representation on SVHN and a 2048-dimensional latent representation on CelebA.
Figure 4: Results for the L2 optimization latent attack on the VAE-GAN, targeting the mean latent vector for 0. Shown are the first 12 non-zero images from the test MNIST data set. The columns are, in order: the original image, the reconstruction of the original image, the adversarial example, the predicted class of the adversarial example, the reconstruction of the adversarial example, the predicted class of the reconstructed adversarial example, the reconstruction of the reconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.
In this section we only show results where no sampling from latent space has been performed. Instead we use the mean vector as the latent representation z. As sampling can have an effect on the resulting reconstructions, we evaluated it separately. We show the results with different numbers of samples in Figure 22 in the Appendix. On most examples, the visible change is small and in general the attack is still successful.

5.1 MNIST
Both VAE and VAE-GAN by themselves reconstruct the original inputs well, as shown in Figure 9, although the quality from the VAE-GAN is noticeably better. As a control, we also generate random noise of the same magnitude as used for the adversarial examples (see Figure 13), to show that random noise does not cause the reconstructed noisy images to change in any significant way. Although we ran experiments on both VAEs and VAE-GANs, we only show results for the VAE-GAN as it generates much higher quality reconstructions than the corresponding VAE.

5.1.1 CLASSIFIER ATTACK
We use a simple classifier architecture to help generate attacks on the VAE and VAE-GAN models. The classifier consists of two fully-connected hidden layers with 512 units each, using the ReLU activation function. The output layer is a 10-dimensional softmax. The input to the classifier is the 50-dimensional latent representation produced by the VAE/VAE-GAN encoder. The classifier achieves 98.05% accuracy on the validation set after training for 100 epochs.
To see if there are differences between classes, we generate targeted adversarial examples for each MNIST class and present the results per-class. For the targeted attacks we used the optimization method with lambda 0.001, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean L2 norm of the difference between original images and generated adversarial examples using the classifier attack is 3.36, while the mean RMSD is 0.120.
Numerical results in Table 2 show that the targeted classifier attack successfully fools the classifier. Classifier accuracy is reduced to 0%, while the matching rate (the ratio between the number of predictions matching the target class and the number of incorrectly classified images) is 100%, which means that all incorrect predictions match the target class. However, what we are interested in (as per the attack definition from Section 3.2) is how the generative model reconstructs the adversarial examples. If we look at the images generated by the VAE-GAN for class 0, shown in Figure 4, the targeted attack is successful on some reconstructed images (e.g. one, four, five, six and nine are reconstructed as zeroes). But even when the classifier accuracy is 0% and the matching rate is 100%, an incorrect classification does not always result in a reconstruction to the target class, which shows that the classifier is fooled by an adversarial example more easily than the generative model.
Reconstruction feedback loop. The reconstruction feedback loop described in Section 4.5 can be used to measure how well a targeted attack succeeds in making the generative model change the reconstructed classes. Table 4 in the Appendix shows AS_ignore-target and AS_target for all source and target class pairs. A higher value signifies a more successful attack for that pair of classes. It is interesting to observe that attacking some source/target pairs is much easier than others (e.g. pair (4, 0) vs. (0, 8)) and that the results are not symmetric over source/target pairs.
Figure 5: Left: representative adversarial examples with a target class of 0 on the first 100 non-zero images from the MNIST validation set. These were produced using the L2 optimization latent attack (Section 4.3). Middle: VAE-GAN reconstructions from adversarial examples produced using the L2 optimization classifier attack on the same set of 100 validation images (those adversaries are not shown, but are qualitatively similar, see Section 4.1). Right: VAE-GAN reconstructions from the adversarial examples in the left column. Many of the classifier adversarial examples fail to reconstruct as zeros, whereas almost every adversarial example from the latent attack reconstructs as zero.
Also, some pairs do well in AS_ignore-target, but do poorly in AS_target (e.g., all source digits when targeting 4). As can be seen in Figure 11, the classifier adversarial examples targeting 4 consistently fail to reconstruct into something easily recognizable as a 4. Most of the reconstructions look like 5, but the adversarial example reconstructions of source 5s instead look like 0 or 3.

5.1.2 L_VAE ATTACK
For generating adversarial examples using the L_VAE attack, we used the optimization method with λ = 1.0, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean L2 norm of the difference between original images and generated adversarial examples with this approach is 3.68, while the mean RMSD is 0.131.
We show AS_ignore-target and AS_target of the L_VAE attack in Table 5 in the Appendix. Comparing with the numerical evaluation results of the latent attack (below), we can see that both methods achieve similar results on MNIST.

5.1.3 LATENT ATTACK
To generate adversarial examples using the latent attack, we used the optimization method with λ = 1.0, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean L2 norm of the difference between original images and generated adversarial examples using this approach is 2.96, while the mean RMSD is 0.105.
Table 3 shows AS_ignore-target and AS_target for all source and target class pairs. Comparing with the numerical evaluation results of the classifier attack we can see that the latent attack performs much better. This result remains true when visually comparing the reconstructed images, shown in Figure 5.
We also tried an untargeted version of the latent attack, where we change Equation 2 to maximize the distance in latent space between the encoding of the original image and the encoding of the adversarial example. In this case the loss we are trying to minimize is unbounded, since the L2 distance can always grow larger, so the attack normally fails to generate a reasonable adversarial example.
Figure 6: Left: VAE-GAN reconstructions of adversarial examples generated using the L2 optimization L_VAE attack (single image target). Right: VAE-GAN reconstructions of adversarial examples generated using the L2 optimization latent attack (single image target). Approximately 85 out of 100 images are convincing zeros for the L2 latent attack, whereas only about 5 out of 100 could be mistaken for zeros with the L_VAE attack.
Additionally, we also experimented with targeting latent representations of specific images from the training set instead of taking the mean, as described in Section 4.3. We show the numerical results in Table 3 and the generated reconstructions in Figure 15 (in the Appendix). It is also interesting to compare the results with L_VAE, by choosing the same image as the target. Results for L_VAE for the same target images as in Table 3 are shown in Table 6 in the Appendix. The results are identical between the two attacks, which is expected as the target image is the same – only the loss function differs between the methods.

5.2 SVHN
The SVHN dataset consists of cropped street number images and is much less clean than MNIST. Due to the way the images have been processed, each image may contain more than one digit; the target digit is roughly in the center.
VAE-GAN produces high-quality reconstructions of the original images as shown in Figure 17 in the Appendix.
For the classifier attack, we set λ = 10^-5 after testing a range of values, although we were unable to find an effective value for this attack against SVHN. For the latent and L_VAE attacks we set λ = 10.
In Table 10 we show AS_ignore-target and AS_target for the L2 optimization latent attack. The evaluation metrics are less strong on SVHN than on MNIST, but it is still straightforward for an attacker to find a successful attack for almost all source/target pairs. Figure 2 supports this evaluation. Visual inspection shows that 11 out of the 12 adversarial examples reconstructed as 0, the target digit. It is worth noting that 2 out of the 12 adversarial examples look like zeros (rows 1 and 11), and two others look like both the original digit and zero, depending on whether the viewer focuses on the light or dark areas of the image (rows 4 and 7). The L2 optimization latent attack achieves much better results than the L_VAE attack (see Table 11 and Figure 6) on SVHN, while both attacks work equally well on MNIST.

5.3 CELEBA
The CelebA dataset consists of more than 200,000 cropped faces of celebrities, each annotated with 40 different attributes. For our experiments, we further scale the images to 64x64 and ignore the attribute annotations. VAE-GAN reconstructions of original images after training are shown in Figure 19 in the Appendix.
Since faces don't have natural classes, we only evaluated the latent and L_VAE attacks. We tried lambdas ranging from 0.1 to 0.75 for both attacks. Figure 20 shows adversarial examples generated using the latent attack and a lambda value of 0.5 (L2 norm between original images and generated adversarial examples 9.78, RMSD 0.088) and the corresponding VAE-GAN reconstructions. Most of the reconstructions reflect the target image very well. We get even better results with the L_VAE attack, using a lambda value of 0.75 (L2 norm between original images and generated adversarial examples 8.98, RMSD 0.081) as shown in Figure 21.
Figure 7: Summary of different attacks on CelebA dataset: reconstructions of original images (top), reconstructions of adversarial examples generated using the latent attack (middle) and L_VAE attack (bottom). Target reconstruction is shown on the right. Full results are in the Appendix.

5.4 SUMMARY OF DIFFERENT ATTACK METHODS

                                       MNIST                            SVHN
Method                                 Mean L2   Mean RMSD   Time       Mean L2   Mean RMSD   Time
L2 Optimization Classifier Attack      3.36      0.120       277        1.77      0.032       274
L2 Optimization L_VAE Attack           3.68      0.131       734        2.36      0.043       895
L2 Optimization Latent Attack          2.96      0.105       236        2.80      0.051       242

Table 1: Comparison of mean L2 norm and RMSD between the original images and the generated adversarial examples for the different attacks. Time to attack is the mean number of seconds it takes to generate 1000 adversarial examples using the given attack method (with the same number of optimization iterations for each attack).

Table 1 shows a comparison of the mean distances between original images and generated adversarial examples for the three different attack methods. The larger the distance between the original image and the adversarial perturbation, the more noticeable the perturbation will tend to be, and the more likely a human observer will no longer recognize the original input, so effective attacks keep these distances small while still achieving their goal.
The latent attack consistently gives the best results in our experiments, and the classifier attack performs the worst.
We also measure the time it takes to generate 1000 adversarial examples using the given attack method. The L_VAE attack is by far the slowest of the three, due to the fact that it requires computing full reconstructions at each step of the optimizer when generating the adversarial examples. The other two attacks do not need to run the reconstruction step during optimization of the adversarial examples.

6 CONCLUSION
We explored generating adversarial examples against generative models such as VAEs and VAE-GANs. These models are also vulnerable to adversaries that convince them to turn inputs into surprisingly different outputs. We have also motivated why an attacker might want to attack generative models. Our work adds further support to the hypothesis that adversarial examples are a general phenomenon for current neural network architectures, given our successful application of adversarial attacks to popular generative models. In this work, we are helping to lay the foundations for understanding how to build more robust networks. Future work will explore defense and robustification in greater depth as well as attacks on generative models trained using natural image datasets such as CIFAR-10 and ImageNet.

ACKNOWLEDGMENTS
This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
S1omxn-Vg
Final review
5: Marginally below acceptance threshold
After the rebuttal: The paper contains an interesting set of results (mainly produced after the initial submission), but novelty is limited, and presentation is suboptimal. For me the biggest problem now is that the title and the content do not correspond. The authors clearly attack deterministic encoder-decoder models (as described in 3.2), which are not at all the same as generative models, even though many generative models make use of this architecture. A small experiment with sampling is interesting, but does not change the overall focus of the paper. This inconsistency is not acceptable. The whole issue could be resolved, for example, by simply replacing "generative models" with "encoder-decoder networks" in the title. Then I would tend towards recommending acceptance.

------

Initial review: The paper describes three approaches to generating adversarial examples for deep encoder-decoder generative networks (trained as VAE or VAE-GAN), and shows a comparative analysis of these. While the phenomenon of adversarial examples in discriminative models is widely known and relatively well studied, I am not aware of previous work on adversarial examples for generative networks, so this work is novel (there is a concurrent work by Tabacof et al. which should be cited, though). The paper has significantly improved since the initial submission; still, I have a number of remarks on presentation and experimental evaluation. I am in the borderline mode, and may change my rating during the discussion phase.

Detailed comments:
1) The paper is 13 pages long - significantly over the recommended page limit of 8 pages. Reviewers have to read multiple papers, multiple versions of each; it is a lot of work. Large portions of the paper should be shortened and/or moved to the appendix. It is the job of the authors to make the paper concise and readable. "in our attempts to be thorough, we have had a hard time keeping the length down" is a bad excuse - it may be hard, but has to be done.
2) I intentionally avoided the term "generative model" above because it is not obvious to me if the attacks described by the authors indeed attack generative models. To clarify, the authors train encoder-decoders as generative models (VAE or VAE-GAN), but then remove all stochasticity (sampling) and the prior on the latent variables: that is, they treat the models as deterministic encoder-decoders. It is not a big surprise that a deterministic deep network can be easily tricked; it would be much more interesting to see if the probabilistic aspect of generative models makes them more robust to such attacks. Am I missing something? I would like the authors to clarify their view on this and adjust the claims in the paper if necessary.
3) The paper is motivated by possible attacks on a data channel which uses a generative network for compressing information. The description of the attack scenario in 3.1 does not look convincing to me. It takes a huge amount of space and I do not think it adds much to the paper. First, experiments on natural images are necessary to judge if the proposed attack could succeed in a realistic scenario, and second, I am not aware of any existing practical applications of VAEs to image compression: attacking JPEG would be much more practical.
4) Experiments are limited to MNIST and, in the latest version, SVHN (which is very nice). While no good generative models of general natural images exist, it is common to evaluate generative models on datasets of faces, so this would be another very natural domain for testing the proposed approach.

Smaller remarks:
1) Usage of "Oracle" in 3.2 does not look appropriate - an oracle typically has access to (part of) the ground truth, which is not the case here as far as I understand.
2) Beginning of section 4: "All three methods work for any generative architecture that relies on a learned latent representation z" - "are technically applicable to" would be more correct than "work for".
3) 4.1: "confidentally"
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJk01vogl
ICLR.cc/2017/conference
2017
Adversarial examples for generative models
["Jernej Kos", "Ian Fischer", "Dawn Song"]
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
["Computer vision", "Unsupervised Learning"]
ABSTRACTWe explore methods of producing adversarial examples on deep generative mod-els such as the variational autoencoder (V AE) and the V AE-GAN. Deep learningarchitectures are known to be vulnerable to adversarial examples, but previouswork has focused on the application of adversarial examples to classification tasks.Deep generative models have recently become popular due to their ability to modelinput data distributions and generate realistic examples from those distributions.We present three classes of attacks on the V AE and V AE-GAN architectures anddemonstrate them against networks trained on MNIST, SVHN and CelebA. Ourfirst attack leverages classification-based adversaries by attaching a classifier tothe trained encoder of the target generative model, which can then be used to in-directly manipulate the latent representation. Our second attack directly uses theV AE loss function to generate a target reconstruction image from the adversarialexample. Our third attack moves beyond relying on classification or the standardloss for the gradient and directly optimizes against differences in source and tar-get latent representations. We also motivate why an attacker might be interestedin deploying such techniques against a target generative network.1 I NTRODUCTIONAdversarial examples have been shown to exist for a variety of deep learning architectures.1Theyare small perturbations of the original inputs, often barely visible to a human observer, but carefullycrafted to misguide the network into producing incorrect outputs. Seminal work by Szegedy et al.(2013) and Goodfellow et al. (2014), as well as much recent work, has shown that adversarialexamples are abundant and finding them is easy.Most previous work focuses on the application of adversarial examples to the task of classification,where the deep network assigns classes to input images. The attack adds small adversarial perturba-tions to the original input image. These perturbations cause the network to change its classificationof the input, from the correct class to some other incorrect class (possibly chosen by the attacker).Critically, the perturbed input must still be recognizable to a human observer as belonging to theoriginal input class.2Deep generative models, such as Kingma & Welling (2013), learn to generate a variety of outputs,ranging from handwritten digits to faces (Kulkarni et al., 2015), realistic scenes (Oord et al., 2016),videos (Kalchbrenner et al., 2016), 3D objects (Dosovitskiy et al., 2016), and audio (van den Oordet al., 2016). These models learn an approximation of the input data distribution in different ways,and then sample from this distribution to generate previously unseen but plausible outputs.To the best of our knowledge, no prior work has explored using adversarial inputs to attack gen-erative models. There are two main requirements for such work: describing a plausible scenarioin which an attacker might want to attack a generative model; and designing and demonstrating anattack that succeeds against generative models. We address both of these requirements in this work.One of the most basic applications of generative models is input reconstruction. Given an input im-age, the model first encodes it into a lower-dimensional latent representation, and then uses that rep-resentation to generate a reconstruction of the original input image. Since the latent representation1Adversarial examples are even easier to produce against most other machine learning architectures, asshown in Papernot et al. 
(2016), but we are focused on deep networks.2Random noise images and “fooling” images (Nguyen et al., 2014) do not belong to this strict definition ofan adversarial input, although they do highlight other limitations of current classifiers.1Under review as a conference paper at ICLR 2017usually has much fewer dimensions than the original input, it can be used as a form of compression.The latent representation can also be used to remove some types of noise from inputs, even when thenetwork has not been explicitly trained for denoising, due to the lower dimensionality of the latentrepresentation restricting what information the trained network is able to represent. Many genera-tive models also allow manipulation of the generated output by sampling different latent values ormodifying individual dimensions of the latent vectors without needing to pass through the encodingstep.These properties of input reconstruction generative networks suggest a variety of different attacksthat would be enabled by effective adversaries against generative networks. Any attack that targetsthe compression bottleneck of the latent representation can exploit natural security vulnerabilities inapplications built to use that latent representation. Specifically, if the person doing the encoding stepis separated from the person doing the decoding step, the attacker may be able to cause the encodingparty to believe they have encoded a particular message for the decoding party, but in reality theyhave encoded a different message of the attacker’s choosing. We explore this idea in more detail asit applies to the application of compressing images using a V AE or V AE-GAN architecture.2 R ELATED WORK AND BACKGROUNDThis work focuses on adversaries for variational autoencoders (V AEs, proposed in Kingma &Welling (2013)) and V AE-GANs (V AEs composed with a generative adversarial network, proposedin Larsen et al. (2015)).2.1 R ELATED WORK ON ADVERSARIESMany adversarial attacks on classification models have been described in existing literature (Good-fellow et al., 2014; Szegedy et al., 2013). These attacks can be untargeted, where the adversary’sgoal is to cause any misclassification, or the least likely misclassification (Goodfellow et al., 2014;Kurakin et al., 2016); or they can be targeted, where the attacker desires a specific misclassification.Moosavi-Dezfooli et al. (2016) gives a recent example of a strong targeted adversarial attack. Someadversarial attacks allow for a threat model where the adversary does not have access to the targetmodel (Szegedy et al., 2013; Papernot et al., 2016), but commonly it is assumed that the attackerdoes have that access, in an online or offline setting (Goodfellow et al., 2014; Kurakin et al., 2016).3Given a classifier f(x) : x2 X !y2 Y and original inputs x2 X , the problemof generating untargeted adversarial examples can be expressed as the following optimization:argminxL(x;x)s:t: f (x)6=f(x), whereL()is a chosen distance measure between exam-ples from the input space (e.g., the L2norm). Similarly, generating a targeted adversarial attack ona classifier can be expressed as argminxL(x;x)s:t:f (x) =yt, whereyt2Y is some targetlabel chosen by the attacker.These optimization problems can often be solved with optimizers like L-BFGS or Adam (Kingma& Ba, 2015), as done in Szegedy et al. (2013) and Carlini & Wagner (2016). 
They can also beapproximated with single-step gradient-based techniques like fast gradient sign (Goodfellow et al.,2014), fast gradient L2(Huang et al., 2015), or fast least likely class (Kurakin et al., 2016); or theycan be approximated with iterative variants of those and other gradient-based techniques (Kurakinet al., 2016; Moosavi-Dezfooli et al., 2016).An interesting variation of this type of attack can be found in Sabour et al. (2015). In that work,they attack the hidden state of the target network directly by taking an input image xand a targetimage xtand searching for a perturbed variant of xthat generates similar hidden state at layer lofthe target network to the hidden state at the same layer generated by xt. This approach can also beapplied directly to attacking the latent vector of a generative model.A variant of this attack has also been applied to V AE models in the concurrent work of Tabacofet al. (2016)4, which uses the KL divergence between the latent representation of the source andtarget images to generate the adversarial example. However in their paper, the authors mention thatthey tried attacking the output directly and that this only managed to make the reconstructions more3See Papernot et al. (2015) for an overview of different adversarial threat models.4This work was made public shortly after we published our early drafts.2Under review as a conference paper at ICLR 2017ReceiverzSender Attackerfenc fdecFigure 1: Depiction of the attack scenario. The V AE is used as a compression scheme to transmita latent representation of the image from the sender (left) to the receiver (right). The attacker con-vinces the sender to compress a particular image into its latent vector, which is sent to the receiver,where the decoder reconstructs the latent vector into some other image chosen by the attacker.blurry. While they do not explain the exact experimental setting, the attack sounds similar to ourLVAE attack, which we find very successful. Also, in their paper the authors do not consider themore advanced V AE-GAN models and more complex datasets like CelebA.2.2 B ACKGROUND ON VAE S AND VAE-GAN SThe general architecture of a variational autoencoder consists of three components, as shown in Fig-ure 8. The encoderfenc(x)is a neural network mapping a high-dimensional input representationxinto a lower-dimensional (compressed) latent representation z. All possible values of zform alatent space. Similar values in the latent space should produce similar outputs from the decoder ina well-trained V AE. And finally, the decoder/generator fdec(z), which is a neural network map-ping the compressed latent representation back to a high-dimensional output ^x. Composing thesenetworks allows basic input reconstruction ^x=fdec(fenc(x)). This composed architecture is usedduring training to backpropagate errors from the loss function.The variational autoencoder’s loss function LVAE enables the network to learn a latent representationthat approximates the intractable posterior distribution p(zjx):LVAE=DKL[q(zjx)jjp(z)] +Eq[logp(xjz)]: (1)q(zjx)is the learned approximation of the posterior distribution p(zjx).p(z)is the prior distributionof the latent representation z.DKLdenotes the Kullback–Leibler divergence. Eq[logp(xjz)]isthe variational lower bound, which in the case of input reconstruction is the cross-entropy H[x;^x]between the inputs xand their reconstructions ^x. 
In order to generate $\hat{x}$, the VAE needs to sample $q(z|x)$ and then compute $f_{dec}(z)$.

For the VAE to be fully differentiable while sampling from $q(z|x)$, the reparametrization trick (Kingma & Welling, 2013) extracts the random sampling step from the network and turns it into an input, $\varepsilon$. VAEs are often parameterized with Gaussian distributions. In this case, $f_{enc}(x)$ outputs the distribution parameters $\mu$ and $\sigma^2$. That distribution is then sampled by computing $z = \mu + \varepsilon\sqrt{\sigma^2}$, where $\varepsilon \sim \mathcal{N}(0, 1)$ is the input random sample, which does not depend on any parameters of $f_{enc}$ and thus does not impact differentiation of the network.

The VAE-GAN architecture of Larsen et al. (2015) has the same $f_{enc}$ and $f_{dec}$ pair as the VAE. It also adds a discriminator $f_{disc}$ that is used during training, as in standard generative adversarial networks (Goodfellow et al., 2014). The loss function of $f_{dec}$ uses the discriminator loss instead of cross-entropy for estimating the reconstruction error.

3 PROBLEM DEFINITION

We provide a motivating attack scenario for adversaries against generative models, as well as a formal definition of an adversary in the generative setting.

3.1 MOTIVATING ATTACK SCENARIO

To motivate the attacks presented below, we describe the attack scenario depicted in Figure 1. In this scenario, there are two parties, the sender and the receiver, who wish to share images with each other over a computer network. In order to conserve bandwidth, they share a VAE trained on the input distribution of interest, which allows them to send only latent vectors $z$.

Figure 2: Results for the $L_2$ optimization latent attack (see Section 4.3) on the VAE-GAN, targeting a specific image from the class 0. Shown are the first 12 non-zero images from the SVHN test data set. The columns are, in order: the original image, the reconstruction of the original image, the adversarial example, the predicted class of the adversarial example, the reconstruction of the adversarial example, the predicted class of the reconstructed adversarial example, the reconstruction of the reconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.

The attacker's goal is to convince the sender to send an image of the attacker's choosing to the receiver, but the attacker has no direct control over the bytes sent between the two parties. However, the attacker has a copy of the shared VAE. The attacker presents an image $x^*$ to the sender which resembles an image $x$ that the sender wants to share with the receiver. For example, the sender wants to share pictures of kittens with the receiver, so the attacker presents a web page to the sender with a picture of a kitten, which is $x^*$. The sender chooses $x^*$ and sends its corresponding $z^*$ to the receiver, who reconstructs it. However, because the attacker controlled the chosen image, when the receiver reconstructs it, instead of getting a faithful reproduction $\hat{x}$ of $x$ (e.g., a kitten), the receiver sees some other image of the attacker's choosing, $\hat{x}_{adv}$, which has a different meaning from $x$ (e.g., a request to send money to the attacker's bank account).

There are other attacks of this general form, where the sender and the receiver may be separated by distance, as in this example, or by time, in the case of storing compressed images to disk for later retrieval. In the time-separated attack, the sender and the receiver may be the same person or multiple different people.
In either case, if they are using the insecure channel of the VAE's latent space, the messages they share may be under the control of an attacker. For example, an attacker may be able to fool an automatic surveillance system if the system uses this type of compression to store the video signal before it is processed by other systems. In this case, the subsequent analysis of the video signal could be on compromised data showing what the attacker wants to show.

While we do not specifically attack their models, viable compression schemes based on deep neural networks have already been proposed in the literature, showing promising results (Toderici et al., 2015; 2016).

3.2 DEFINING ADVERSARIAL EXAMPLES AGAINST GENERATIVE MODELS

We make the following assumptions about generating adversarial examples on a target generative model $G_{targ}(x) = f_{dec}(f_{enc}(x))$. $G_{targ}$ is trained on inputs $X$ that can naturally be labeled with semantically meaningful classes $Y$, although there may be no such labels at training time, or the labels may not have been used during training. $G_{targ}$ normally succeeds at generating an output $\hat{x} = G_{targ}(x)$ in class $y$ when presented with an input $x$ from class $y$. In other words, whatever target output class the attacker is interested in, we assume that $G_{targ}$ successfully captures it in the latent representation such that it can generate examples of that class from the decoder. This target output class does not need to be among the most salient classes in the training dataset. For example, on models trained on MNIST, the attacker may not care about generating different target digits (which are the most salient classes); the attacker may prefer to generate the same input digits in a different style (perhaps to aid forgery). We also assume that the attacker has access to $G_{targ}$. Finally, the attacker has access to a set of examples from the same distribution as $X$ that have the target label $y_t$ the attacker wants to generate. This does not mean that the attacker needs access to the labeled training dataset (which may not exist), or to an appropriate labeled dataset with large numbers of examples labeled for each class $y \in Y$ (which may be hard or expensive to collect). The attacks described here may be successful with only a small amount of data labeled for a single target class of interest.

Figure 3: The VAE-GAN classifier architecture used to generate classifier-based adversarial examples on the VAE-GAN. The VAE-GAN in the dashed box is the target network and is frozen while training the classifier. The path $x \rightarrow f_{enc} \rightarrow z \rightarrow f_{class} \rightarrow \hat{y}$ is used to generate adversarial examples in $z$, which can then be reconstructed by $f_{dec}$.

One way to generate such adversaries is by solving the optimization problem $\arg\min_{x^*} L(x, x^*)$ s.t. $\text{ORACLE}(G_{targ}(x^*)) = y_t$, where ORACLE reliably discriminates between inputs of class $y_t$ and inputs of other classes. In practice, a classifier trained by the attacker may serve as ORACLE. Other types of adversaries from Section 2.1 can also be used to approximate this optimization in natural ways, some of which we describe in Section 4.

If the attacker only needs to generate one successful attack, the problem of determining whether an attack is successful can be solved by manually reviewing the $x^*$ and $\hat{x}_{adv}$ pairs and choosing whichever the attacker considers best. However, if the attacker wants to generate many successful attacks, an automated method of evaluating the success of an attack is necessary.
We show in Section 4.5 how to measure the effectiveness of an attack automatically using a classifier trained on $z = f_{enc}(x)$.

4 ATTACK METHODOLOGY

The attacker would like to construct an adversarially-perturbed input that influences the latent representation in a way that causes the reconstruction process to reconstruct an output for a different class. We propose three approaches to attacking generative models: a classifier-based attack, where we train a new classifier on top of the latent space $z$ and use that classifier to find adversarial examples in the latent space; an attack using $\mathcal{L}_{VAE}$ to target the output directly; and an attack on the latent space $z$ itself. All three methods are technically applicable to any generative architecture that relies on a learned latent representation $z$. Without loss of generality, we focus on the VAE-GAN architecture.

4.1 CLASSIFIER ATTACK

By adding a classifier $f_{class}$ to the pre-trained generative model (similar to the process of semi-supervised learning in Kingma et al. (2014), although the goal is different), we can turn the problem of generating adversaries for generative models back into the previously solved problem of generating adversarial examples for classifiers. This approach allows us to apply all of the existing attacks on classifiers in the literature. However, as discussed below, using this classifier tends to produce lower-quality reconstructions from the adversarial examples than the other two attacks, due to the inaccuracies of the classifier.

Step 1. The weights of the target generative model are frozen, and a new classifier $f_{class}(z) \rightarrow \hat{y}$ is trained on top of $f_{enc}$ using a standard classification loss $\mathcal{L}_{classifier}$ such as cross-entropy, as shown in Figure 3. This process is independent of how the original model is trained, but it requires a training corpus pulled from approximately the same input distribution as was used to train $G_{targ}$, with ground truth labels for at least two classes: $y_t$ and $y_{\neg t}$, the negative class.

Step 2. With the trained classifier, the attacker finds adversarial examples $x^*$ using the methods described in Section 4.4.

Using $f_{class}$ to generate adversarial examples does not always result in high-quality reconstructions, as can be seen in the middle column of Figure 5 and in Figure 11. This appears to be due to the fact that $f_{class}$ adds additional noise to the process. For example, $f_{class}$ sometimes confidently misclassifies latent vectors $z$ that represent inputs far from the training data distribution, resulting in $f_{dec}$ failing to reconstruct a plausible output from the adversarial example.

4.2 $\mathcal{L}_{VAE}$ ATTACK

Our second approach generates adversarial perturbations using the VAE loss function. The attacker chooses two inputs, $x_s$ (the source) and $x_t$ (the target), and uses one of the standard adversarial methods to perturb $x_s$ into $x^*$ such that its reconstruction $\hat{x}^*$ matches the reconstruction of $x_t$, using the methods described in Section 4.4.

The adversary precomputes the reconstruction $\hat{x}_t$ by evaluating $f_{dec}(f_{enc}(x_t))$ once before performing the optimization. In order to use $\mathcal{L}_{VAE}$ in an attack, the second term (the reconstruction loss) of $\mathcal{L}_{VAE}$ (see Equation 1) is changed so that instead of computing the reconstruction loss between $x$ and $\hat{x}$, the loss is computed between $\hat{x}^*$ and $\hat{x}_t$.
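The following is a minimal NumPy sketch of this attack under toy stand-ins for the trained networks: a random linear encoder and a sigmoid linear decoder, with a squared-error stand-in for the reconstruction term. The real attack instead differentiates the actual VAE reconstruction loss through the target network, and all constants here are illustrative.

import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stand-ins for f_enc and f_dec (random linear maps).
E = rng.normal(size=(20, 64)) * 0.1   # encoder weights
D = rng.normal(size=(64, 20)) * 0.1   # decoder weights

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
reconstruct = lambda x: sigmoid(D @ (E @ x))

def lvae_attack(x_s, x_t, lam=1.0, lr=0.5, steps=500):
    """Perturb x_s so its reconstruction matches the precomputed
    reconstruction of x_t, while an L2 penalty keeps x* close to x_s."""
    xhat_t = reconstruct(x_t)   # computed once, before optimizing
    x_adv = x_s.copy()
    for _ in range(steps):
        u = D @ (E @ x_adv)
        xhat = sigmoid(u)       # a full reconstruction every iteration
        # Gradient of ||xhat - xhat_t||^2, chained through the decoder
        g_rec = E.T @ (D.T @ (2.0 * (xhat - xhat_t) * xhat * (1.0 - xhat)))
        g = 2.0 * lam * (x_adv - x_s) + g_rec
        x_adv = np.clip(x_adv - lr * g, 0.0, 1.0)
    return x_adv

x_s, x_t = rng.uniform(0, 1, 64), rng.uniform(0, 1, 64)
x_adv = lvae_attack(x_s, x_t)
print(np.abs(reconstruct(x_adv) - reconstruct(x_t)).mean())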
As the sketch above makes explicit, during each optimization iteration the adversary needs to compute $\hat{x}^*$, which requires the full $f_{dec}(f_{enc}(x^*))$ to be evaluated.

4.3 LATENT ATTACK

Our third approach attacks the latent space of the generative model.

Single latent vector target. This attack is similar to the work of Sabour et al. (2015), in which the authors use a pair of a source image $x_s$ and a target image $x_t$ to generate $x^*$ that induces the target network to produce activations at some hidden layer $l$ similar to those produced by $x_t$, while maintaining similarity between $x_s$ and $x^*$.

For this attack to work on latent generative models, it is sufficient to compute $z_t = f_{enc}(x_t)$ and then use the following loss function to generate adversarial examples from different source images $x_s$, using the methods described in Section 4.4:

$\mathcal{L}_{latent} = L(z_t, f_{enc}(x^*))$   (2)

$L(\cdot)$ is a distance measure between two vectors. We use the $L_2$ norm, under the assumption that the latent space is approximately Euclidean.

We also explored a variation on the single latent vector target attack, which we describe in Section A.1 in the Appendix.

4.4 METHODS FOR SOLVING THE ADVERSARIAL OPTIMIZATION PROBLEM

We can use a number of different methods to generate the adversarial examples. We initially evaluated both the fast gradient sign method (Goodfellow et al., 2014) and an $L_2$ optimization method. As the latter produces much better results, we focus on the $L_2$ optimization method, while we include some FGS results in the Appendix. The attack can be used either in targeted mode (where we want a specific class, $y_t$, to be reconstructed) or untargeted mode (where we just want an incorrect class to be reconstructed). In this paper, we focus on the targeted mode of the attacks.

$L_2$ optimization. The optimization-based approach, explored in Szegedy et al. (2013) and Carlini & Wagner (2016), poses the adversarial generation problem as the following optimization problem:

$\arg\min_{x^*} \lambda L(x, x^*) + \mathcal{L}(x^*, y_t)$   (3)

As above, $L(\cdot)$ is a distance measure, and $\mathcal{L}$ is one of $\mathcal{L}_{classifier}$, $\mathcal{L}_{VAE}$, or $\mathcal{L}_{latent}$. The constant $\lambda$ is used to balance the two loss contributions. For the $\mathcal{L}_{VAE}$ attack, the optimizer must do a full reconstruction at each step. The other two attacks do not need to do reconstructions while the optimizer is running, so they generate adversarial examples much more quickly, as shown in Table 1.

4.5 MEASURING ATTACK EFFECTIVENESS

To generate a large number of adversarial examples automatically against a generative model, the attacker needs a way to judge the quality of the adversarial examples. We leverage $f_{class}$ to estimate whether a particular attack was successful. (Note that $f_{class}$ is being used here in a different manner than when we use it to generate adversarial examples. However, the network itself is identical, so we do not distinguish between the two uses in the notation.)

Reconstruction feedback loop. The architecture is the same as shown in Figure 3. We use the generative model to reconstruct the attempted adversarial inputs $x^*$ by computing:

$\hat{x}^* = f_{dec}(f_{enc}(x^*))$   (4)

Then, $f_{class}$ is used to compute:

$\hat{y} = f_{class}(f_{enc}(\hat{x}^*))$   (5)

The input adversarial examples $x^*$ are not classified directly, but are first fed to the generative model for reconstruction. This reconstruction loop improves the accuracy of the classifier by 60% on average against the adversarial attacks we examined. The predicted label $\hat{y}$ after the reconstruction feedback loop is compared with the attack target $y_t$ to determine if the adversarial example successfully reconstructed to the target class.
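A minimal sketch of this feedback loop (Equations 4 and 5), again using hypothetical random linear stand-ins for the trained encoder, decoder, and latent classifier:

import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=(50, 784)) * 0.05   # stand-in encoder weights
D = rng.normal(size=(784, 50)) * 0.05   # stand-in decoder weights
C = rng.normal(size=(10, 50))           # stand-in latent classifier weights

f_enc = lambda x: E @ x
f_dec = lambda z: 1.0 / (1.0 + np.exp(-(D @ z)))
f_class = lambda z: int(np.argmax(C @ z))

def attack_succeeded(x_adv, y_t):
    """One reconstruction feedback loop: reconstruct the adversarial input
    (Eq. 4), then classify the latent code of that reconstruction (Eq. 5)."""
    x_hat = f_dec(f_enc(x_adv))
    y_hat = f_class(f_enc(x_hat))
    return y_hat == y_t

x_adv = rng.uniform(0.0, 1.0, size=784)
print(attack_succeeded(x_adv, y_t=0))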
If the precision and recall of $f_{class}$ are sufficiently high on $y_t$, $f_{class}$ can be used to filter out most of the failed adversarial examples while keeping most of the good ones.

We derive two metrics from classifier predictions after one reconstruction feedback loop. The first metric is $AS_{ignore\text{-}target}$, the attack success rate ignoring targeting, i.e., without requiring the output class of the adversarial example to match the target class:

$AS_{ignore\text{-}target} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}_{\hat{y}_i \neq y_i}$   (6)

$N$ is the total number of reconstructed adversarial examples; $\mathbf{1}_{\hat{y}_i \neq y_i}$ is 1 when $\hat{y}_i$, the classification of the reconstruction for image $i$, does not equal $y_i$, the ground truth classification of the original image, and 0 otherwise. The second metric is $AS_{target}$, the attack success rate including targeting (i.e., requiring the output class of the adversarial example to match the target class), which we define similarly:

$AS_{target} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}_{\hat{y}_i = y_t}$   (7)

Both metrics are expected to be higher for more successful attacks. Note that $AS_{target} \leq AS_{ignore\text{-}target}$. When computing these metrics, we exclude input examples that have the same ground truth class as the target class.
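Both rates are straightforward to compute over a batch of predictions; a minimal sketch (the label arrays are illustrative):

import numpy as np

def attack_success_rates(y_true, y_pred, y_t):
    """AS_ignore-target and AS_target (Eqs. 6 and 7) over reconstructed
    adversarial examples, excluding inputs whose ground-truth class is
    already the target class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    keep = y_true != y_t
    as_ignore = float(np.mean(y_pred[keep] != y_true[keep]))
    as_target = float(np.mean(y_pred[keep] == y_t))
    return as_ignore, as_target

# Four adversarial examples targeting class 0; three reconstruct as 0.
print(attack_success_rates([1, 2, 5, 3], [0, 0, 0, 3], y_t=0))  # (0.75, 0.75)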
5 EVALUATION

We evaluate the three attacks on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011) and CelebA (Liu et al., 2015), using the standard training and validation set splits. The VAE and VAE-GAN architectures are implemented in TensorFlow (Abadi et al., 2015). We optimized using Adam with learning rate 0.001 and other parameters set to default values, for both the generative model and the classifier. For the VAE, we use two architectures: a simple architecture with a single fully-connected hidden layer with 512 units and the ReLU activation function; and a convolutional architecture taken from the original VAE-GAN paper (Larsen et al., 2015), but trained with only the VAE loss. We use the same architecture trained with the additional GAN loss for the VAE-GAN model, as described in that work. For both VAE and VAE-GAN we use a 50-dimensional latent representation on MNIST, a 1024-dimensional latent representation on SVHN, and a 2048-dimensional latent representation on CelebA.

Figure 4: Results for the $L_2$ optimization latent attack on the VAE-GAN, targeting the mean latent vector for 0. Shown are the first 12 non-zero images from the MNIST test data set. The columns are, in order: the original image, the reconstruction of the original image, the adversarial example, the predicted class of the adversarial example, the reconstruction of the adversarial example, the predicted class of the reconstructed adversarial example, the reconstruction of the reconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.

In this section we only show results where no sampling from the latent space has been performed; instead we use the mean vector as the latent representation $z$. As sampling can have an effect on the resulting reconstructions, we evaluated it separately. We show the results with different numbers of samples in Figure 22 in the Appendix. On most examples, the visible change is small and in general the attack is still successful.

5.1 MNIST

Both the VAE and the VAE-GAN by themselves reconstruct the original inputs well, as shown in Figure 9, although the quality from the VAE-GAN is noticeably better. As a control, we also generate random noise of the same magnitude as used for the adversarial examples (see Figure 13), to show that random noise does not cause the reconstructed noisy images to change in any significant way. Although we ran experiments on both VAEs and VAE-GANs, we only show results for the VAE-GAN, as it generates much higher quality reconstructions than the corresponding VAE.

5.1.1 CLASSIFIER ATTACK

We use a simple classifier architecture to help generate attacks on the VAE and VAE-GAN models. The classifier consists of two fully-connected hidden layers with 512 units each, using the ReLU activation function. The output layer is a 10-dimensional softmax. The input to the classifier is the 50-dimensional latent representation produced by the VAE/VAE-GAN encoder. The classifier achieves 98.05% accuracy on the validation set after training for 100 epochs.

To see if there are differences between classes, we generate targeted adversarial examples for each MNIST class and present the results per class. For the targeted attacks we used the optimization method with $\lambda = 0.001$, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean $L_2$ norm of the difference between original images and generated adversarial examples using the classifier attack is 3.36, while the mean RMSD is 0.120.

Numerical results in Table 2 show that the targeted classifier attack successfully fools the classifier. Classifier accuracy is reduced to 0%, while the matching rate (the ratio between the number of predictions matching the target class and the number of incorrectly classified images) is 100%, which means that all incorrect predictions match the target class. However, what we are interested in (as per the attack definition of Section 3.2) is how the generative model reconstructs the adversarial examples. If we look at the images generated by the VAE-GAN for class 0, shown in Figure 4, the targeted attack is successful on some reconstructed images (e.g., one, four, five, six and nine are reconstructed as zeroes). But even when the classifier accuracy is 0% and the matching rate is 100%, an incorrect classification does not always result in a reconstruction to the target class, which shows that the classifier is fooled by an adversarial example more easily than the generative model.

Reconstruction feedback loop. The reconstruction feedback loop described in Section 4.5 can be used to measure how well a targeted attack succeeds in making the generative model change the reconstructed classes.

Figure 5: Left: representative adversarial examples with a target class of 0 on the first 100 non-zero images from the MNIST validation set. These were produced using the $L_2$ optimization latent attack (Section 4.3). Middle: VAE-GAN reconstructions from adversarial examples produced using the $L_2$ optimization classifier attack on the same set of 100 validation images (those adversaries are not shown, but are qualitatively similar; see Section 4.1). Right: VAE-GAN reconstructions from the adversarial examples in the left column. Many of the classifier adversarial examples fail to reconstruct as zeros, whereas almost every adversarial example from the latent attack reconstructs as zero.

Table 4 in the Appendix shows $AS_{ignore\text{-}target}$ and $AS_{target}$ for all source and target class pairs. A higher value signifies a more successful attack for that pair of classes. It is interesting to observe that attacking some source/target pairs is much easier than others
(e.g., the pair (4, 0) vs. the pair (0, 8)), and that the results are not symmetric over source/target pairs. Also, some pairs do well in $AS_{ignore\text{-}target}$ but poorly in $AS_{target}$ (e.g., all source digits when targeting 4). As can be seen in Figure 11, the classifier adversarial examples targeting 4 consistently fail to reconstruct into something easily recognizable as a 4. Most of the reconstructions look like a 5, but the adversarial example reconstructions of source 5s instead look like 0 or 3.

5.1.2 $\mathcal{L}_{VAE}$ ATTACK

For generating adversarial examples using the $\mathcal{L}_{VAE}$ attack, we used the optimization method with $\lambda = 1.0$, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean $L_2$ norm of the difference between original images and generated adversarial examples with this approach is 3.68, while the mean RMSD is 0.131.

We show $AS_{ignore\text{-}target}$ and $AS_{target}$ of the $\mathcal{L}_{VAE}$ attack in Table 5 in the Appendix. Comparing with the numerical evaluation results of the latent attack (below), we can see that both methods achieve similar results on MNIST.

5.1.3 LATENT ATTACK

To generate adversarial examples using the latent attack, we used the optimization method with $\lambda = 1.0$, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0.1. The mean $L_2$ norm of the difference between original images and generated adversarial examples using this approach is 2.96, while the mean RMSD is 0.105.

Table 3 shows $AS_{ignore\text{-}target}$ and $AS_{target}$ for all source and target class pairs. Comparing with the numerical evaluation results of the classifier attack, we can see that the latent attack performs much better. This result remains true when visually comparing the reconstructed images, shown in Figure 5.

We also tried an untargeted version of the latent attack, where we change Equation 2 to maximize the distance in latent space between the encoding of the original image and the encoding of the adversarial example. In this case the loss we are trying to minimize is unbounded, since the $L_2$ distance can always grow larger, so the attack normally fails to generate a reasonable adversarial example.

Figure 6: Left: VAE-GAN reconstructions of adversarial examples generated using the $L_2$ optimization $\mathcal{L}_{VAE}$ attack (single image target). Right: VAE-GAN reconstructions of adversarial examples generated using the $L_2$ optimization latent attack (single image target). Approximately 85 out of 100 images are convincing zeros for the $L_2$ latent attack, whereas only about 5 out of 100 could be mistaken for zeros with the $\mathcal{L}_{VAE}$ attack.

Additionally, we also experimented with targeting latent representations of specific images from the training set instead of taking the mean, as described in Section 4.3. We show the numerical results in Table 3 and the generated reconstructions in Figure 15 (in the Appendix). It is also interesting to compare the results with $\mathcal{L}_{VAE}$, by choosing the same image as the target. Results for $\mathcal{L}_{VAE}$ for the same target images as in Table 3 are shown in Table 6 in the Appendix. The results are identical between the two attacks, which is expected, as the target image is the same; only the loss function differs between the methods.

5.2 SVHN

The SVHN dataset consists of cropped street number images and is much less clean than MNIST. Due to the way the images have been processed, each image may contain more than one digit; the target digit is roughly in the center.
The VAE-GAN produces high-quality reconstructions of the original images, as shown in Figure 17 in the Appendix.

For the classifier attack, we set $\lambda = 10^{-5}$ after testing a range of values, although we were unable to find an effective value for this attack against SVHN. For the latent and $\mathcal{L}_{VAE}$ attacks we set $\lambda = 10$.

In Table 10 we show $AS_{ignore\text{-}target}$ and $AS_{target}$ for the $L_2$ optimization latent attack. The evaluation metrics are less strong on SVHN than on MNIST, but it is still straightforward for an attacker to find a successful attack for almost all source/target pairs. Figure 2 supports this evaluation. Visual inspection shows that 11 out of the 12 adversarial examples reconstructed as 0, the target digit. It is worth noting that 2 out of the 12 adversarial examples look like zeros (rows 1 and 11), and two others look like both the original digit and zero, depending on whether the viewer focuses on the light or dark areas of the image (rows 4 and 7). The $L_2$ optimization latent attack achieves much better results than the $\mathcal{L}_{VAE}$ attack (see Table 11 and Figure 6) on SVHN, while both attacks work equally well on MNIST.

5.3 CELEBA

The CelebA dataset consists of more than 200,000 cropped faces of celebrities, each annotated with 40 different attributes. For our experiments, we further scale the images to 64x64 and ignore the attribute annotations. VAE-GAN reconstructions of original images after training are shown in Figure 19 in the Appendix.

Since faces don't have natural classes, we only evaluated the latent and $\mathcal{L}_{VAE}$ attacks. We tried values of $\lambda$ ranging from 0.1 to 0.75 for both attacks. Figure 20 shows adversarial examples generated using the latent attack with a $\lambda$ value of 0.5 ($L_2$ norm between original images and generated adversarial examples 9.78, RMSD 0.088) and the corresponding VAE-GAN reconstructions. Most of the reconstructions reflect the target image very well. We get even better results with the $\mathcal{L}_{VAE}$ attack, using a $\lambda$ value of 0.75 ($L_2$ norm between original images and generated adversarial examples 8.98, RMSD 0.081), as shown in Figure 21.

Table 1: Comparison of mean $L_2$ norm and RMSD between the original images and the generated adversarial examples for the different attacks. Time to attack is the mean number of seconds it takes to generate 1000 adversarial examples using the given attack method (with the same number of optimization iterations for each attack).

                                       MNIST                              SVHN
Method                                 Mean L2   Mean RMSD   Time         Mean L2   Mean RMSD   Time
L2 Optimization Classifier Attack      3.36      0.120       277          1.77      0.032       274
L2 Optimization L_VAE Attack           3.68      0.131       734          2.36      0.043       895
L2 Optimization Latent Attack          2.96      0.105       236          2.80      0.051       242

Figure 7: Summary of the different attacks on the CelebA dataset: reconstructions of original images (top), reconstructions of adversarial examples generated using the latent attack (middle) and the $\mathcal{L}_{VAE}$ attack (bottom). The target reconstruction is shown on the right. Full results are in the Appendix.

5.4 SUMMARY OF DIFFERENT ATTACK METHODS

Table 1 shows a comparison of the mean distances between original images and generated adversarial examples for the three different attack methods. The larger the distance between the original image and the adversarial perturbation, the more noticeable the perturbation will tend to be, and the more likely a human observer will no longer recognize the original input, so effective attacks keep these distances small while still achieving their goal.
The latent attack consistently gives the best results in our experiments, and the classifier attack performs the worst.

We also measure the time it takes to generate 1000 adversarial examples using each attack method. The $\mathcal{L}_{VAE}$ attack is by far the slowest of the three, due to the fact that it requires computing full reconstructions at each step of the optimizer when generating the adversarial examples. The other two attacks do not need to run the reconstruction step during optimization of the adversarial examples.

6 CONCLUSION

We explored generating adversarial examples against generative models such as VAEs and VAE-GANs. These models are also vulnerable to adversaries that convince them to turn inputs into surprisingly different outputs. We have also motivated why an attacker might want to attack generative models. Our work adds further support to the hypothesis that adversarial examples are a general phenomenon for current neural network architectures, given our successful application of adversarial attacks to popular generative models. With this work, we are helping to lay the foundations for understanding how to build more robust networks. Future work will explore defense and robustification in greater depth, as well as attacks on generative models trained using natural image datasets such as CIFAR-10 and ImageNet.

ACKNOWLEDGMENTS

This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
H1GtVkvEg
Review
5: Marginally below acceptance threshold
This paper considers different methods of producing adversarial examples for generative models such as the VAE and VAE-GAN. Specifically, three methods are considered: classification-based adversaries, which use a classifier on top of the hidden code; the VAE loss attack, which directly uses the VAE loss; and the "latent attack", which finds an adversarial perturbation of the input so as to match the latent representation of a target input. I think the problem that this paper considers is potentially useful and interesting to the community. To the best of my knowledge, this is the first paper that considers adversarial examples for generative models. As I pointed out in my pre-review comments, there is also the concurrent work "Adversarial Images for Variational Autoencoders", which essentially proposes the same "latent attack" idea as this paper, with both the L2 distance and the KL divergence. Novelty/originality: I didn't find the ideas of this paper very original. All three proposed attacks are well-known and standard methods that are here applied to a new problem, and the paper does not develop *novel* algorithms for attacking specifically *generative models*. However, I still find it interesting to see how these standard methods compare in this new problem domain. The clarity and presentation of the paper are very unsatisfying. The first version of the paper proposes the "classification-based adversaries" and reports only negative results. In the second set of revisions, the core idea of the paper changes and almost an entirely new paper with a new co-author is submitted, proposing the "latent attack" idea, which works much better than the "classification-based adversaries". However, the authors try to keep the material of the first version around, which results in a 13-page-long paper with different claims and an unrelated set of experiments. "In our attempts to be thorough, we have had a hard time keeping the length down" is not a valid excuse. In short, the paper investigates an interesting problem and applies and compares standard adversarial methods in this domain, but the novelty and presentation of the paper are limited.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Byx5BTilg
ICLR.cc/2017/conference
2017
Exploring the Application of Deep Learning for Supervised Learning Problems
["Jose Rozanec", "Gilad Katz", "Eui Chul Richard Shin", "Dawn Song"]
One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and by modeling the changes in a network's weights, biases and activation layers over the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.
["Deep learning", "Supervised Learning"]
ABSTRACT

One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and by modeling the changes in a network's weights, biases and activation layers over the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.

1 INTRODUCTION

Recent advances in deep neural networks (DNNs) have led to breakthroughs in fields such as image classification (He et al., 2015; Krizhevsky et al., 2012) and speech recognition (Yu et al., 2010; Dahl et al., 2012). One reason for the effectiveness of DNNs is their ability to integrate low, mid and high-level features in a natural way (Zeiler & Fergus, 2014). While recent work such as (Simonyan & Zisserman, 2014) suggests that in many cases the depth of the architecture is crucial, the emergence of more complex architectures (He et al., 2015; Szegedy et al., 2015) demonstrates that depth alone often does not suffice.

While DNNs have been highly effective in several domains, their application in additional fields is yet to become widespread. We argue that this is the case due to two challenges. The first is the difficulty of designing effective architectures for domains in which there is little or no previous knowledge on the application of deep learning. Moreover, since designing DNN architectures is not intuitive for most people, this task is likely to fall to experts whose time is in high demand. The second challenge, which is strongly coupled with the first, is the large amount of computing power and time required to evaluate multiple DNNs. These traits constrain the number of DNN architectures that can be evaluated, thus further limiting one's ability to explore new architectures or respond to changing circumstances.

In this study we explore the possibility of applying architectures that are effective for one domain to another. We do so by generating a large number of architectures and evaluating their performance on multiple tabular datasets in order to determine whether the architectures are transferable. We also explore the feasibility of architectures with parallel layers and compare their effectiveness to that of their "linear" counterparts. Our results show that while architectures do not perform well across multiple datasets, parallel architectures are surprisingly effective.

When attempting to apply DNNs to an unknown domain, one way of approaching the problem would be to randomly "sample" various architectures and analyze their performance distribution. The top-performing architectures found in the sampling can form the base for future exploration, while the variance in performance can assist in determining the number of architectures that need to be sampled. We explore a meta-learning approach that may improve the efficiency of this process by ranking the architectures based on their expected performance.
Our approach models the topology of the DNN as well as the changes in its weights, biases and activation function layers throughout the initial training steps, and uses this information to rank the architectures by their relative performance. Preliminary results are encouraging.

While we consider this study to be an important first step, we feel obliged to point out that the work was done in a limited setting. To enable the generation of multiple DNN architectures with diverse topologies, we applied uniform and fixed parameters such as layer sizes and learning rates. As a result, the architecture space we explore is limited. Validating our results on a more diverse set of architectures with multiple hyperparameter configurations will require additional experimentation. We plan to address these issues in future work.

Our contributions are as follows:

- We explore DNNs across multiple datasets, evaluate their effectiveness and analyze whether some perform best across datasets.
- We systematically evaluate a large number of architectures over multiple supervised-classification datasets and derive insights regarding the design and application of DNNs with parallel layers for general classification problems.
- We present a novel meta learning-based ranking method that utilizes both topological features and the weights, biases and activation function layers of the various components of the DNN architecture during the initial training phase. To the best of our knowledge, this is the first time these characteristics have been used in a meta-learning scheme. Preliminary results of this approach are promising.

2 RELATED WORK

We review two areas of research whose aim is to better understand and improve the performance of DNN architectures. The first area of research focuses on the exploration and analysis of DNN architectures. The second area of research is automatic parameter tuning.

2.1 EXPLORATION AND ANALYSIS OF DNN ARCHITECTURES

Despite their remarkable success in various domains, the inner workings of DNNs remain to some degree a "black box". Multiple studies have attempted to provide insight into this matter. In Jarrett et al. (2009), the authors analyze convolutional neural networks (CNNs) and derive insights regarding the architecture design and the contribution of its different components. Another work aimed at better understanding CNNs is presented in Shang et al. (2016). The authors analyze widely used CNN architectures and derive insights into their possible shortcomings. To address these shortcomings, they propose a new version of the popular ReLU activation scheme.

The exploration of DNN architectures has also taken place for recurrent neural networks (RNNs). In Zaremba (2015), the authors explore various modifications to LSTM architectures to improve their performance, and propose several enhancements to the architecture. Another study, Wu & King (2016), aims to determine the reasons for the effectiveness of LSTMs and identify the contribution of their different elements. Based on their conclusions, the authors proposed a simplified version of the LSTM.

2.2 AUTOMATIC DNN PARAMETER TUNING

The ability to automatically tune the hyperparameters of a DNN architecture is important not only because of its potential to improve performance, but also due to the considerable time it can save. In Maclaurin et al. (2015) the authors demonstrate how information extracted from the stochastic gradient descent can efficiently tune multiple parameters in the architecture.
An additional work that analyzes the gradient is presented in Duvenaud et al. (2016), where the information is used to determine when to terminate the training of the architecture in order to avoid overfitting. A different optimization approach is presented in Mendoza et al., where the authors define a large set of hyperparameters (batch size, learning rate, activation types, etc.) and apply Bayesian optimization to top-performing configurations. The approach is only applied to feed-forward networks and outperforms human experts by 10%, using the AUC measure.

Additional types of optimization have also been proposed in recent years. In Jin et al. (2016), the authors focus on setting the size of hidden layers in RNNs. They accomplish this by converting the optimization problem into a subset selection problem. An important aspect of this approach is that it takes time constraints into account, thus enabling solutions that are feasible given available resources. Another approach, in which one long short-term memory network (LSTM) is used to optimize another, was proposed by Andrychowicz et al. (2016). The two networks have shared parameters but separate hidden states, and the optimizer network modifies both its own weights and those of the optimized network simultaneously. Finally, an approach that automatically adjusts the learning rates of the neural net was presented in Schaul et al. (2013). The approach has been shown to be effective on both convex and non-convex learning tasks.

Recent work by Li et al. (2016) proposes an exploration/exploitation scheme for hyperparameter tuning. The authors apply a multi-armed bandits algorithm, with each arm representing a parameter configuration. A process of successive halving (Jamieson & Talwalkar, 2015), in which a certain percentage of the lowest-performing configurations is dropped every n steps, enables the framework to explore promising directions. We consider this approach complementary to our proposed meta-learning approach, as the former enables exploration of a large number of configurations while the latter can reduce the time required to assess their performance.

3 PROBLEM DEFINITION

As mentioned in Section 1, one of the challenges in applying deep learning to a new field is the need to design and test multiple DNN architectures. Only by iterative testing can practitioners discover the capabilities and limitations of deep learning in the domain. Even with ever-increasing computing power, the high computational cost of this process currently presents a significant barrier for most practitioners.

This limitation leads us to explore the following questions:

1. Would DNN architectures that perform well on one general supervised classification problem also be effective when applied to datasets in other domains?
2. What types of architectures are effective for general supervised learning problems? Should practitioners consider other types of architectures besides "deep" ones?
3. Can DNN architectures outperform "conventional" machine learning classifiers in general supervised problems?
4. Is it possible to identify top-performing networks in the early stages of training? If possible, such a technique could preserve valuable computing resources.

We attempt to begin addressing these questions in the subsequent sections of this study.
We iteratively evaluate a large number of DNN architectures on a set of supervised classification problems. These datasets differ from those of image and speech classification in that they consist of tabular data with both numeric and discrete features. These differences make it unclear what types of architectures are likely to perform well on these domains. The datasets we analyze were selected because of their diversity in terms of size and feature number and composition. These traits also enable us to better understand the difficulties in applying DNN architectures across multiple domains.

In order to provide meaningful results, the set of architectures we evaluate is also diverse. We therefore automatically generate a diverse set of architectures with various topological traits. Because little information is available on the application of deep learning to general supervised classification problems, we choose to explore not only architectures that are linear but also architectures with parallel layers. While the generated set is diverse, additional work is required in order to model additional types of architectures. We elaborate on these points further in the subsequent section.

4 GENERATING MULTIPLE DNN ARCHITECTURES

In order to effectively explore the architecture space, we require a large and diverse set. We create this set by automatically generating a large number of architectures and training each of them on all training set datasets. Our generation algorithm, presented in Algorithm 1, generates both "deep" and "wide" architectures with parallel layers (see Figure 1(b)). Next we describe the generation process.

Figure 1: An example of the architectures that can be derived from an existing one. Panel (a) shows a component inserted between two existing components; panel (b) shows a component inserted in parallel.

We consider DNN architectures to consist of components. We define a component as any part of an architecture, be it a layer, a normalization or an activation function. In this study we consider the following components: fully-connected layers, softmax, batch normalization, dropout and the ReLU, sigmoid and tanh activation functions.

We begin the generation process with a "basic" architecture consisting of only two components: a fully-connected input layer and an output softmax layer. We then expand the set of possible architectures by iteratively applying the following steps (a minimal code sketch of step 1 appears at the end of this section):

1. For each pair of components in the architecture, identify all components that could be inserted between them (Figure 1(a)).
2. For each pair of components in the architecture, identify all components that could be inserted in parallel to one of them (Figure 1(b)).
3. For each of the components identified in the previous steps, generate a new copy of the architecture and perform the corresponding insertion.

Our proposed architecture generation approach enables us to generate the topological representation of every possible neural network that consists of the predefined components. However, we do not generate multiple hyperparameter configurations for each topology, and we use fixed parameters for each component. We plan to address this limitation in future work, possibly by using an approach similar to the one presented in Li et al. (2016). It is also important to point out that we currently do not support weight-sharing and therefore do not consider CNN and RNN architectures; given the characteristics of the analyzed data, we do not consider these architecture types likely to produce meaningful results.
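As a minimal sketch of the insert-between expansion (step 1 above), assuming a flat sequential list of components; the actual generator operates on a graph representation, additionally proposes parallel insertions, and filters invalid candidates:

COMPONENTS = ["fc", "batchnorm", "dropout", "relu", "sigmoid", "tanh"]

def insert_between(arch):
    """For every adjacent pair of components, yield one child architecture
    per candidate component inserted between the pair."""
    children = []
    for i in range(len(arch) - 1):
        for comp in COMPONENTS:
            children.append(arch[:i + 1] + [comp] + arch[i + 1:])
    return children

base = ["input_fc", "softmax"]        # the "basic" two-component architecture
frontier = insert_between(base)
print(len(frontier))                  # 6 candidates for the single pair
print(frontier[0])                    # ['input_fc', 'fc', 'softmax']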
Another important aspect of our architecture generation approach is that we generate architectures with connections between layers of various depths. An example of this is shown in Figure 1(b), where we connect layers of depths 1 and 2. This setting enables us to systematically explore more complex designs than those commonly used. We analyze these architectures further in Section 6.

As the number of possible architectures grows exponentially, we limit the total number of architectures that we generate by constraining the maximal number of components in an architecture and the number of parallel layers an architecture may contain. The specific settings used in our experiments are presented in Section 6.1. These settings were chosen in order to ensure a diverse set of both deep and wide architectures given the time and computing-power constraints, and we plan to change them in future work to further diversify the set of generated architectures. To select the architectures from which additional ones will be generated, we apply a priority queue. We first sort the architectures by the number of their activation layers (in descending order), with a secondary sorting based on the total number of components (in ascending order). This setting prioritizes the creation of deeper architectures with multiple activation layers. For each architecture in the final set, we generate the meta-features described in Section 5. The algorithm for the architecture generation is presented in Algorithm 1.

Algorithm 1 Automatic architecture generation
1:  procedure ArchitectureGeneration(arcQueue, initArc)
2:    architecturesSet <- initArc
3:    architecturesQueue <- initArc
4:    while architecturesQueue is not empty do
5:      newArchitectures <- {}
6:      architecture <- arcQueue.pop()
7:      for each pair P(c_i, c_j), i != j, in {c_1, c_2, ..., c_n} do
8:        candidateComponents <- proposeInsertBetweenCandidates(P(c_i, c_j))
9:        for each candidate in candidateComponents do
10:         newArchitecture <- insertBetween(architecture, P(c_i, c_j), candidate)
11:         newArchitectures <- newArchitectures U {newArchitecture}
12:       candidateComponents <- proposeInsertAsideCandidates(P(c_i, c_j))
13:       for each candidate in candidateComponents do
14:         newArchitecture <- insertAside(architecture, P(c_i, c_j), candidate)
15:         newArchitectures <- newArchitectures U {newArchitecture}
16:     newArchitectures <- filter(newArchitectures)
17:     arcQueue <- arcQueue U newArchitectures
18:     architecturesSet <- architecturesSet U newArchitectures
19:   return architecturesSet

5 META-LEARNING FOR ARCHITECTURE RANKING

Our goal is to determine whether analyzing the topology of a DNN architecture, as well as the transformations it undergoes in its early training iterations, can be used to predict its performance. To this end we develop a novel machine learning-based approach that generates a set of features for each analyzed architecture. Once the features are generated, we use a ranking classifier to assign a score to each architecture. The classifier is trained on a large corpus of datasets (additional information is provided in Section 6.1).

We apply meta-learning (Vilalta & Drissi, 2002) to predict the performance of the DNN architectures. Meta-learning is a branch of machine learning in which an algorithm "learns how to learn" by extracting information on the learning process of another algorithm. The features extracted in this process are called meta-features.
We generate three types of meta-features: dataset-based, topology-based and training-based. We hypothesize that these groups represent the elements that affect the performance of a DNN architecture: the data on which it is trained, the structure of the network, and the changes in its weights, biases and activation functions throughout the training process. We provide an overview of the meta-feature groups below and detailed information in Appendix A.

Dataset-based meta-features. As explained in Section 3, the datasets we use in the evaluation vary significantly in size and feature composition. These meta-features attempt to represent the multiple characteristics that may affect the performance of deep learning algorithms. We generate three types of meta-features:

1. General information: general statistics on the analyzed dataset: number of instances and classes, number and type of features, and statistics on the correlations among various features.
2. Entropy-based measures: we partition the dataset's features based on their type (discrete, numeric, etc.) and calculate statistics on the Information Gain (IG) of the features in each group (a sketch appears at the end of this section).
3. Feature diversity: we partition the dataset into type-based groups and use the chi-squared and paired-t tests to calculate the similarity of each pair in each group. We then generate meta-features using the tests' statistic values.

Topology-based meta-features. Our generated architectures vary significantly in size, depth and width. Since these traits are likely to affect their performance, we use the meta-features of this group to quantify and model them. The meta-features can be partitioned into two groups:

1. Architecture composition: general statistics on the number and types of layers and functions that make up the architecture, statistics on layer composition as a function of depth, etc.
2. Connectivity-based measures: for each layer in the architecture, we calculate various measures that are frequently used for graph analysis. These measures include statistics on the number and ratio of incoming and outgoing edges (overall, per depth and per type) and node-centrality evaluation measures.

Training-based meta-features. The goal of these meta-features is to model the transformations undergone by the DNN during the course of its training. These meta-features consist of statistics on the weights, biases and activation function layers of the various components in the architecture. They can be partitioned into two groups:

1. Static evaluation: general statistics on the distribution of the various values across different depths and layer types. These features provide "snapshot" information on the training status of the architecture at multiple training steps.
2. Time series-based evaluation: we compare the values obtained in the various training iterations to those obtained earlier, calculating ratios and modeling the changes in the value distributions over time.

A full description of all meta-features is provided in Appendix A.
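As an illustration of one family of the entropy-based measures, here is a minimal sketch that computes information-gain statistics over integer-coded discrete features. The feature coding and the exact set of aggregates are assumptions made for the example, not the paper's specification:

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def info_gain_meta_features(X_discrete, y):
    """Statistics over the information gain of each discrete feature with
    respect to the class, IG(f) = H(y) - H(y | f)."""
    h_y = entropy(y)
    gains = []
    for j in range(X_discrete.shape[1]):
        col = X_discrete[:, j]
        h_cond = sum((col == v).mean() * entropy(y[col == v])
                     for v in np.unique(col))
        gains.append(h_y - h_cond)
    gains = np.array(gains)
    return {"ig_mean": gains.mean(), "ig_max": gains.max(),
            "ig_min": gains.min(), "ig_std": gains.std()}

rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(200, 5))          # 5 integer-coded features
y = (X[:, 0] + rng.integers(0, 2, 200)) % 2    # class depends on feature 0
print(info_gain_meta_features(X, y))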
6 EXPERIMENTS AND ANALYSIS

6.1 EXPERIMENTAL SETUP

We conduct our experiments on 13 supervised classification datasets in tabular form. We selected these datasets since they represent common supervised-learning problems that are not often addressed by deep learning. In addition, their feature composition consists of both numeric and discrete features, a trait that makes them different from image and speech classification datasets. The datasets vary significantly in size, number and type of features (some contain only numerical features while others also contain discrete features) and class imbalance, traits we hypothesize will make learning across domains more challenging. All datasets are available in the OpenML repository and their properties are presented in Appendix B.

We use the following settings:

- For each dataset, we train the same set of 11,170 architectures, generated as described in Section 4. The maximal width (number of parallel layers) allowed for an architecture was set to 4, and we terminated the generation process upon reaching the predefined number of architectures. The deepest architectures generated by this approach have 8 activation layers and 14 components overall.
- For architecture training, all datasets were randomly partitioned into training, validation and test sets. 80% of the data points were used for training and the remaining two sets were assigned 10% each. The same split was used for all the architectures explored for each dataset. Original class ratios were maintained in all sets.
- All generated architectures were trained until convergence, with the time of termination determined by performance on the validation set.
- The training-based meta-features were only extracted for the following steps: 20, 40, 60, 80 and 100.
- We used a leave-one-out (LOO) cross-validation approach to train the ranking classifier: for each evaluated dataset $d_i$, we train the ranking classifier using the meta-features from all $d_j \in D$ where $i \neq j$. This setting enables us to test whether a meta-model trained on one dataset can be effectively applied to another.
- We randomly split the generated architectures into two groups. The first group, consisting of 70% of the architectures, is used for training. We use the remaining 30% to evaluate the performance of our approach on each dataset.

6.2 ANALYSIS

We begin by analyzing the accuracy distribution of the generated architectures across the datasets. We found that the distribution of accuracies varies significantly across the different datasets, with some datasets having accuracies in the range [45%-90%] while others are in the range [89%-95%]. This difference has a significant impact on one's ability to apply architectures that are effective in one domain to another, as we confirm in the next experiment. Examples of accuracy distributions are presented in Figures 2 and 3, and plots for all datasets are presented in Appendix D.

Figure 2: Accuracies plot for the dataset Ailerons. Figure 3: Accuracies plot for the dataset Contraceptive.

Analyzing the performance differences of "parent–child" architectures. In order to determine whether our architecture generation method is effective, we analyzed the differences in accuracy between every architecture and its descendants. Our reason for performing this analysis is as follows: if making incremental additions to an existing architecture does not significantly change its performance, then we are simply generating a large number of architectures which are nearly identical in performance.

The results of our analysis are presented in Table 1. For every "parent–child" pair we calculate the difference in accuracy on the test set. We then calculate the maximal and average changes in accuracy for each dataset.
It is clear from the results that the changes in accuracy are significant, especially given the fact that changes accumulate over time (deeper architectures are the result of multiple modifications).

Next we analyze the "parent–child" architectures with the maximal differences in order to determine whether the addition of a particular component is most likely to induce large changes in accuracy. Our results, presented in Table 2, show that no single component type can be consistently attributed with inducing large changes.

Applying architectures across datasets. We attempt to determine whether it is possible to find architectures that perform well across multiple datasets. For each of the generated architectures, we calculate its performance-based ranking (i.e., its position in a list ordered by the accuracy measure) on each of the datasets. Then, for each dataset we test the performance of the architecture with the best average ranking on the remaining datasets. We compare the performance of this architecture to that of the best evaluated architecture and to that of the best architecture found by our meta-learning model (described in the following section). The results, presented in Table 3, show significant differences in performance and lead us to conclude that in most cases DNN architectures do not perform well across multiple datasets.

Table 1: Analyzing the differences in accuracy for the different architecture parent–child pairs for each dataset.

Dataset           Max difference   Average difference
Contraceptive     5%               1.8%
Seismic bumps     4.9%             1.1%
Page Blocks       7.4%             1.4%
Wind              35%              3.2%
Puma 32           19.2%            1.8%
CPU act           40%              3.3%
Delta elevators   39.5%            2.7%
Mammography       3%               1.1%
Ailerons          17.4%            5.7%
Bank marketing    3.5%             0.8%
German Credit     5%               1%
Space             11.5%            2.5%
Cardiography      11.5%            1%

Table 2: The component types whose insertion produced the maximal parent–child differences in accuracy, and the number of times each appears.

Component type    Number of appearances
Dropout           2
Sigmoid           3
TanH              2
Fully connected   2
ReLU              1
Batchnorm         3

Comparing the performance of DNN architectures to those of "conventional" classifiers. As a point of reference to "classical" machine learning approaches for classifying tabular data, Table 3 also presents the performance of the Random Forest algorithm (using the Weka (Hall et al., 2009) implementation with the default parameters). It is clear that neither Random Forest nor the DNN architectures consistently outperforms the other. We intend to explore the factors that cause these differences in performance in future work.

Table 3: Comparison of the accuracy of the best average-ranking architecture to the top-ranking architecture found by our approach for each dataset.

Dataset           Best architecture   Top ranked (best found by model)   Architecture with best average ranking   Random Forest
Contraceptive     84.5%               84%                                79.7%                                     76.4%
Seismic bumps     95%                 94.1%                              92.1%                                     93.4%
Page Blocks       97%                 95.2%                              89.6%                                     97.9%
Wind              88%                 84.3%                              54%                                       86.5%
Puma 32           70%                 67%                                50.7%                                     88.1%
CPU act           91%                 87.7%                              70%                                       93.7%
Delta elevators   90%                 88.7%                              79.2%                                     87.7%
Mammography       99%                 98.9%                              97%                                       98.8%
Ailerons          89%                 86.2%                              59%                                       88.6%
Bank marketing    96%                 95%                                94%                                       90.5%
German Credit     77.1%               73.6%                              68.2%                                     76.9%
Space             69.6%               66.8%                              56.5%                                     84%
Cardiography      94.5%               93.7%                              86.4%                                     95.5%

Analyzing the performance of architectures with parallel layers. Next we explore whether architectures with parallel layers outperform similar non-parallel architectures. We analyze the 100 top-performing architectures of each dataset and calculate the percentage of architectures with parallel layers.
Analyzing the performance of architectures with parallel layers. Next we explore whether architectures with parallel layers outperform similar non-parallel architectures. We analyze the 100 top-performing architectures of each dataset and calculate the percentage of architectures with parallel layers. The results, presented in Appendix C, show that architectures of this type make up, on average, 62% of the top-performing architectures.

To determine whether the benefit of applying parallel layers is significant, we randomly choose one of our datasets (Ailerons) and identify the 100 top-performing architectures with parallel layers. From this set we randomly sample 10 architectures and compare the performance of each of them to those of all of their possible serial counterparts, created by iteratively removing all but one of the different parallel layers. Our results, presented in Table 4, show that architectures with parallel layers significantly outperform all of their serial counterparts.

Considering the same sample of parallel architectures, we analyze whether architecture performance can be improved by adding batch normalization before, after, or both before and after each activation function. As shown by the results in Table 4, we did not find evidence that the addition of batch normalization improves the performance of architectures with parallel layers. We find this fact surprising and intend to explore it further in future work. An example of one of the parallel architectures is presented in Figure 4 in Appendix C.

Finally, we also analyze the component composition of the 100 top-performing architectures for each dataset. The most interesting conclusion of this analysis is that relatively shallow architectures (4 fully-connected layers) seem to yield the best performance on average for all datasets. The full analysis of the architecture components is presented in Table 12 in Appendix C.

Table 4: Comparison of the performance of parallel architectures to their serial counterparts.

                     Parallel        Serial     Parallel with       Parallel with      Parallel with
                     architectures   versions   batchnorm (before)  batchnorm (after)  batchnorm (before & after)
Average              87.6%           71.8%      70.4%               77.4%              76.5%
Standard deviation   0.39%           7.8%       9.9%                4.2%               3.6%

6.3 EVALUATING THE META-LEARNING APPROACH

We analyze the performance of our meta-learning model as a classifier that ranks architectures based on their performance. For these experiments, we use the following settings (a minimal sketch of the full evaluation loop is given after this list):

- We define the top-performing 5% of the architectures of each dataset as "good" and label the remaining as "bad". We use this setting due to the large variance in the performance of the DNN architectures on the different datasets (see Appendix D for full details). We also intend to experiment with other labeling methods in future work.
- We use the precision@X measure as the evaluation metric. We calculate it by ranking all architectures according to the confidence of the meta-classifier (i.e. the classifier trained on the meta-features) in them being "good". Then, for the X top-ranking architectures, we calculate the actual percentage of "good" architectures among them.
- We conduct separate evaluations on the training-based meta-features and on the dataset-based and topological meta-features. Since the training-based features are more computationally expensive to compute, we find it interesting to compare their performance to that of the other types of meta-features. In our experiments we denote the full set as ML_full, the training-based meta-features as ML_train and the topological and dataset-based meta-features as ML_data+top.
- We use the Random Forest algorithm for the training of the meta-model.

The results of our evaluation are presented in Table 5.
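The following sketch shows, under stated assumptions, how the leave-one-out evaluation and the precision@X metric fit together: a Random Forest meta-classifier is trained on the meta-features of all datasets but one and then scores the held-out dataset's architectures. The meta-features and labels are random stand-ins; only the overall procedure mirrors the settings above.

# Sketch of the LOO meta-classifier evaluation with precision@X.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 13 datasets, each with meta-features for 200 architectures; label 1
# marks the (roughly) top-5% "good" architectures. All values are fake.
datasets = {d: (rng.normal(size=(200, 30)),
                (rng.uniform(size=200) < 0.05).astype(int))
            for d in range(13)}

def precision_at_x(scores, labels, x):
    top = np.argsort(-scores)[:x]   # the X architectures the model trusts most
    return labels[top].mean()       # fraction of them that are actually "good"

for held_out in datasets:
    X = np.vstack([datasets[d][0] for d in datasets if d != held_out])
    y = np.concatenate([datasets[d][1] for d in datasets if d != held_out])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    X_test, y_test = datasets[held_out]
    scores = clf.predict_proba(X_test)[:, 1]  # confidence of being "good"
    print(f"dataset {held_out}: precision@10 = {precision_at_x(scores, y_test, 10):.2f}")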
The results show that we are able to identify multiple "good" architectures in the top-ranking spots, at a much higher rate than their 5% share of the population. It is also clear that the joint set of all meta-features outperforms both of the examined subsets.

Next we conduct random sampling over architectures, and compare the performance of the sampled architectures to those obtained by ranking all architectures using the proposed meta-classifier. Our goal is to determine the probability that N randomly-sampled architectures will contain at least one architecture that outperforms all of the top M items ranked by the meta-classifier. We conduct the experiment as follows: for each dataset, we randomly sample a fixed number of architectures and identify the one with the highest performance among those sampled. We then check whether this architecture outperforms all those in the ranked list provided by the meta-learning model. We repeat this process 50,000 times for each dataset and calculate the probability of this scenario; a minimal Monte Carlo sketch of this estimate is given below, after the feature analysis. The results, presented in Table 6, show that our model outperforms random sampling for all datasets, often by a large margin. However, further experimentation is required to fully determine the effectiveness of the meta-learning approach.

Finally, we analyze the results in order to determine the effectiveness of the different meta-features used by our model. The analysis was carried out by running LASSO logistic regression and analyzing the weights assigned to the various meta-features. Based on this analysis we reach the following conclusions:

- The dataset-based meta-features had the smallest contribution to performance. While this is somewhat surprising given the fact that DNNs perform very differently on datasets with different characteristics, we conclude that the model focuses on the way in which the architecture is trained on the data (i.e. weights and activations).
- The topological meta-features that had the largest contribution were those modeling the depth of the network, the number of parallel layers, and those counting the numbers of the various components.
- The ranking model uses a large number of training-based meta-features, from all types described in Appendix A. However, among the training-based meta-features the model includes only weight- and activation-based ones; the bias-based meta-features are almost never used.
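As a concrete illustration of the sampling comparison above, the following Monte Carlo sketch estimates the probability that a random sample of N architectures contains one that beats everything in a fixed top-10 ranked list. The accuracies and the "ranked" list (a deliberately imperfect ranking) are hypothetical stand-ins.

# Sketch of the random-sampling probability estimate (Table 6).
import numpy as np

rng = np.random.default_rng(0)
acc = rng.uniform(0.5, 0.95, size=3000)                      # accuracy of every architecture
noisy_scores = acc + rng.normal(scale=0.05, size=acc.shape)  # an imperfect ranker
ranked_top = np.argsort(-noisy_scores)[:10]                  # the model's top-10 list
threshold = acc[ranked_top].max()

def p_sample_beats_ranked(sample_size, trials=50_000):
    wins = 0
    for _ in range(trials):
        sample = rng.choice(acc.size, size=sample_size, replace=False)
        if acc[sample].max() > threshold:
            wins += 1
    return wins / trials

for n in (10, 20):
    print(f"sample size {n}: P = {p_sample_beats_ranked(n):.3f}")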
Table 5: The evaluation results of the different approaches using the precision@X metric. "full", "train" and "d+t" denote ML_full (all meta-features), ML_train (training-based meta-features only) and ML_data+top (dataset-based and topological meta-features), respectively. Best results are in bold.

                  precision@5         precision@10        precision@20        precision@50
Dataset           full  train  d+t    full  train  d+t    full  train  d+t    full  train  d+t
Contraceptive     20%   20%    0%     20%   10%    20%    20%   5%     15%    20%   10%    8%
Seismic Bumps     20%   40%    20%    20%   20%    10%    25%   20%    15%    12%   16%    12%
Page Blocks       40%   20%    0%     30%   20%    0%     20%   15%    0%     16%   14%    14%
Wind              40%   0%     40%    20%   20%    30%    10%   15%    25%    12%   16%    20%
Puma32            20%   20%    0%     10%   20%    20%    15%   20%    10%    16%   10%    10%
CPU Act           40%   20%    20%    30%   20%    20%    30%   15%    10%    22%   12%    16%
Delta Elevators   20%   20%    20%    20%   20%    10%    15%   25%    20%    20%   20%    12%
Mammography       20%   0%     0%     20%   20%    0%     20%   15%    5%     20%   10%    12%
Ailerons          40%   40%    40%    30%   30%    20%    30%   20%    20%    28%   22%    26%
Bank Marketing    20%   0%     20%    30%   10%    20%    20%   10%    10%    10%   14%    10%
German Credit     40%   20%    20%    40%   10%    10%    20%   10%    10%    14%   10%    10%
Space             20%   0%     0%     10%   10%    0%     15%   10%    10%    18%   14%    10%
Cardiography      20%   0%     20%    20%   10%    10%    20%   15%    20%    18%   14%    16%

7 CONCLUSIONS AND FUTURE WORK

In this study we have explored several aspects of applying DNNs to supervised classification problems. Our results demonstrate the difficulty of transferring DNN architectures that are effective in one domain to another. We also systematically compare the performance of architectures with parallel layers to that of similar linear architectures and demonstrate that the former outperform the latter in many cases. We present a novel approach for predicting the performance of a DNN architecture by analyzing its topology and the changes in its weights, biases and activation function values during the early phases of training. Our aim is for this work to lay the foundation for a better understanding of the space of DNN architectures.

For future work we consider several directions. First, we plan to add components beyond the ones currently used in our automatic architecture generation method in order to enable further exploration. In addition, we will seek to enhance our approach by adding automatic parameter-tuning methods. This will enable us to efficiently explore multiple configurations and possibly identify higher-performing architectures. We are also considering the use of an exploration/exploitation scheme along the lines presented in Li et al. (2016) to enable us to efficiently explore larger architecture spaces.

Table 6: The probability of finding an architecture that outperforms all those in the ranked list when randomly sampling a set of architectures. The size of the list ranked by our algorithm is always 10 (i.e. for sample size 20 we test a set two times the size of the ranked list).

Dataset           Sample size 10   Sample size 20
Contraceptive     1.7%             3.2%
Seismic bumps     11.5%            22%
Page Blocks       14.8%            27.7%
Wind              24.3%            41.5%
Puma 32           20.7%            36.5%
CPU act           3.4%             6.7%
Delta elevators   33.3%            55.5%
Mammography       7.5%             14.3%
Ailerons          13.9%            25.5%
Bank marketing    5.6%             10.4%
German Credit     11.9%            22.9%
Space             20.2%            36.3%
Cardiography      5.6%             11.2%

Another approach we plan to explore is to make the search over network architectures a fully-differentiable problem, by encoding the problem using only mechanisms that enable such a search. As an example, imagine that we want to decide the best number of internal hidden layers to use in a multi-layer fully-connected neural net. For this, we could create multiple parallel stacks of layers with the same input at the bottom (e.g. the features for each data point) and the same kind of output at the end (e.g. probabilities over the possible classes), and then use a softmax to take a weighted sum of the outputs from each of the parallel stacks.
By penalizing the entropy of the softmax weights, and increasing the penalty over time, the network should learn to produce the output using only one of the parallel stacks, which we can then use at inference time. We can also train multiple models simultaneously using this method, and introduce additional penalties to ensure that the multiple models explore different architectures during training, enabling a more diverse search.
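The entropy-penalty idea in the preceding paragraph can be sketched concretely. The following PyTorch snippet (our choice of framework, and entirely our construction, not the authors') mixes three fully-connected stacks of different depths through a softmax over learned logits and penalizes the entropy of the mixing weights with a growing coefficient, so the mixture collapses toward a single stack; all sizes and schedules are illustrative assumptions.

# Sketch: differentiable selection among parallel stacks via an entropy penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_stack(d_in, d_out, depth, width=64):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)

class MixedStacks(nn.Module):
    def __init__(self, d_in, d_out, depths=(1, 2, 3)):
        super().__init__()
        self.stacks = nn.ModuleList([make_stack(d_in, d_out, k) for k in depths])
        self.logits = nn.Parameter(torch.zeros(len(depths)))  # mixing logits

    def forward(self, x):
        w = F.softmax(self.logits, dim=0)                 # mixture weights
        outs = torch.stack([s(x) for s in self.stacks])   # (K, batch, d_out)
        return torch.einsum("k,kbd->bd", w, outs), w

torch.manual_seed(0)
model = MixedStacks(d_in=10, d_out=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(256, 10), torch.randint(0, 3, (256,))

for step in range(300):
    out, w = model(x)
    entropy = -(w * torch.log(w + 1e-8)).sum()   # H(w); zero when one-hot
    lam = 0.01 * step                            # penalty grows over time
    loss = F.cross_entropy(out, y) + lam * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()

print(w.detach())  # typically close to one-hot: one surviving stack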
Bkm4q5eVe
Interesting first step but not ready for publishing
3: Clear rejection
This paper aims at attacking the problem of preselecting deep learning model structures for new domains. It reports a series of experiments on various small tasks and feed-forward DNNs. It claims that some ranking algorithm can be learned based on these results to guide the selection of model structures for new domains. Although the goal is interesting, I found their conclusions neither convincing nor useful in practice, for several reasons:

1. They only explored really simple networks (feed-forward DNNs). While this significantly limited the search space, it also limited the value of the experiments. In fact, the best model architecture is highly task (domain) dependent, and the type of model (DNN vs CNN vs LSTM) is often much more important than the size of the network itself.

2. Their experiments were conducted with some important hyperparameters (e.g., the learning rate schedule) fixed. However, it is well known that the learning rate is often the most important hyperparameter during training. Without adjusting these important hyperparameters, the conclusions on the best model architecture are not convincing.

3. Their experiments seem to indicate that differences in the training data are not important. However, this is unlikely to be true, as you would definitely want to use larger models (in total number of parameters) when your training set is orders of magnitude larger (i.e., log(datasize) can be an important feature). This is likely because they did not run experiments on large datasets.

In addition, I think the title of the paper does not accurately reflect what the paper is about and should be modified.

Also, this paper cited Sainath et al. 2015 as the work that led to the breakthrough in speech recognition. However, the breakthrough in ASR happened much earlier. The first paper with all three key components was published in 2010:

Yu, D., Deng, L. and Dahl, G., 2010, December. Roles of pre-training and fine-tuning in context-dependent DBN-HMMs for real-world speech recognition. In Proc. NIPS Workshop on Deep Learning and Unsupervised Feature Learning.

and the more detailed paper was published in 2012:

Dahl, G.E., Yu, D., Deng, L. and Acero, A., 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1), pp.30-42.

In conclusion, this paper presents some very preliminary results. Although interesting, it is not ready for publication.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Byx5BTilg
ICLR.cc/2017/conference
2017
Exploring the Application of Deep Learning for Supervised Learning Problems
["Jose Rozanec", "Gilad Katz", "Eui Chul Richard Shin", "Dawn Song"]
One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in a network's weights, biases and activation function layers during the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.
["Deep learning", "Supervised Learning"]
ABSTRACT

One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in a network's weights, biases and activation function layers during the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.

1 INTRODUCTION

Recent advances in deep neural networks (DNNs) have led to breakthroughs in fields such as image classification (He et al., 2015; Krizhevsky et al., 2012) and speech recognition (Yu et al., 2010; Dahl et al., 2012). One reason for the effectiveness of DNNs is their ability to integrate low-, mid- and high-level features in a natural way (Zeiler & Fergus, 2014). While recent work such as (Simonyan & Zisserman, 2014) suggests that in many cases the depth of the architecture is crucial, the emergence of more complex architectures (He et al., 2015; Szegedy et al., 2015) demonstrates that depth alone often does not suffice.

While DNNs have been highly effective in several domains, their application in additional fields is yet to become widespread. We argue that this is due to two challenges. The first is the difficulty of designing effective architectures for domains in which there is little or no previous knowledge of the application of deep learning. Moreover, since designing DNN architectures is not intuitive for most people, this task is likely to fall to experts whose time is in high demand. The second challenge, which is strongly coupled with the first, is the large amount of computing power and time required to evaluate multiple DNNs. These traits constrain the number of DNN architectures that can be evaluated, thus further limiting one's ability to explore new architectures or respond to changing circumstances.

In this study we explore the possibility of applying architectures that are effective in one domain to another. We do so by generating a large number of architectures and evaluating their performance on multiple tabular datasets in order to determine whether the architectures are transferable. We also explore the feasibility of architectures with parallel layers and compare their effectiveness to that of their "linear" counterparts. Our results show that while architectures do not perform well across multiple datasets, parallel architectures are surprisingly effective.

When attempting to apply DNNs to an unknown domain, one way of approaching the problem would be to randomly "sample" various architectures and analyze their performance distribution. The top-performing architectures found in the sampling can form the base for future exploration, while the variance in performance can assist in determining the number of architectures that need to be sampled. We explore a meta-learning approach that may improve the efficiency of this process by ranking the architectures based on their expected performance.
Our approach models the topology of the DNN as well as the changes in its weights, biases and activation function layers throughout the initial training steps, and uses this information to rank the architectures by their relative performance. Preliminary results are encouraging.

While we consider this study to be an important first step, we feel obliged to point out that the work was done in a limited setting. To enable the generation of multiple DNN architectures with diverse topologies, we applied uniform and fixed parameters such as layer sizes and learning rates. As a result, the architecture space we explore is limited. Validating our results on a more diverse set of architectures with multiple hyperparameter configurations will require additional experimentation. We plan to address these issues in future work.

Our contributions are as follows:

- We explore DNNs across multiple datasets, evaluate their effectiveness and analyze whether some perform best across datasets.
- We systematically evaluate a large number of architectures over multiple supervised-classification datasets and derive insights regarding the design and application of DNNs with parallel layers for general classification problems.
- We present a novel meta-learning-based ranking method that utilizes both topological features and the weights, biases and activation function layers of the various components of the DNN architecture during the initial training phase. To the best of our knowledge, this is the first time these characteristics have been used in a meta-learning scheme. Preliminary results of this approach are promising.

2 RELATED WORK

We review two areas of research whose aim is to better understand and improve the performance of DNN architectures. The first focuses on the exploration and analysis of DNN architectures. The second is automatic parameter tuning.

2.1 EXPLORATION AND ANALYSIS OF DNN ARCHITECTURES

Despite their remarkable success in various domains, the inner workings of DNNs remain to some degree a "black box". Multiple studies have attempted to provide insight into this matter. In Jarrett et al. (2009), the authors analyze convolutional neural networks (CNNs) and derive insights regarding architecture design and the contribution of its different components. Another work aimed at better understanding CNNs is presented in Shang et al. (2016). The authors analyze widely used CNN architectures and derive insights into their possible shortcomings. To address these shortcomings, they propose a new version of the popular ReLU activation scheme.

The exploration of DNN architectures has also taken place for recurrent neural networks (RNNs). In Zaremba (2015), the authors explore various modifications to LSTM architectures to improve their performance, and propose several enhancements to the architecture. Another study, Wu & King (2016), aims to determine the reasons for the effectiveness of LSTMs and to identify the contributions of their different elements. Based on their conclusions, the authors propose a simplified version of LSTM.

2.2 AUTOMATIC DNN PARAMETER TUNING

The ability to automatically tune the hyperparameters of a DNN architecture is important not only because of its potential to improve performance, but also due to the considerable time it can potentially save. In Maclaurin et al. (2015) the authors demonstrate how information extracted from stochastic gradient descent can be used to efficiently tune multiple parameters in the architecture.
An additional work that analyzes the gradient is presented in Duvenaud et al. (2016), where the information is used to determine when to terminate the training of the architecture in order to avoid over-fitting. A different optimization approach is presented in Mendoza et al., where the authors define a large set of hyperparameters (batch size, learning rate, activation types, etc.) and apply Bayesian optimization to top-performing configurations. The approach is applied only to feed-forward networks and outperforms human experts by 10%, using the AUC measure.

Additional types of optimization have also been proposed in recent years. In Jin et al. (2016), the authors focus on setting the size of hidden layers in RNNs. They accomplish this by converting the optimization problem into a subset-selection problem. An important aspect of this approach is that it takes time constraints into account, thus enabling solutions that are feasible given the available resources. Another approach, in which one long short-term memory (LSTM) network is used to optimize another, was proposed by Andrychowicz et al. (2016). The two networks have shared parameters but separate hidden states, and the optimizer network simultaneously modifies both its own weights and those of the optimized network. Finally, an approach that automatically adjusts the learning rates of the neural net was presented in Schaul et al. (2013). The approach has been shown to be effective on both convex and non-convex learning tasks.

Recent work by Li et al. (2016) proposes an exploration/exploitation scheme for hyperparameter tuning. The authors apply a multi-armed bandit algorithm, with each arm representing a parameter configuration. A process of successive halving (Jamieson & Talwalkar, 2015), in which a certain percentage of the lowest-performing configurations are dropped every n steps, enables the framework to explore promising directions. We consider this approach complementary to our proposed meta-learning approach, as the former enables the exploration of a large number of configurations while the latter can reduce the time required to assess their performance.

3 PROBLEM DEFINITION

As mentioned in Section 1, one of the challenges in applying deep learning to a new field is the need to design and test multiple DNN architectures. Only by iterative testing can practitioners discover the capabilities and limitations of deep learning in the domain. Even with ever-increasing computing power, the high computational cost of this process currently presents a significant barrier for most practitioners.

This limitation leads us to explore the following questions:

1. Would DNN architectures that perform well on one general supervised classification problem also be effective when applied to datasets in other domains?
2. What types of architectures are effective for general supervised learning problems? Should practitioners consider architectures other than "deep" ones?
3. Can DNN architectures outperform "conventional" machine learning classifiers on general supervised problems?
4. Is it possible to identify top-performing networks in the early stages of training? If possible, such a technique could preserve valuable computing resources.

We attempt to begin addressing these questions in the subsequent sections of this study.
We iteratively evaluate a large number of DNN architectures on a set of supervised classification problems. These datasets differ from those of image and speech classification in that they consist of tabular data with both numeric and discrete features. These differences make it unclear what types of architectures are likely to perform well in these domains. The datasets we analyze were selected because of their diversity in terms of size, feature number and composition. These traits also enable us to better understand the difficulties in applying DNN architectures across multiple domains.

In order to provide meaningful results, the set of architectures we evaluate is also diverse. We therefore automatically generate a diverse set of architectures with various topological traits. Because little information is available on the application of deep learning to general supervised classification problems, we choose to explore not only architectures that are linear but also architectures with parallel layers. While the generated set is diverse, additional work is required in order to model additional types of architectures. We elaborate on these points further in the subsequent section.

4 GENERATING MULTIPLE DNN ARCHITECTURES

In order to effectively explore the architecture space, we require a large and diverse set. We create this set by automatically generating a large number of architectures and training each of them on all training-set datasets. Our generation algorithm, presented in Algorithm 1, generates both "deep" and "wide" architectures with parallel layers (see Figure 1(b)). Next we describe the generation process.

Figure 1: An example of the architectures that can be derived from an existing one.

We consider DNN architectures to consist of components. We define a component as any part of an architecture, be it a layer, a normalization or an activation function. In this study we consider the following components: fully-connected layers, softmax, batch normalization, dropout and the ReLU, sigmoid and tanh activation functions.

We begin the generation process with a "basic" architecture consisting of only two components: a fully-connected input layer and an output softmax layer. We then expand the set of possible architectures by iteratively applying the following steps:

1. For each pair of components in the architecture, identify all components that could be inserted between them (Figure 1(a)).
2. For each pair of components in the architecture, identify all components that could be inserted in parallel to one of them (Figure 1(b)).
3. For each of the components identified in the previous steps, generate a new copy of the architecture and perform the corresponding insertion.

Our proposed architecture generation approach enables us to generate the topological representation of every possible neural network that consists of the predefined components. However, we do not generate multiple hyperparameter configurations for each topology, and we use fixed parameters for each component. We plan to address this limitation in future work, possibly by using an approach similar to the one presented in Li et al. (2016). It is also important to point out that we currently do not support weight-sharing and therefore do not consider CNN and RNN architectures.
Given the characteristics of the analyzed data, we do not consider these architecture types likely to produce meaningful results.

Another important aspect of our architecture generation approach is that we generate architectures with connections between layers of various depths. An example of this is shown in Figure 1(b), where we connect layers of depths 1 and 2. This setting enables us to systematically explore more complex designs than those commonly used. We analyze these architectures further in Section 6.

As the number of possible architectures grows exponentially, we limit the total number of architectures that we generate by constraining the maximal number of components in an architecture and the number of parallel layers an architecture may contain. The specific settings used in our experiments are presented in Section 6.1. These settings were chosen in order to ensure a diverse set of both deep and wide architectures given our time and computing-power constraints, and we plan to change them in future work to further diversify the set of generated architectures. To select the architectures from which additional ones will be generated, we apply a priority queue. We first sort the architectures by the number of their activation layers (in descending order), with a secondary sort based on the total number of components (in ascending order). This setting prioritizes the creation of deeper architectures with multiple activation layers. For each architecture in the final set, we generate the meta-features described in Section 5. The procedure for the architecture generation is presented in Algorithm 1; a simplified runnable sketch follows below.

Algorithm 1: Automatic architecture generation

procedure ArchitectureGeneration(arcQueue, initArc)
    architecturesSet <- initArc
    arcQueue <- initArc
    while arcQueue is not empty do
        newArchitectures <- {}
        architecture <- arcQueue.pop()
        for each pair P(c_i, c_j), i != j, of components in the architecture do
            candidates <- proposeInsertBetweenCandidates(P(c_i, c_j))
            for each candidate in candidates do
                newArchitectures <- newArchitectures ∪ insertBetween(architecture, P(c_i, c_j), candidate)
            candidates <- proposeInsertAsideCandidates(P(c_i, c_j))
            for each candidate in candidates do
                newArchitectures <- newArchitectures ∪ insertAside(architecture, P(c_i, c_j), candidate)
        newArchitectures <- filter(newArchitectures)
        arcQueue <- arcQueue ∪ newArchitectures
        architecturesSet <- architecturesSet ∪ newArchitectures
    return architecturesSet
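To make the procedure concrete, here is a much-simplified runnable sketch of Algorithm 1. It represents an architecture as a flat tuple of component names, so only the "insert between" expansion is shown (the parallel "insert aside" step needs a real graph structure), and it uses a plain FIFO queue instead of the paper's priority queue; all constants are illustrative assumptions.

# Simplified sketch of the architecture generation of Algorithm 1.
from collections import deque

COMPONENTS = ["fc", "relu", "sigmoid", "tanh", "dropout", "batchnorm"]
MAX_COMPONENTS = 6   # illustrative stand-in for the paper's size limits

def generate(limit=200):
    init = ("fc", "softmax")          # the "basic" two-component architecture
    seen = {init}                     # doubles as the filter() step
    queue = deque([init])
    while queue and len(seen) < limit:
        arch = queue.popleft()
        if len(arch) >= MAX_COMPONENTS:
            continue
        for pos in range(1, len(arch)):         # between each adjacent pair
            for comp in COMPONENTS:
                child = arch[:pos] + (comp,) + arch[pos:]
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
    return seen

archs = generate()
print(len(archs), sorted(archs, key=len)[:3])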
5 META-LEARNING FOR ARCHITECTURE RANKING

Our goal is to determine whether analyzing the topology of a DNN architecture, as well as the transformations it undergoes in its early training iterations, can be used to predict its performance. To this end we develop a novel machine learning-based approach that generates a set of features for each analyzed architecture. Once the features are generated, we use a ranking classifier to assign a score to each architecture. The classifier is trained on a large corpus of datasets (additional information is provided in Section 6.1).

We apply meta-learning (Vilalta & Drissi, 2002) to predict the performance of the DNN architectures. Meta-learning is a branch of machine learning in which an algorithm "learns how to learn" by extracting information on the learning process of another algorithm. The features extracted in this process are called meta-features. We generate three types of meta-features: dataset-based, topology-based and training-based. We hypothesize that these groups represent the elements that affect the performance of a DNN architecture - the data on which it is trained, the structure of the network, and the changes in its weights, biases and activation functions throughout the training process. We provide a full overview of the meta-feature groups below and detailed information in Appendix A.

Dataset-based meta-features. As explained in Section 3, the datasets we use in the evaluation vary significantly in size and feature composition. These meta-features attempt to represent the multiple characteristics that may affect the performance of deep learning algorithms. We generate three types of meta-features:

1. General information: general statistics on the analyzed dataset: the number of instances and classes, the number and types of features, and statistics on the correlations among the various features.
2. Entropy-based measures: we partition the dataset's features based on their type (discrete, numeric, etc.) and calculate statistics on the Information Gain (IG) of the features in each group.
3. Feature diversity: we partition the dataset into type-based groups and use the chi-squared and paired t-tests to calculate the similarity of each pair in each group. We then generate meta-features using the tests' statistic values.

Topology-based meta-features. Our generated architectures vary significantly in size, depth and width. Since these traits are likely to affect their performance, we use the meta-features of this group to quantify and model them. The meta-features can be partitioned into two groups:

1. Architecture composition: general statistics on the number and types of layers and functions that make up the architecture, statistics on layer composition as a function of depth, etc.
2. Connectivity-based measures: for each layer in the architecture, we calculate various measures that are frequently used for graph analysis. These measures include statistics on the number and ratio of incoming and outgoing edges (overall, per depth and per type) and node-centrality evaluation measures.

Training-based meta-features. The goal of these meta-features is to model the transformations undergone by the DNN during the course of its training. These meta-features consist of statistics on the weights, biases and activation function layers of the various components in the architecture, and can be partitioned into two groups (a minimal extraction sketch is given below):

1. Static evaluation: general statistics on the distribution of the various values across different depths and layer types. These features provide "snapshot" information on the training status of the architecture at multiple training steps.
2. Time series-based evaluation: we compare the values obtained in the various training iterations to those obtained earlier, calculating ratios and modeling the changes in the distributions of values over time.

A full description of all meta-features is provided in Appendix A.

6 EXPERIMENTS AND ANALYSIS

6.1 EXPERIMENTAL SETUP

We conduct our experiments on 13 supervised classification datasets in tabular form. We selected these datasets since they represent common supervised-learning problems that are not often addressed by deep learning. In addition, their feature composition consists of both numeric and discrete features, a trait that makes them different from image and speech classification datasets.
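Returning briefly to the training-based meta-features of Section 5, the following sketch shows one plausible way to turn weight snapshots from the early training steps into a flat feature vector: per-layer summary statistics per snapshot, plus step-to-step change norms as a crude time-series view. The snapshot format is an assumption of ours, not the paper's implementation.

# Sketch: training-based meta-features from early-training weight snapshots.
import numpy as np

def snapshot_features(weights_by_layer):
    """Static statistics for one step; input: {layer_name: weight array}."""
    feats = {}
    for name, w in weights_by_layer.items():
        feats[f"{name}/mean"] = float(np.mean(w))
        feats[f"{name}/std"] = float(np.std(w))
        feats[f"{name}/abs_max"] = float(np.max(np.abs(w)))
    return feats

def training_meta_features(snapshots):
    """snapshots: {step: {layer_name: weights}} at steps 20, 40, ..., 100."""
    steps = sorted(snapshots)
    feats = {}
    for step in steps:
        for key, value in snapshot_features(snapshots[step]).items():
            feats[f"step{step}/{key}"] = value
    # time-series view: how much each layer moved since the previous snapshot
    for prev, cur in zip(steps, steps[1:]):
        for name in snapshots[cur]:
            delta = np.linalg.norm(snapshots[cur][name] - snapshots[prev][name])
            feats[f"step{cur}/{name}/delta_norm"] = float(delta)
    return feats

rng = np.random.default_rng(0)
snaps = {s: {"fc1": rng.normal(size=(30, 64))} for s in (20, 40, 60, 80, 100)}
print(len(training_meta_features(snaps)))  # 5*3 static + 4 delta = 19 features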
rymsM8RXx
An interesting but somewhat underwhelming study
5: Marginally below acceptance threshold
This paper presents an intriguing study of how one can pose architecture search as a meta-learning problem. By collecting features from networks trained on various datasets and training a "ranking classifier" (the actual details of the classifier do not seem to be described in detail), one can potentially infer what a good architecture for a new problem could be by simply running the ranker on the extracted features for a new problem setup.

One notable comment from the paper is that the authors fix some important hyper-parameters for all the networks. I am of the opinion that optimizing the learning rate (and its decay schedule) is actually quite important. I hypothesize that a lot of the conclusions of this paper may change quite a bit if the authors did an actual search over the rates instead. I suspect that instead of training 11k nets, one can train 2k nets with 5 learning rates each and get a much better result that is actually compelling.

I am not convinced that the protocol for generating the various architectures is doing a good job of creating a diversity of architectures (simply because of the max depth of 8 layers and 14 components overall). I suspect that most of these generated architectures are actually almost identical performance-wise and that it's a waste to train so many of them on so many tasks. Unless the authors are already doing this, they should define a pruning mechanism that filters out nets that are too similar to already existing ones.

The batch normalization experiments in Table 2 seem odd and under-explained. It is also well known that the optimal learning rates when using batch norm vs. not using batch norm can differ by an order of magnitude, so given the fixed learning rate throughout all experiments, I take these results with some grain of salt.

I am not sure we got many insights into the kinds of architectures that ended up being at the top. Either visualizations, or trends (or both), would be great.

This work seems to conflate the study of parallel vs. serial architectures with the study of meta-learning, which are somewhat distinct issues. I take issue with the table that compares parallel vs. serial performance (Table 2) simply because the right way would be to filter the architectures by the same number of parameters / capacity.

Ultimately the conclusion seems to be that when applying deep nets in a new domain, it is difficult to come up with a good architecture in advance. In that sense, it is hard to see the paper as a constructive result, because its conclusion is that while the ranker may do a good job oftentimes, it is not that reliable. Thus I am not convinced that this particular result will be of practical use to folks who are intending to use deep nets for a new domain.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Byx5BTilg
ICLR.cc/2017/conference
2017
Exploring the Application of Deep Learning for Supervised Learning Problems
["Jose Rozanec", "Gilad Katz", "Eui Chul Richard Shin", "Dawn Song"]
One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and modeling the changes in a network's weights, biases and activation function layers during the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.
["Deep learning", "Supervised Learning"]
We iteratively evaluate a large number of DNN architectures on a set of supervised classification problems. These datasets differ from those of image and speech classification in that they consist of tabular data with both numeric and discrete features. These differences make it unclear what types of architectures are likely to perform well on these domains. The datasets we analyze were selected because of their diversity in terms of size, feature number and feature composition. These traits also enable us to better understand the difficulties in applying DNN architectures across multiple domains.

In order to provide meaningful results, the set of architectures we evaluate is also diverse. We therefore automatically generate a diverse set of architectures with various topological traits. Because little information is available on the application of deep learning to general supervised classification problems, we choose to explore not only architectures that are linear but also architectures with parallel layers. While the generated set is diverse, additional work is required in order to model additional types of architectures. We elaborate on these points further in the subsequent section.

4 GENERATING MULTIPLE DNN ARCHITECTURES

In order to effectively explore the architecture space, we require a large and diverse set. We create this set by automatically generating a large number of architectures and training each of them on all training datasets. Our generation algorithm, presented in Algorithm 1, generates both "deep" and "wide" architectures with parallel layers (see Figure 1(b)). Next we describe the generation process.

We consider DNN architectures to consist of components. We define a component as any part of an architecture, be it a layer, normalization or activation function. In this study we consider the following components: fully-connected layers, softmax, batch normalization, dropout and the ReLU, sigmoid and tanh activation functions.

Figure 1: An example of the architectures that can be derived from an existing one: (a) a component is inserted between two existing components; (b) a component is inserted in parallel to an existing one, with the parallel branches merged by a concatenation.

We begin the generation process with a "basic" architecture consisting only of two components: a fully-connected input layer and an output softmax layer. We then expand the set of possible architectures by iteratively applying the following steps (a minimal sketch of the expansion step is given below):

1. For each pair of components in the architecture, identify all components that could be inserted between them (Figure 1(a)).
2. For each pair of components in the architecture, identify all components that could be inserted in parallel to one of them (Figure 1(b)).
3. For each of the components identified in the previous steps, generate a new copy of the architecture and perform the corresponding insertion.

Our proposed architecture generation approach enables us to generate the topological representation of every possible neural network that consists of the predefined components. However, we do not generate multiple hyperparameter configurations for each topology, and use fixed parameters for each component. We plan to address this limitation in future work, possibly by using an approach similar to the one presented in Li et al. (2016). It is also important to point out that we currently do not support weight-sharing and therefore do not consider CNN and RNN architectures.
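To make the expansion step concrete, the following is a minimal Python sketch of one generation iteration. It assumes a simplified representation in which an architecture is a list of component names and a parallel branch is a nested ("parallel", [...]) block merged by a concatenation; the component list, function names and representation are illustrative assumptions, not the authors' exact implementation.

COMPONENTS = ["fc", "relu", "sigmoid", "tanh", "batchnorm", "dropout"]

def insert_between(arch, i, comp):
    # Place `comp` between the components at positions i and i+1.
    return arch[:i + 1] + [comp] + arch[i + 1:]

def insert_aside(arch, i, comp):
    # Wrap the component at position i+1 in a parallel block together with
    # `comp`; the branches are assumed to be merged by a concat downstream.
    return arch[:i + 1] + [("parallel", [arch[i + 1], comp])] + arch[i + 2:]

def expand(arch):
    # All child architectures reachable from `arch` by a single insertion.
    children = []
    for i in range(len(arch) - 1):
        for comp in COMPONENTS:
            children.append(insert_between(arch, i, comp))
            children.append(insert_aside(arch, i, comp))
    return children

base = ["input_fc", "softmax"]   # the "basic" two-component architecture
first_generation = expand(base)

Repeatedly expanding the members of a priority queue and filtering the results, as in Algorithm 1 below, yields the full architecture set.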
Given the characteristics of the analyzed data, we do not consider CNN and RNN architectures likely to produce meaningful results.

Another important aspect of our architecture generation approach is that we generate architectures with connections between layers of various depths. An example of this is shown in Figure 1(b), where we connect layers of depths 1 and 2. This setting enables us to systematically explore more complex designs than those commonly used. We analyze these architectures further in Section 6.

As the number of possible architectures grows exponentially, we limit the total number of architectures that we generate by constraining the maximal number of components in an architecture and the number of parallel layers an architecture may contain. The specific settings used in our experiments are presented in Section 6.1. These settings were chosen in order to ensure a diverse set of both deep and wide architectures given the time and computing-power constraints, and we plan to change them in future work to further diversify the set of generated architectures. To select the architectures from which additional ones will be generated, we apply a priority queue. We first sort the architectures by the number of their activation layers (in descending order), with secondary sorting based on the total number of components (in ascending order). This setting prioritizes the creation of deeper architectures with multiple activation layers. For each architecture in the final set, we generate the meta-features described in Section 5. The algorithm for the architecture generation is presented in Algorithm 1.

Algorithm 1 Automatic architecture generation
1:  procedure ArchitectureGeneration(arcQueue, initArc)
2:    architecturesSet <- initArc
3:    architecturesQueue <- initArc
4:    while architecturesQueue is not empty do
5:      newArchitectures <- {}
6:      architecture <- arcQueue.pop()
7:      for each pair P(ci, cj), i != j, of components in {c1, c2, ..., cn} do
8:        candidateComponents <- proposeInsertBetweenCandidates(P(ci, cj))
9:        for each candidate in candidateComponents do
10:         newArchitecture <- insertBetween(architecture, P(ci, cj), candidate)
11:         newArchitectures <- newArchitectures ∪ {newArchitecture}
12:       candidateComponents <- proposeInsertAsideCandidates(P(ci, cj))
13:       for each candidate in candidateComponents do
14:         newArchitecture <- insertAside(architecture, P(ci, cj), candidate)
15:         newArchitectures <- newArchitectures ∪ {newArchitecture}
16:     newArchitectures <- filter(newArchitectures)
17:     arcQueue <- arcQueue ∪ newArchitectures
18:     architecturesSet <- architecturesSet ∪ newArchitectures
19:   return architecturesSet

5 META-LEARNING FOR ARCHITECTURE RANKING

Our goal is to determine whether analyzing the topology of a DNN architecture, as well as the transformations it undergoes in its early training iterations, can be used to predict its performance. To this end we develop a novel machine-learning-based approach that generates a set of features for each analyzed architecture. Once the features are generated, we use a ranking classifier to assign a score to each architecture. The classifier is trained on a large corpus of datasets (additional information is provided in Section 6.1).

We apply meta-learning (Vilalta & Drissi, 2002) to predict the performance of the DNN architectures. Meta-learning is a branch of machine learning in which an algorithm "learns how to learn" by extracting information on the learning process of another algorithm. The features extracted in this process are called meta-features.
We generate three types of meta-features: dataset-based, topology-based and training-based. We hypothesize that these groups represent the elements that affect the performance of a DNN architecture: the data on which it is trained, the structure of the network, and the changes in its weights, biases and activation functions throughout the training process. We provide a full overview of the meta-feature groups below and detailed information in Appendix A.

Dataset-based meta-features. As explained in Section 3, the datasets we use in the evaluation vary significantly in size and feature composition. These meta-features attempt to represent the multiple characteristics that may affect the performance of deep learning algorithms. We generate three types of meta-features:

1. General information: general statistics on the analyzed dataset: number of instances and classes, number and type of features, and statistics on the correlations among various features.
2. Entropy-based measures: we partition the dataset's features based on their type (discrete, numeric, etc.) and calculate statistics on the Information Gain (IG) of the features in each group.
3. Feature diversity: we partition the dataset into type-based groups and use the chi-squared and paired t-test to calculate the similarity of each pair in each group. We then generate meta-features using the tests' statistic values.

Topology-based meta-features. Our generated architectures vary significantly in size, depth and width. Since these traits are likely to affect their performance, we use the meta-features of this group to quantify and model them. The meta-features can be partitioned into two groups:

1. Architecture composition: general statistics on the number and types of layers and functions that make up the architecture, statistics on layer composition as a function of depth, etc.
2. Connectivity-based measures: for each layer in the architecture, we calculate various measures that are frequently used for graph analysis. These measures include statistics on the number and ratio of incoming and outgoing edges (overall, per depth and per type) and node-centrality evaluation measures.

Training-based meta-features. The goal of these meta-features is to model the transformations undergone by the DNN during the course of its training. These meta-features consist of statistics on the weights, biases and activation function layers of the various components in the architecture. They can be partitioned into two groups:

1. Static evaluation: general statistics on the distribution of the various values across different depths and layer types. These features provide "snapshot" information on the training status of the architecture at multiple training steps.
2. Time series-based evaluation: we compare the values obtained in the various training iterations to those obtained earlier, calculating ratios and modeling the changes in the value distributions over time.

A full description of all meta-features is provided in Appendix A.

6 EXPERIMENTS AND ANALYSIS

6.1 EXPERIMENTAL SETUP

We conduct our experiments on 13 supervised classification datasets in tabular form. We selected these datasets since they represent common supervised-learning problems that are not often addressed by deep learning. In addition, their feature composition consists of both numeric and discrete features, a trait that makes them different from image and speech classification datasets.
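As a concrete illustration of the topology-based group described above, here is a minimal sketch of a few connectivity-based meta-features, assuming the architecture is available as a networkx directed graph whose nodes carry a 'type' attribute. The exact feature set used in the study is considerably richer; this sketch is only indicative.

import numpy as np
import networkx as nx

def topology_meta_features(G):
    # G: nx.DiGraph of components; each node has a 'type' attribute.
    in_deg = np.array([d for _, d in G.in_degree()])
    out_deg = np.array([d for _, d in G.out_degree()])
    centrality = np.array(list(nx.betweenness_centrality(G).values()))
    return {
        "n_components": G.number_of_nodes(),
        "depth": nx.dag_longest_path_length(G),   # longest input-output path
        "mean_in_degree": float(in_deg.mean()),
        "max_out_degree": int(out_deg.max()),
        "mean_centrality": float(centrality.mean()),
        "n_fc_layers": sum(1 for _, t in G.nodes(data="type") if t == "fc"),
    }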
The datasets vary significantly in size, number and type of features (some contain only numerical features while others also contain discrete features) and class imbalance, traits we hypothesize will make learning across domains more challenging. All datasets are available on the OpenML repository and their properties are presented in Appendix B.

We use the following settings:

- For each dataset, we train the same set of 11,170 architectures, generated as described in Section 4. The maximal width (number of parallel layers) allowed for an architecture was set to 4, and we terminated the generation process upon reaching the predefined number of architectures. The deepest architectures generated by this approach have 8 activation layers and 14 components overall.
- For architecture training, all datasets were randomly partitioned into training, validation and test sets. 80% of the data points were used for training and the remaining two sets were assigned 10% each. The same split was used for all the architectures explored for each dataset. Original class ratios were maintained in all sets.
- All generated architectures were trained until convergence, with the time of termination determined by performance on the validation set.
- The training-based meta-features were only extracted at the following steps: 20, 40, 60, 80 and 100.
- We used a leave-one-out (LOO) cross-validation approach to train the ranking classifier: for each evaluated dataset di, we train the ranking classifier using the meta-features from all dj ∈ D where i ≠ j. This setting enables us to test whether a meta-model trained on one dataset could be effectively applied to another.
- We randomly split the generated architectures into two groups. The first group, consisting of 70% of the architectures, is used for training. We use the remaining 30% to evaluate the performance of our approach on each dataset.

6.2 ANALYSIS

We begin by analyzing the accuracy distribution of the generated architectures across the datasets. We found that the distribution of accuracies varies significantly across the different datasets, with some datasets having accuracy ranges of [45%-90%] while others are in the range [89%-95%]. This difference has a significant impact on one's ability to apply architectures that are effective in one domain to another, as we confirm with the next experiment. An example of the accuracy distributions is presented in Figures 2 and 3, and plots for all datasets are presented in Appendix D.

Figure 2: Accuracies plot for the dataset Ailerons.
Figure 3: Accuracies plot for the dataset Contraceptive.

Analyzing the performance differences of "parent–child" architectures. In order to determine whether our architecture generation method is effective, we analyzed the differences in accuracy between every architecture and its descendants. Our reason for performing this analysis is as follows: if making incremental additions to an existing architecture does not significantly change its performance, then we are simply generating a large number of architectures which are nearly identical in performance.

The results of our analysis are presented in Table 1. For every "parent–child" pair we calculate the difference in accuracy on the test set. We then calculate the maximal and average changes in accuracy for each dataset.
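A minimal sketch of this parent–child computation, assuming (hypothetically) a dict accuracy mapping architecture ids to test accuracy and a dict parent_of mapping each generated architecture to the architecture it was derived from:

def parent_child_differences(accuracy, parent_of):
    # Absolute accuracy change introduced by each single insertion.
    diffs = [abs(accuracy[child] - accuracy[parent])
             for child, parent in parent_of.items()]
    return max(diffs), sum(diffs) / len(diffs)   # (max, average) difference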
It is clear from the results (Table 1) that the changes in accuracy are significant, especially given the fact that changes accumulate over time (deeper architectures are the result of multiple modifications).

Table 1: The differences in accuracy between parent–child architecture pairs for each dataset.

Dataset          Max difference   Average difference
Contraceptive    5%               1.8%
Seismic bumps    4.9%             1.1%
Page Blocks      7.4%             1.4%
Wind             35%              3.2%
Puma 32          19.2%            1.8%
CPU act          40%              3.3%
Delta elevators  39.5%            2.7%
Mammography      3%               1.1%
Ailerons         17.4%            5.7%
Bank marketing   3.5%             0.8%
German Credit    5%               1%
Space            11.5%            2.5%
Cardiography     11.5%            1%

Next we analyze the parent–child architecture pairs with the maximal differences in order to determine whether the addition of any particular component is most likely to induce large changes in accuracy. Our results, presented in Table 2, show that no single component type can be consistently attributed with inducing large changes.

Table 2: The component types whose insertion produced the maximal parent–child accuracy differences across the datasets.

Component type    Number of appearances
Dropout           2
Sigmoid           3
TanH              2
Fully connected   2
ReLU              1
Batchnorm         3

Applying architectures across datasets. We attempt to determine whether it is possible to find architectures that perform well across multiple datasets. For each of the generated architectures, we calculate its performance-based ranking (i.e. its position in a list ordered by the accuracy measure) on each of the datasets. Then, for each dataset, we test the performance of the architecture with the best average ranking on the remaining datasets. We compare the performance of this architecture to that of the best evaluated architecture and to that of the best architecture found by our meta-learning model (described in the following section). The results, presented in Table 3, show significant differences in performance and lead us to conclude that in most cases DNN architectures do not perform well across multiple datasets.

Comparing the performance of DNN architectures to those of "conventional" classifiers. As a point of reference to "classical" machine learning approaches for classifying tabular data, Table 3 also presents the performance of the Random Forest algorithm (using the Weka (Hall et al., 2009) implementation with the default parameters). It is clear that neither Random Forest nor the DNN architectures consistently outperform the other. We intend to explore the factors that cause these differences in performance in future work.

Table 3: Comparison of the accuracy of the best average-ranking architectures to the top-ranking architecture found by our approach for each dataset.

Dataset          Best architecture   Top ranked (best found by model)   Architecture with best average ranking   Random Forest
Contraceptive    84.5%               84%                                79.7%                                    76.4%
Seismic bumps    95%                 94.1%                              92.1%                                    93.4%
Page Blocks      97%                 95.2%                              89.6%                                    97.9%
Wind             88%                 84.3%                              54%                                      86.5%
Puma 32          70%                 67%                                50.7%                                    88.1%
CPU act          91%                 87.7%                              70%                                      93.7%
Delta elevators  90%                 88.7%                              79.2%                                    87.7%
Mammography      99%                 98.9%                              97%                                      98.8%
Ailerons         89%                 86.2%                              59%                                      88.6%
Bank marketing   96%                 95%                                94%                                      90.5%
German Credit    77.1%               73.6%                              68.2%                                    76.9%
Space            69.6%               66.8%                              56.5%                                    84%
Cardiography     94.5%               93.7%                              86.4%                                    95.5%

Analyzing the performance of architectures with parallel layers. Next we explore whether architectures with parallel layers outperform similar non-parallel architectures. We analyze the 100 top-performing architectures of each dataset and calculate the percentage of architectures with parallel layers.
The results, presented in Appendix C, show that this type of architecture constitutes on average 62% of the top-performing architectures.

To determine whether the benefit of applying parallel layers is significant, we randomly chose one of our datasets (Ailerons) and identified the 100 top-performing architectures with parallel layers. From this set we randomly sampled 10 architectures and compared the performance of each of them to those of all of their possible serial counterparts, created by iteratively removing all but one of the different parallel layers. Our results, presented in Table 4, show that architectures with parallel layers significantly outperform all of their serial counterparts.

Considering the same sample of parallel architectures, we also analyzed whether architecture performance can be improved by adding batch normalization before, after, or both before and after each activation function. As shown by the results in Table 4, we did not find evidence that the addition of batch normalization improves the performance of architectures with parallel layers. We find this fact surprising and intend to explore it further in future work. An example of one of the parallel architectures is presented in Figure 4 in Appendix C.

Finally, we also analyzed the component composition of the 100 top-performing architectures for each dataset. The most interesting conclusion of this analysis is that relatively shallow architectures (around 4 fully-connected layers) seem to yield the best average performance for all datasets. The full analysis of the architecture components is presented in Table 12 in Appendix C.

Table 4: Comparison of the performance of parallel architectures to their serial counterparts.

                     Parallel        Serial     Parallel with        Parallel with       Parallel with
                     architectures   versions   batchnorm (before)   batchnorm (after)   batchnorm (before & after)
Average              87.6%           71.8%      70.4%                77.4%               76.5%
Standard deviation   0.39%           7.8%       9.9%                 4.2%                3.6%

6.3 EVALUATING THE META-LEARNING APPROACH

We analyze the performance of our meta-learning model as a classifier that ranks architectures based on their performance. For these experiments, we use the following settings:

- We define the top-performing 5% of architectures of each dataset as "good" and label the remaining ones as "bad". We use this setting due to the large variance in the performance of the DNN architectures on the different datasets (see Appendix D for full details). We also intend to experiment with other labeling methods in future work.
- We use the precision@X measure as the evaluation metric. We calculate it by ranking all architectures according to the confidence of the meta-classifier (i.e. the classifier trained on the meta-features) in them being "good". Then, for the X top-ranking architectures, we calculate the actual percentage of "good" architectures among them.
- We conduct a separate evaluation on the training-based meta-features and on the dataset-based and topological meta-features. Since the training-based features are more computationally expensive to compute, we find it interesting to compare their performance to that of the other types of meta-features. In our experiments we denote the full set as ML_full, the training-based meta-features as ML_train and the topological and dataset-based meta-features as ML_data+top.
- We use the Random Forest algorithm for the training of the meta-model.

The results of our evaluation are presented in Table 5; a sketch of the precision@X computation is given below.
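To make the metric concrete, here is a minimal sketch of precision@X, assuming (hypothetically) a dict scores holding the meta-classifier's confidence that each architecture is "good" and a dict is_good holding the ground-truth labels:

def precision_at_x(scores, is_good, x):
    # Rank architectures by the meta-classifier's confidence, best first.
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Fraction of truly "good" architectures among the top-x ranked.
    return sum(is_good[arch] for arch in ranked[:x]) / x

# e.g. precision_at_x(scores, is_good, 20) gives the precision@20 column.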
We show that we are able to identify multiple architectures in the top-ranking spots at a much higher rate than their share of the population. It is also clear that the joint set of all meta-features outperforms both of the examined subsets.

Next we conduct random sampling over architectures, and compare the performance of the sampled architectures to those obtained by ranking all architectures using the proposed meta-classifier. Our goal is to determine the probability that N randomly-sampled architectures will contain at least one architecture that outperforms all of the top M items ranked by the meta-classifier. We conduct the experiment as follows: for each dataset, we randomly sample a fixed number of architectures and identify the one with the highest performance among those sampled. We then check whether this architecture outperforms all those in the ranked list provided by the meta-learning model. We repeat this process 50,000 times for each dataset and calculate the probability of this scenario. The results, presented in Table 6, show that our model outperforms random sampling for all datasets, often by a large margin. However, further experimentation is required to fully determine the effectiveness of the meta-learning approach.

Finally, we analyze the results in order to determine the effectiveness of the different meta-features used by our model. The analysis was carried out by running LASSO logistic regression and analyzing the weights assigned to the various meta-features. Based on this analysis we reach the following conclusions:

- The dataset-based meta-features had the smallest contribution to the performance. While this is somewhat surprising given the fact that DNNs perform very differently on datasets with different characteristics, we conclude that the model focuses on the way in which the architecture is trained on the data (i.e. weights and activations).
- The topological meta-features with the largest contribution were those modeling the depth of the network and the number of parallel layers, and those counting the number of various components.
- The ranking model uses a large number of training-based meta-features, of all the types described in Appendix A. However, among the training-based meta-features the model includes only weight- and activation-based ones; the bias-based meta-features are almost never used.

Table 5: The evaluation results of different approaches using the precision@X metric. full, train and d+t denote ML_full (all meta-features), ML_train (training-based meta-features only) and ML_data+top (dataset-based and topological meta-features), respectively.
Best results are in bold.

Dataset          precision@5           precision@10          precision@20          precision@50
                 full   train  d+t     full   train  d+t     full   train  d+t     full   train  d+t
Contraceptive    20%    20%    0%      20%    10%    20%     20%    5%     15%     20%    10%    8%
Seismic Bumps    20%    40%    20%     20%    20%    10%     25%    20%    15%     12%    16%    12%
Page Blocks      40%    20%    0%      30%    20%    0%      20%    15%    0%      16%    14%    14%
Wind             40%    0%     40%     20%    20%    30%     10%    15%    25%     12%    16%    20%
Puma32           20%    20%    0%      10%    20%    20%     15%    20%    10%     16%    10%    10%
CPU Act          40%    20%    20%     30%    20%    20%     30%    15%    10%     22%    12%    16%
Delta Elevators  20%    20%    20%     20%    20%    10%     15%    25%    20%     20%    20%    12%
Mammography      20%    0%     0%      20%    20%    0%      20%    15%    5%      20%    10%    12%
Ailerons         40%    40%    40%     30%    30%    20%     30%    20%    20%     28%    22%    26%
Bank Marketing   20%    0%     20%     30%    10%    20%     20%    10%    10%     10%    14%    10%
German Credit    40%    20%    20%     40%    10%    10%     20%    10%    10%     14%    10%    10%
Space            20%    0%     0%      10%    10%    0%      15%    10%    10%     18%    14%    10%
Cardiography     20%    0%     20%     20%    10%    10%     20%    15%    20%     18%    14%    16%

7 CONCLUSIONS AND FUTURE WORK

In this study we have explored several aspects of applying DNNs to supervised classification problems. Our results demonstrate the difficulty of transferring DNN architectures that are effective in one domain to another. We also systematically compare the performance of architectures with parallel layers to those of similar linear architectures and demonstrate that the former outperform the latter in many cases. We present a novel approach for predicting the performance of a DNN architecture by analyzing its topology and the changes in its weights, biases and activation function values during the early phases of training. We aim for this work to lay the foundation for a better understanding of the DNN architecture space.

For future work we consider several directions. First, we plan to add additional components to those currently used in our automatic architecture generation method in order to enable further exploration. In addition, we will seek to enhance our approach by adding automatic parameter tuning methods. This will enable us to efficiently explore multiple configurations and possibly identify higher-performing architectures. We are also considering the use of an exploration/exploitation scheme along the lines presented in Li et al. (2016) to enable us to efficiently explore larger architecture spaces.

Table 6: The probabilities of finding an architecture that outperforms all those in the ranked list when randomly sampling a set of architectures. The size of the list ranked by our algorithm is always 10 (i.e. for sample size 20 we test a set two times the size of the ranked list).

Dataset          Sample size 10   Sample size 20
Contraceptive    1.7%             3.2%
Seismic bumps    11.5%            22%
Page Blocks      14.8%            27.7%
Wind             24.3%            41.5%
Puma 32          20.7%            36.5%
CPU act          3.4%             6.7%
Delta elevators  33.3%            55.5%
Mammography      7.5%             14.3%
Ailerons         13.9%            25.5%
Bank marketing   5.6%             10.4%
German Credit    11.9%            22.9%
Space            20.2%            36.3%
Cardiography     5.6%             11.2%

Another approach we plan to explore is to make the search over network architectures a fully-differentiable problem, by encoding the problem using only mechanisms that enable such a search. As an example, imagine that we want to decide the best number of internal hidden layers to use in a multi-layer fully-connected neural net. For this, we could create multiple parallel stacks of layers with the same input at the bottom (e.g. the features for each data point) and the same kind of output at the end (e.g. probabilities over the possible classes), and then use a softmax to take a weighted sum of the outputs from each of the parallel stacks.
By using a penalty on the negative entropy of this weighted sum, and increasing the penalty over time, the network should learn to produce the output using only one of the parallel stacks, which we can then use at inference time. We can also train multiple models simultaneously using this method, and introduce additional penalties to ensure that the multiple models explore different architectures during training, enabling a more diverse search.
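The following PyTorch sketch illustrates this idea under stated assumptions: three parallel fully-connected stacks of different depths are mixed by a learned softmax, and an entropy term on the mixing weights can be added to the task loss (with a coefficient increased over time) so that the mixture collapses onto a single stack. All sizes, the stack depths and the penalty schedule are illustrative; this is a sketch of the idea, not an implementation from the paper.

import torch
import torch.nn as nn

class ParallelStacks(nn.Module):
    def __init__(self, in_dim, out_dim, depths=(1, 2, 3), hidden=64):
        super().__init__()
        self.stacks = nn.ModuleList()
        for d in depths:
            layers, width = [], in_dim
            for _ in range(d):
                layers += [nn.Linear(width, hidden), nn.ReLU()]
                width = hidden
            layers.append(nn.Linear(width, out_dim))
            self.stacks.append(nn.Sequential(*layers))
        self.alpha = nn.Parameter(torch.zeros(len(depths)))   # mixing logits

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)                  # stack weights
        outs = torch.stack([s(x) for s in self.stacks])       # (K, B, out_dim)
        mixed = (w.view(-1, 1, 1) * outs).sum(dim=0)          # weighted sum
        entropy = -(w * torch.log(w + 1e-8)).sum()
        return mixed, entropy

# Training step (assumed schedule): loss = task_loss(mixed, y) + lam * entropy,
# with `lam` increased over time so `w` approaches a one-hot stack choice.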
Bys4eCH4x
Not convincing
4: Ok but not good enough - rejection
The topic is very interesting, but the paper is not convincing. Specifically, the experimental section is weak. The study should include datasets that are familiar to the community as well as the ones "that are not often addressed by deep learning". The comparison to other approaches is not comprehensive.
3: The reviewer is fairly confident that the evaluation is correct
rJM69B5xx
ICLR.cc/2017/conference
2017
Finding a Jack-of-All-Trades: An Examination of Semi-supervised Learning in Reading Comprehension
["Rudolf Kadlec", "Ond\u0159ej Bajgar", "Peter Hrincar", "Jan Kleindienst"]
Deep learning has proven useful on many NLP tasks, including reading comprehension. However, it requires large amounts of training data, which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model isn't shown any training examples from the target domain; however, the results are promising if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings.
["Natural language processing", "Semi-Supervised Learning", "Deep learning", "Transfer Learning"]
ABSTRACTDeep learning has proven useful on many NLP tasks including reading comprehen-sion. However, it requires large amounts of training data which are not available insome domains of application. Hence we examine the possibility of using data-richdomains to pre-train models and then apply them in domains where training dataare harder to get. Specifically, we train a neural-network-based model on twocontext-question-answer datasets, the BookTest and CNN/Daily Mail, and wemonitor transfer to subsets of bAbI, a set of artificial tasks designed to test specificreasoning abilities, and of SQuAD, a question-answering dataset which is muchcloser to real-world applications. Our experiments show very limited transfer ifthe model is not shown any training examples from the target domain howeverthe results are encouraging if the model is shown at least a few target-domainexamples. Furthermore we show that the effect of pre-training is not limited toword embeddings.1 I NTRODUCTIONMachine intelligence has had some notable successes, however often in narrow domains which aresometimes of little practical use to humans – for instance games like chess (Campbell et al., 2002)or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assisthumans in a wide range of settings, we would want it to have a much larger set of skills – amongthem would be an ability to understand human language, to perform common-sense reasoning and tobe able to generalize its abilities to new situations like humans do.If we want to achieve this goal through Machine Learning, we need data to learn from. A lot of dataif the task at hand is complex – which is the case for many useful tasks. One way to achieve wideapplicability would be to provide training data for each specific task we would like the machine toperform. However it is unrealistic to obtain a sufficient amount of training data for some domains – itmay for instance require expensive human annotation or all domains of application may be difficultto predict in advance – while the amount of training data in other domains is practically unlimited,(e.g. in language modelling or Cloze-style question answering).The way to bridge this gap – and to achieve the aforementioned adaptability – is transfer learning (Pan& Yang, 2010) and closely related semi-supervised learning (Zhu & Goldberg, 2009) which allowthe system to acquire a set of skills on domains where data are abundant and then use these skills tosucceed on previously unseen domains. Despite how important generalization is for general AI, a lotof research keeps focusing on solving narrow tasks.In this paper we would like to examine transfer of learnt skills and knowledge within the domain of textcomprehension, a field that has lately attracted a lot of attention within the NLP community (Hermannet al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoniet al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b;a;These authors contributed equally to this work.1Under review as a conference paper at ICLR 2017Li et al., 2016; Shen et al., 2016). 
Specifically, we would like to address the following researchquestions:1.Whether we could train models on natural-language tasks where data are abundant andtransfer the learnt skills to tasks where in-domain training data may be difficult to obtain.We will first look into what reasoning abilities a model learns from two large-scale reading-comprehension datasets using artificial tasks, and then check whether it can transfer its skillsto real world tasks. Spoiler: both these transfers are very poor if we allow no training at allon the target task.2.Whether pre-training on large-scale datasets does help if we allow the model to train on asmall sample of examples from the target tasks. Here the results are much more positive.3.Finally we examine whether the benefits of pre-training are concentrated in any particularpart of the model - namely the word-embedding part or the context encoder (the reasoningpart). It turns out that pre-training is useful for both components.Although our results do not improve current state of the art in any of the studied tasks, they show aclear positive effect of large-dataset pre-training on the performance of our baseline machine-learningmodel. Previous studies of transfer learning and semi-supervised learning in NLP focused on textclassification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011;Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in readingcomprehension, and we hope it will stimulate further work in this important area.We will first briefly introduce the datasets we will be using on the pre-training and target sides,then our baseline model and afterwards in turn describe the method and results of each of the threeexperiments.2 D ATASETS2.1 P RE-TRAINING DATASETSWe have mentioned that for the model pre-training we would want to use a task where training dataare abundant. An example of such task is context-dependent cloze-style-question answering since thetraining data for this task can be generated automatically from a suitable corpus. We will use twosuch pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/DailyMail (CNN/DM) news dataset (Hermann et al., 2015).The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank ina sentence) the answer to which needs to be inferred from a context document provided with thequestion.2.1.1 B OOK TESTIn the BookTest dataset, the context document is formed from 20 consecutive sentences from a book.The question is then formed by omitting a common noun or a named entity from the subsequent 21stsentence. Among datasets of this kind, the BookTest is among the largest with more than 14 milliontraining examples coming from 3555 copyright-free books avalable thanks to Project Gutenberg.2.1.2 CNN/D AILY MAILIn the CNN/DM dataset the context document is formed from a news article while the cloze-stylequestion is formed by removing a named entity from one of the short summary sentences which oftenappear at the top of the article.To stop the model from using world knowledge from outside the context article (and hence truly testthe comprehension of the article), all named entities were replaced by anonymous tags, which arefurther shuffled for each example. 
This may make the comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier.

2.2 TARGET DATASETS

2.2.1 BABI

The first target dataset is the bAbI tasks (Weston et al., 2016) – a set of artificial tasks, each of which is designed to test a specific kind of reasoning. This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets.

For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfil this requirement, and added Task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. "Where is John?" to "John is in the XXXXX.").

For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset, i.e. we replaced all names of characters, and also all words that can appear as answers for the given task, by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions. Full details about these alterations can be found in Appendix A.

2.2.2 SQUAD

Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task may already be useful in the real world. Although cloze-style questions have the huge advantage of being automatically generatable from a suitable corpus – the path taken by CNN/DM and the BookTest – in practice humans would use a proper question, not its cloze-style substitute. This brings us to the need for transfer from the data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation.

The SQuAD dataset is a great target dataset to use for this. As opposed to the bAbI tasks, the goal of this dataset is a problem whose solution would be useful to humans: answering natural questions based on a natural-language encyclopedic knowledge base.

For our experiments we selected only the subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine learning model. This way we extracted 28,346 training examples out of the original 100,000 examples and 3,233 development examples out of 10,570.

3 MACHINE LEARNING MODEL: AS READER

We perform our experiments using the Attention Sum Reader (AS Reader) (Kadlec et al., 2016b) model. The AS Reader is simple to implement while achieving strong performance on several text comprehension tasks (Kadlec et al., 2016b; Bajgar et al., 2016; Chu et al., 2016). Since the AS Reader is a building block of many recent text-comprehension models (Trischler et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016b;a; Shen et al., 2016; Munkhdalai & Yu, 2016), it is a good representative of current research in this field.

A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014).
A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word appears. We can also understand it as representing the set of questions to which this word may be an answer.

Similarly, the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding.

The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer. For a more detailed description of the model, including equations, check Kadlec et al. (2016b).

Figure 1: Structure of the AS Reader model: word embeddings (a look-up matrix) feed a bidirectional-GRU document encoder and a bidirectional-GRU question encoder, whose outputs are combined to score each word of the document as a candidate answer.
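As an illustration of the attention-sum step described above, here is a minimal numpy sketch (the inputs and dimensions are assumed for illustration; this is not the authors' code):

import numpy as np

def attention_sum_answer(context_emb, question_emb, candidate_positions):
    # context_emb: (T, 2h) contextual embedding of each document word
    # question_emb: (2h,) concatenation of the question GRUs' final states
    # candidate_positions: dict mapping each candidate answer to the list
    #   of positions where it occurs in the document (hypothetical input)
    scores = context_emb @ question_emb          # dot-product attention
    att = np.exp(scores - scores.max())
    att /= att.sum()                             # softmax over document words
    # Sum attention over all occurrences of each candidate.
    totals = {c: att[pos].sum() for c, pos in candidate_positions.items()}
    return max(totals, key=totals.get)           # most-attended candidate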
4 EXPERIMENTS: TRANSFER LEARNING IN TEXT COMPREHENSION

Now let us turn in more detail to the three kinds of experiments that we performed.

4.1 PRE-TRAINED WITHOUT TARGET ADJUSTMENT

In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we cannot expect a model trained on them to answer natural ?-style questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks.

4.1.1 METHOD

First we tested how the AS Reader architecture (Kadlec et al., 2016b) handles the tasks if trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks.

In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with an accuracy of at least 95%1 (results for each task can be found in Table 4 in Appendix C). Hence, given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Knowing that the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets.

The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks.

1 It should be noted that several machine learning models perform better than the AS Reader in the 10k weakly supervised setting, e.g. (Sukhbaatar et al., 2015; Xiong et al., 2016; Graves et al., 2016); however, they often need significant fine-tuning, whereas we trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks; however, this goes directly against the idea of generality that is at the heart of this work. For comparison with the state of the art we include results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks.

4.1.2 RESULTS

Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI and achieve much lower accuracy than the models trained directly on each individual bAbI task. However, there is some transfer between the tasks, since the AS Reader trained on either the BookTest or CNN/DM outperforms a random baseline2 and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for the task.

Table 1: The mean performance across 11 bAbI tasks. The first two columns show a random baseline2 and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task. The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C.

Model                 Rnd.          Most freq. cand.   AS Reader       AS Reader      AS Reader   DMN+
Train dataset         not trained   bAbI 10k           BookTest 14M    CNN/DM 1.2M    bAbI 10k    bAbI 10k
bAbI mean (11 tasks)  6.1           29.9               34.8            38.1           92.7        95.7

The results also show that the models trained on CNN/DM perform somewhat better on most tasks than the BookTest models. This may be due to the fact that the bAbI tasks generally require the model to summarize information from the context document, which is also what the CNN/DM dataset tests. On the other hand, the BookTest requires the prediction of a possible continuation of a story, where the required kind of reasoning is much less clear but certainly different from pure summarization. Another explanation for the better performance of the CNN/DM models might be that they solve a slightly simpler task, since the candidate answers were already pre-selected in the entity anonymization step.

Readers interested in how the training-dataset size affects this kind of transfer can check (Kadlec et al., 2016a), where we show that the target-task performance is a bit better if we use the large BookTest as opposed to its smaller subset, the Children's Book Test (CBT) (Hill et al., 2015).

The conclusion from this experiment is that the skills learned from the two large-scale datasets generalize surprisingly poorly to even simple toy tasks. This may make us ask whether most teams' focus on solving narrow tasks is truly beneficial, if the skills learnt on these tasks are hard to apply elsewhere. However, it also brings us to our next experiment, where we try to provide some help to the struggling pre-trained models.

4.2 PRE-TRAINED WITH TARGET ADJUSTMENT

After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment. For this experiment we again use the bAbI tasks; however, we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering.

The results presented in this and the following section are based on training 3,701 model instances.

4.2.1 METHOD

Common to bAbI and SQuAD datasets.
In this experiment we started with a pre-trained model which we used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes, ranging from a single example to thousands. We tried training four different pre-trained models and also, for comparison, four randomly-initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model couple was run on 4 different data samples of each size, which were randomly drawn from the training dataset of the task to account for variations between these random samples – which may be substantial given the small sample size.3

2 The random baseline selects uniformly at random among all unique words contained in the context document.
3 We are planning to release the split training datasets soon.

Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation model's test accuracy, for BookTest and CNN/DM models, pre-trained vs. randomly initialized, over training-set sizes from 1 to 5,000 examples. Sub-figure (b) shows the test accuracy on SQuAD of each model we trained (the points), with lines joining the accuracies of the best-validation models for each training size (fully pre-trained, pre-trained embeddings, pre-trained encoders and randomly initialized models, over training-set sizes from 1 to 28,127 examples).

bAbI. For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task's original 10k training data.4 We perform the experiment with models pre-trained on the BookTest and also on CNN/DM.

4 The other models trained on the full 10k dataset usually use 1,000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016); however, we wanted to focus on the low-data regime and thus used 10 times fewer examples.

SQuAD subset. In the SQuAD experiment, we trained the model on a subset of the original training dataset where answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way.
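A minimal sketch of this target-adjustment protocol follows; sample_targets, train_epoch and evaluate are hypothetical helpers, and only the early-stopping logic on the small validation set is the point being illustrated.

import copy

def target_adjust(pretrained_model, target_task, n_examples, max_epochs=50):
    model = copy.deepcopy(pretrained_model)        # keep pre-trained weights
    train = sample_targets(target_task, n_examples)   # hypothetical helper
    best_val = best_test = 0.0
    for _ in range(max_epochs):
        train_epoch(model, train)                  # hypothetical helper
        val = evaluate(model, target_task.valid)   # 100 held-out examples
        if val > best_val:
            best_val = val
            best_test = evaluate(model, target_task.test)
    return best_test   # test accuracy at the best-validation epoch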
This experiment was performed only with the models pre-trained on BookTest.

4.2.2 RESULTS

The results of these experiments are summarized in Figures 2 and 3.

Figure 3: Examples of 3 bAbI tasks (Tasks 1, 4 and 5) where pre-training seems to help: test accuracy vs. number of training examples for BookTest and CNN/DM models, pre-trained vs. randomly initialized. Note that the task may be easier for the CNN/DM models due to answer anonymization, which restricts the choice of possible answers.

bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results for both the BookTest and CNN/DM experiments confirm a positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4).

Note that the CNN/DM models cannot be directly compared to the BookTest results due to entity anonymization, which seems to simplify the task when the model is trained on smaller datasets.

Since our evaluation methodology with different training set sizes is novel, we can compare our result only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset.
MemN2N is the only weakly supervised model that reports accuracy when trained on less than 10k examples. MemN2N achieves an average accuracy of 93.2%5 on the eleven selected tasks. This is substantially better than both our randomly initialized baseline (78.0%) and the BookTest-pre-trained model (79.5%); however, our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network.
SQuAD subset. The results of the SQuAD experiment also confirm a positive effect of pre-training (see Sub-figure 2b); for now compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model – the meaning of the remaining two lines shall become clear in the next section.
More detailed statistics about the results of this experiment can be found in Appendix D.
We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance, the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves a validation accuracy of 74.9% on this task, which is better than our randomly initialized (35.4%) and pre-trained (51.6%) models6. However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model.
4.3 P ARTIALLY PRE -TRAINED MODEL
Since our previous experiment confirmed a positive effect of pre-training if followed by target-domain adjustment, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment.
4.3.1 M ETHOD
Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and question (see Figure 1). Therefore a natural question was what the contribution of each of these parts is.
To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders. (A code sketch of this selective weight transfer follows.)
bAbI. For this experiment we selected only a subset of tasks with a training set of 100 examples where there was a significant difference in accuracy between the randomly-initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment, that is, we report the accuracy of the best-validation model averaged over 4 training splits.
SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology.
5MemN2N trained on each single task with PE LS RN features, see (Sukhbaatar et al., 2015) for details.
6We would like to thank Yu et al. (2016) for training their system on our dataset.
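As an illustration of the Section 4.3.1 variants, the following is a minimal PyTorch sketch of the selective weight transfer; TinyReader and its attribute names are our stand-ins for the AS Reader's two components, not the authors' implementation.

import torch.nn as nn

class TinyReader(nn.Module):
    """Toy stand-in for the AS Reader: an embedding look-up plus
    bidirectional GRU encoders, the two components whose transfer is
    compared in Table 2."""
    def __init__(self, vocab=1000, emb=64, hid=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab, emb)
        self.doc_gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
        self.q_gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)

def make_variant(pretrained: TinyReader, variant: str) -> TinyReader:
    """Copy only the chosen component(s) from a trained model into a
    freshly (randomly) initialized one."""
    model = TinyReader()  # random initialization
    if variant == "pretrained_embeddings":
        model.embedding.load_state_dict(pretrained.embedding.state_dict())
    elif variant == "pretrained_encoders":
        model.doc_gru.load_state_dict(pretrained.doc_gru.state_dict())
        model.q_gru.load_state_dict(pretrained.q_gru.state_dict())
    elif variant == "fully_pretrained":
        model.load_state_dict(pretrained.state_dict())
    return model  # "randomly_initialized": nothing is copied

Copying one submodule's state_dict while leaving the rest randomly initialized is, in essence, all that distinguishes the four variants compared in Table 2.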
Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).

Model variant             bAbI task (100 ex.)             SQuAD (28k ex.)
                          1.     5.     11.    14.
Random init               53%    66%    71%    33%        31%
Pre-trained encoders      +6     +25    +4     +2         +4
Pre-trained embeddings    +17    +6     +8     +8         +10
Pre-trained full          +34    +22    +14    +13        +17
Pre-trained word2vec      -2     +5     +1     -1         +5

4.3.2 R ESULTS
bAbI. Table 2 shows the improvement of the pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy.
SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best.
The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants.
5 C ONCLUSION
Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models aren't provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than a randomly initialized model if they are shown at least a few training examples from the target domain.
The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings.
The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step to explore this direction.
Sy8NJOerx
Interesting; needs to improve clarity
6: Marginally above acceptance threshold
First I would like to apologize for the delay in reviewing. Summary: This work explores several experiments to transfer training of a specific model of reading comprehension (AS Reader) on an artificial and well-populated dataset in order to perform on another target dataset. Here is what I understand are their several experiments in transfer learning, but I am not 100% sure. 1. The model is trained on the big artificial dataset and tested on the small target datasets (section 4.1). 2. The model is pre-trained on the big artificial dataset like before, then fine-tuned on a few examples from the target dataset and tested on the remaining target examples. Several such models are trained using different sub-sets of fine-tuning examples. The results are tested against the performance of randomly initialized then fine-tuned models (section 4.2). 3. The model is pre-trained on the big artificial dataset like before. The model is made of an embedding component and an encoder component. Alternatively, each component is reset to a random initialization, to test the importance of the pre-training in each component. Then the model is fine-tuned on a few examples from the target dataset and tested on the remaining target examples (section 4.3). I think what makes things difficult to follow is the fact that the test set is composed of several sub-tasks, and sometimes what is reported is the mean performance across the tasks, sometimes the performance on a few tasks. Sometimes what we see is the mean performance of several models? You should also report standard deviations. Could you better explain what you mean by best validation? Interesting and unpretentious work. The clarity of the presentation could be improved, maybe by simplifying the experimental setup? The interesting conclusion I think is reported at the end of section 4.1, when the nuanced differences between the datasets are exposed. Minor: unexplained acronyms: GRU, BT, CBT. "benfits" p. 2, "subsubset" p. 6
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJM69B5xx
ICLR.cc/2017/conference
2017
Finding a Jack-of-All-Trades: An Examination of Semi-supervised Learning in Reading Comprehension
["Rudolf Kadlec", "Ond\u0159ej Bajgar", "Peter Hrincar", "Jan Kleindienst"]
Deep learning has proven useful on many NLP tasks including reading comprehension. However, it requires a lot of training data which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model isn’t shown any training examples from the target domain; however, the results are promising if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings.
["Natural language processing", "Semi-Supervised Learning", "Deep learning", "Transfer Learning"]
ABSTRACT
Deep learning has proven useful on many NLP tasks including reading comprehension. However, it requires large amounts of training data which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model is not shown any training examples from the target domain; however, the results are encouraging if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings.
1 I NTRODUCTION
Machine intelligence has had some notable successes, however often in narrow domains which are sometimes of little practical use to humans – for instance games like chess (Campbell et al., 2002) or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assist humans in a wide range of settings, we would want it to have a much larger set of skills – among them would be an ability to understand human language, to perform common-sense reasoning and to be able to generalize its abilities to new situations like humans do.
If we want to achieve this goal through Machine Learning, we need data to learn from. A lot of data if the task at hand is complex – which is the case for many useful tasks. One way to achieve wide applicability would be to provide training data for each specific task we would like the machine to perform. However, it is unrealistic to obtain a sufficient amount of training data for some domains – it may for instance require expensive human annotation or all domains of application may be difficult to predict in advance – while the amount of training data in other domains is practically unlimited (e.g. in language modelling or Cloze-style question answering).
The way to bridge this gap – and to achieve the aforementioned adaptability – is transfer learning (Pan & Yang, 2010) and the closely related semi-supervised learning (Zhu & Goldberg, 2009), which allow the system to acquire a set of skills on domains where data are abundant and then use these skills to succeed on previously unseen domains. Despite how important generalization is for general AI, a lot of research keeps focusing on solving narrow tasks.
In this paper we would like to examine transfer of learnt skills and knowledge within the domain of text comprehension, a field that has lately attracted a lot of attention within the NLP community (Hermann et al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b;a; Li et al., 2016; Shen et al., 2016).
These authors contributed equally to this work.
Specifically, we would like to address the following research questions:
1. Whether we could train models on natural-language tasks where data are abundant and transfer the learnt skills to tasks where in-domain training data may be difficult to obtain. We will first look into what reasoning abilities a model learns from two large-scale reading-comprehension datasets using artificial tasks, and then check whether it can transfer its skills to real-world tasks. Spoiler: both these transfers are very poor if we allow no training at all on the target task.
2. Whether pre-training on large-scale datasets does help if we allow the model to train on a small sample of examples from the target tasks. Here the results are much more positive.
3. Finally we examine whether the benefits of pre-training are concentrated in any particular part of the model – namely the word-embedding part or the context encoder (the reasoning part). It turns out that pre-training is useful for both components.
Although our results do not improve the current state of the art in any of the studied tasks, they show a clear positive effect of large-dataset pre-training on the performance of our baseline machine-learning model. Previous studies of transfer learning and semi-supervised learning in NLP focused on text classification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011; Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in reading comprehension, and we hope it will stimulate further work in this important area.
We will first briefly introduce the datasets we will be using on the pre-training and target sides, then our baseline model, and afterwards in turn describe the method and results of each of the three experiments.
2 D ATASETS
2.1 P RE-TRAINING DATASETS
We have mentioned that for the model pre-training we would want to use a task where training data are abundant. An example of such a task is context-dependent cloze-style question answering, since the training data for this task can be generated automatically from a suitable corpus. We will use two such pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/Daily Mail (CNN/DM) news dataset (Hermann et al., 2015).
The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank in a sentence) the answer to which needs to be inferred from a context document provided with the question.
2.1.1 B OOK TEST
In the BookTest dataset, the context document is formed from 20 consecutive sentences from a book. The question is then formed by omitting a common noun or a named entity from the subsequent 21st sentence. Among datasets of this kind, the BookTest is among the largest, with more than 14 million training examples coming from 3555 copyright-free books available thanks to Project Gutenberg. (A code sketch of this construction follows.)
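To make the construction concrete, here is a minimal Python sketch of forming one BookTest-style example under the description above; detecting the removable words (common nouns and named entities), e.g. with a POS tagger or NER, is assumed to happen elsewhere, and this is not the authors' actual pipeline.

import random

def make_booktest_example(sentences, i, candidates, rng=None):
    """Form one cloze example: 20 consecutive sentences give the context,
    and the 21st sentence is turned into a question by blanking out one
    eligible word.  `candidates` is the set of words eligible for removal
    (common nouns / named entities), produced by some external tagger."""
    rng = rng or random.Random(0)
    if i + 20 >= len(sentences):
        return None
    context = sentences[i : i + 20]
    target = sentences[i + 20].split()
    positions = [j for j, w in enumerate(target) if w in candidates]
    if not positions:
        return None  # nothing eligible to blank out in the 21st sentence
    j = rng.choice(positions)
    answer, target[j] = target[j], "XXXXX"
    return {"context": context, "question": " ".join(target), "answer": answer}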
2.1.2 CNN/D AILY MAIL
In the CNN/DM dataset the context document is formed from a news article, while the cloze-style question is formed by removing a named entity from one of the short summary sentences which often appear at the top of the article.
To stop the model from using world knowledge from outside the context article (and hence truly test the comprehension of the article), all named entities were replaced by anonymous tags, which are further shuffled for each example. This may make the comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier.
2.2 T ARGET DATASETS
2.2.1 BABI
The first target dataset are the bAbI tasks (Weston et al., 2016) – a set of artificial tasks each of which is designed to test a specific kind of reasoning. This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets.
For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfill this requirement, and added Task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. "Where is John?" to "John is in the XXXXX.").
For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset – i.e. we replaced all names of characters and also all words that can appear as answers for the given task by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions. Full details about these alterations can be found in Appendix A.
2.2.2 SQ UAD
Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task may already be useful in the real world. Although cloze-style questions have the huge advantage of being automatically generatable from a suitable corpus – the path taken by CNN/DM and the BookTest – in practice humans would use a proper question, not its cloze-style substitute. This brings us to the need of transfer from the data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation.
The SQuAD dataset is a great target dataset to use for this. As opposed to the bAbI tasks, the goal of this dataset is actually a problem whose solving would be useful to humans – answering natural questions based on a natural-language encyclopedic knowledge base.
For our experiments we selected only the subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine learning model. This way we extracted 28,346 training examples out of the original 100,000 examples and 3,233 development examples out of 10,570. (A code sketch of this filtering follows.)
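A minimal sketch of the single-word-answer filtering described above; it assumes examples in the official SQuAD JSON shape (an 'answers' list of {'text': ...} dicts) and checks only the first listed answer, which may differ in detail from the authors' exact criterion.

def filter_single_word_answers(examples):
    """Keep only examples whose answer is a single word, mirroring the
    subset construction described above."""
    kept = []
    for ex in examples:
        texts = [a["text"].strip() for a in ex["answers"]]
        if texts and len(texts[0].split()) == 1:
            kept.append(ex)
    return kept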
3 M ACHINE LEARNING MODEL : AS R EADER
We perform our experiments using the Attention Sum Reader (AS Reader) (Kadlec et al., 2016b) model. The AS Reader is simple to implement while it achieves strong performance on several text comprehension tasks (Kadlec et al., 2016b; Bajgar et al., 2016; Chu et al., 2016). Since the AS Reader is a building block of many recent text-comprehension models (Trischler et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016b;a; Shen et al., 2016; Munkhdalai & Yu, 2016), it is a good representative of current research in this field.
A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014). A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing. We can also understand it as representing the set of questions to which this word may be an answer.
Similarly, the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding.
The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer. For a more detailed description of the model including equations check Kadlec et al. (2016b). (A code sketch of this scoring mechanism follows after Figure 1.)
[Figure 1 omitted: diagram of the AS Reader – word embeddings (look-up matrix), bidirectional GRU document and question encoders, and the resulting probability P(Obama | question, document) for an example document mentioning Obama, Putin and Prague.]
Figure 1: Structure of the AS Reader model.
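The scoring mechanism just described fits in a few lines. The following single-example PyTorch sketch is our simplification, not the authors' implementation; in particular it scores every vocabulary word instead of a pre-selected candidate list.

import torch
import torch.nn as nn

class ASReaderSketch(nn.Module):
    """Minimal single-example sketch of the AS Reader scoring mechanism."""
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.doc_gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.q_gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, doc_ids, q_ids):          # shapes: (1, doc_len), (1, q_len)
        # Contextual embeddings: concatenated forward/backward states per word.
        ctx, _ = self.doc_gru(self.emb(doc_ids))            # (1, doc_len, 2*hid)
        # Question embedding: concatenated final states of both directions.
        _, h = self.q_gru(self.emb(q_ids))                  # (2, 1, hid)
        q = torch.cat([h[0], h[1]], dim=-1)                 # (1, 2*hid)
        # Dot-product attention over document positions, softmax-normalized.
        att = torch.softmax(ctx.matmul(q.unsqueeze(-1)).squeeze(-1), dim=-1)
        # Attention sum: accumulate attention over all occurrences of each word id.
        scores = torch.zeros(self.emb.num_embeddings).index_add(0, doc_ids[0], att[0])
        return scores                                       # answer = scores.argmax()

# Illustrative usage with random ids:
# reader = ASReaderSketch()
# scores = reader(torch.randint(0, 1000, (1, 50)), torch.randint(0, 1000, (1, 10)))
# predicted_word_id = scores.argmax().item()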
4 E XPERIMENTS : TRANSFER LEARNING IN TEXT COMPREHENSION
Now let us turn in more detail to the three kinds of experiments that we performed.
4.1 P RE-TRAINED WITHOUT TARGET ADJUSTMENT
In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we can't expect a model trained on them to answer natural ?-style questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks.
4.1.1 M ETHOD
First we tested how the AS Reader architecture (Kadlec et al., 2016b) can handle the tasks if trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks.
In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with an accuracy of at least 95%1 (results for each task can be found in Table 4 in Appendix C). Hence, if given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Now that we know the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets.
The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks.
4.1.2 R ESULTS
Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI and achieve much lower accuracy than the models trained directly on each individual bAbI task. However, there is some transfer between the tasks since the AS Reader trained on either the BookTest or CNN/DM outperforms a random baseline2 and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for this task. (Both baselines are sketched in code after Table 1.)
1It should be noted that there are several machine learning models that perform better than the AS Reader in the 10k weakly supervised setting, e.g. (Sukhbaatar et al., 2015; Xiong et al., 2016; Graves et al., 2016); however, they often need significant fine-tuning. On the other hand, we trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks; however, it goes directly against the idea of generality that is at the heart of this work. For comparison with the state of the art we include results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks.
Table 1: The mean performance across the 11 bAbI tasks. The first two columns show a random baseline2 and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task. The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C.

Model                  Rnd.          Most freq. cand.   AS Reader                                DMN+
Train dataset          not trained   bAbI 10k           BookTest 14M   CNN/DM 1.2M   bAbI 10k   bAbI 10k
bAbI mean (11 tasks)   6.1           29.9               34.8           38.1          92.7       95.7
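For reference, both baselines from Table 1 are simple enough to state in code; this is a sketch under the footnoted definitions, with `training_answers` assumed to be the set of answer words seen in the task's training data.

import random
from collections import Counter

def random_baseline(context_words, rng=None):
    """The random baseline of footnote 2: pick uniformly among the
    unique words of the context document."""
    rng = rng or random.Random(0)
    return rng.choice(sorted(set(context_words)))

def most_frequent_candidate_baseline(context_words, training_answers):
    """The stronger baseline: pick the most frequent context word that
    also occurred as an answer in the task's training data."""
    counts = Counter(w for w in context_words if w in training_answers)
    return counts.most_common(1)[0][0] if counts else None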
The results also show that the models trained on CNN/DM perform somewhat better on most tasks than the BookTest models. This may be due to the fact that bAbI tasks generally require the model to summarize information from the context document, which is also what the CNN/DM dataset is testing. On the other hand, the BookTest requires prediction of a possible continuation of a story, where the required kind of reasoning is much less clear but certainly different from pure summarization. Another explanation for the better performance of the CNN/DM models might be that they solve a slightly simpler task, since the candidate answers were already pre-selected in the entity anonymization step.
Readers interested in how the training-dataset size affects this kind of transfer can check (Kadlec et al., 2016a), where we show that the target-task performance is a bit better if we use the large BookTest as opposed to its smaller subset, the Children's Book Test (CBT) (Hill et al., 2015).
Conclusions from this experiment are that the skills learned from the two large-scale datasets generalize surprisingly poorly to even simple toy tasks. This may make us ask whether most teams' focus on solving narrow tasks is truly beneficial if the skills learnt on these tasks are hard to apply elsewhere. However, it also brings us to our next experiment, where we try to provide some help to the struggling pre-trained models.
4.2 P RE-TRAINED WITH TARGET ADJUSTMENT
After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment. For this experiment we again use the bAbI tasks; however, we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering.
The results presented in this and the following section are based on training 3701 model instances.
4.2.1 M ETHOD
Common to bAbI and SQuAD datasets. In this experiment we started with a pre-trained model which we used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes ranging from a single example to thousands. We tried training four different pre-trained models and also, for comparison, four randomly-initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model couple was run on 4 different data samples of each size which were randomly drawn from the training dataset of the task, to account for variations between these random samples – which may be substantial given the small sample size.3
2The random baseline selects uniformly at random among all unique words contained in the context document.
[Figure 2 omitted: two panels plotting test accuracy against the number of training examples. (a) Mean of best-validation test accuracy for the 11 bAbI tasks, for BookTest and CNN/DM models, pre-trained vs. random. (b) Accuracies on SQuAD for fully pre-trained, pre-trained-embeddings, pre-trained-encoders and randomly initialized models.]
Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation model's test accuracy. (b) shows the test accuracy on SQuAD of each model we trained (the points) and the lines join the accuracies of the best-validation models for each training size.
bAbI. For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task's original 10k training data.4 We perform the experiment with models pre-trained on the BookTest and also on CNN/DM.
SQuAD subset. In the SQuAD experiment, we trained the model on the subset of the original training dataset where answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way. This experiment was performed only with the models pre-trained on BookTest.
4.2.2 R ESULTS
The results of these experiments are summarized in Figures 2 and 3.
[Figure 3 omitted: three panels (Task 1, Task 4, Task 5) plotting test accuracy against the number of training examples for BookTest and CNN/DM models, pre-trained vs. random.]
Figure 3: Example of 3 bAbI tasks where pre-training seems to help. Note that the task may be easier for the CNN/DM models due to answer anonymization which restricts the choice of possible answers.
3We are planning to release the split training datasets soon.
4The other models trained on the full 10k dataset usually use 1000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016); however, we wanted to focus on the low-data regime, thus we used 10 times fewer examples.
bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results of both the BookTest and CNN/DM experiments confirm a positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4).
Note that the CNN/DM models cannot be directly compared to the BookTest results due to the entity anonymization, which seems to simplify the task when the model is trained on smaller datasets.
Since our evaluation methodology with different training set sizes is novel, we can compare our result only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset.
MemN2N is the only weakly supervised model that reports accuracy when trained on less than 10k examples. MemN2N achieves an average accuracy of 93.2%5 on the eleven selected tasks. This is substantially better than both our randomly initialized baseline (78.0%) and the BookTest-pre-trained model (79.5%); however, our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network.
SQuAD subset. The results of the SQuAD experiment also confirm a positive effect of pre-training (see Sub-figure 2b); for now compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model – the meaning of the remaining two lines shall become clear in the next section.
More detailed statistics about the results of this experiment can be found in Appendix D.
We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance, the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves a validation accuracy of 74.9% on this task, which is better than our randomly initialized (35.4%) and pre-trained (51.6%) models6. However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model.
4.3 P ARTIALLY PRE -TRAINED MODEL
Since our previous experiment confirmed a positive effect of pre-training if followed by target-domain adjustment, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment.
4.3.1 M ETHOD
Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and question (see Figure 1). Therefore a natural question was what the contribution of each of these parts is.
To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders.
bAbI. For this experiment we selected only a subset of tasks with a training set of 100 examples where there was a significant difference in accuracy between the randomly-initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment, that is, we report the accuracy of the best-validation model averaged over 4 training splits.
SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology.
5MemN2N trained on each single task with PE LS RN features, see (Sukhbaatar et al., 2015) for details.
6We would like to thank Yu et al. (2016) for training their system on our dataset.
Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).

Model variant             bAbI task (100 ex.)             SQuAD (28k ex.)
                          1.     5.     11.    14.
Random init               53%    66%    71%    33%        31%
Pre-trained encoders      +6     +25    +4     +2         +4
Pre-trained embeddings    +17    +6     +8     +8         +10
Pre-trained full          +34    +22    +14    +13        +17
Pre-trained word2vec      -2     +5     +1     -1         +5

4.3.2 R ESULTS
bAbI. Table 2 shows the improvement of the pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy.
SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best.
The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants.
5 C ONCLUSION
Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models aren't provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than a randomly initialized model if they are shown at least a few training examples from the target domain.
The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings.
The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step to explore this direction.
H1ENTGXNe
needs more thorough analysis
3: Clear rejection
This work investigates the performance of transfer learning from a resource-rich setup (BookTest, CNN/Daily Mail corpora) to low-resource settings (bAbI, SQuAD benchmarks). Experiments show poor improvements in 0-shot learning. However, when the model is exposed to a few training instances, some improvements are observed. The claims made here require a more comprehensive analysis. I criticize the use of bAbI as a low-resource real-world scenario. bAbI is designed as a unit test and is far from representing many natural language phenomena. Thus, the claims related to bAbI can only be weak evidence for questioning transfer learning from high-resource to low-resource in real-world scenarios. I highly recommend using recently proposed real-world scenarios [1,2]. More importantly, the work does not explain why and how we get improvement using transfer learning. They remotely address this by hypothesizing that the knowledge of transfer is not just encoded in embeddings but also in the model. Considering the related work [3], these claims bring a marginal novelty and still "how and why" should be central in this work. [1] http://www.msmarco.org/dataset.aspx [2] https://datasets.maluuba.com/NewsQA [3] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.8551&rep=rep1&type=pdf
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rJM69B5xx
ICLR.cc/2017/conference
2017
Finding a Jack-of-All-Trades: An Examination of Semi-supervised Learning in Reading Comprehension
["Rudolf Kadlec", "Ond\u0159ej Bajgar", "Peter Hrincar", "Jan Kleindienst"]
Deep learning has proven useful on many NLP tasks including reading comprehension. However, it requires a lot of training data which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model isn’t shown any training examples from the target domain; however, the results are promising if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings.
["Natural language processing", "Semi-Supervised Learning", "Deep learning", "Transfer Learning"]
ABSTRACT
Deep learning has proven useful on many NLP tasks including reading comprehension. However, it requires large amounts of training data which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model is not shown any training examples from the target domain; however, the results are encouraging if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings.
1 I NTRODUCTION
Machine intelligence has had some notable successes, however often in narrow domains which are sometimes of little practical use to humans – for instance games like chess (Campbell et al., 2002) or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assist humans in a wide range of settings, we would want it to have a much larger set of skills – among them would be an ability to understand human language, to perform common-sense reasoning and to be able to generalize its abilities to new situations like humans do.
If we want to achieve this goal through Machine Learning, we need data to learn from. A lot of data if the task at hand is complex – which is the case for many useful tasks. One way to achieve wide applicability would be to provide training data for each specific task we would like the machine to perform. However, it is unrealistic to obtain a sufficient amount of training data for some domains – it may for instance require expensive human annotation or all domains of application may be difficult to predict in advance – while the amount of training data in other domains is practically unlimited (e.g. in language modelling or Cloze-style question answering).
The way to bridge this gap – and to achieve the aforementioned adaptability – is transfer learning (Pan & Yang, 2010) and the closely related semi-supervised learning (Zhu & Goldberg, 2009), which allow the system to acquire a set of skills on domains where data are abundant and then use these skills to succeed on previously unseen domains. Despite how important generalization is for general AI, a lot of research keeps focusing on solving narrow tasks.
In this paper we would like to examine transfer of learnt skills and knowledge within the domain of text comprehension, a field that has lately attracted a lot of attention within the NLP community (Hermann et al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b;a; Li et al., 2016; Shen et al., 2016).
These authors contributed equally to this work.
Specifically, we would like to address the following research questions:
1. Whether we could train models on natural-language tasks where data are abundant and transfer the learnt skills to tasks where in-domain training data may be difficult to obtain. We will first look into what reasoning abilities a model learns from two large-scale reading-comprehension datasets using artificial tasks, and then check whether it can transfer its skills to real-world tasks. Spoiler: both these transfers are very poor if we allow no training at all on the target task.
2. Whether pre-training on large-scale datasets does help if we allow the model to train on a small sample of examples from the target tasks. Here the results are much more positive.
3. Finally we examine whether the benefits of pre-training are concentrated in any particular part of the model – namely the word-embedding part or the context encoder (the reasoning part). It turns out that pre-training is useful for both components.
Although our results do not improve the current state of the art in any of the studied tasks, they show a clear positive effect of large-dataset pre-training on the performance of our baseline machine-learning model. Previous studies of transfer learning and semi-supervised learning in NLP focused on text classification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011; Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in reading comprehension, and we hope it will stimulate further work in this important area.
We will first briefly introduce the datasets we will be using on the pre-training and target sides, then our baseline model, and afterwards in turn describe the method and results of each of the three experiments.
2 D ATASETS
2.1 P RE-TRAINING DATASETS
We have mentioned that for the model pre-training we would want to use a task where training data are abundant. An example of such a task is context-dependent cloze-style question answering, since the training data for this task can be generated automatically from a suitable corpus. We will use two such pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/Daily Mail (CNN/DM) news dataset (Hermann et al., 2015).
The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank in a sentence) the answer to which needs to be inferred from a context document provided with the question.
2.1.1 B OOK TEST
In the BookTest dataset, the context document is formed from 20 consecutive sentences from a book. The question is then formed by omitting a common noun or a named entity from the subsequent 21st sentence. Among datasets of this kind, the BookTest is among the largest, with more than 14 million training examples coming from 3555 copyright-free books available thanks to Project Gutenberg.
2.1.2 CNN/D AILY MAIL
In the CNN/DM dataset the context document is formed from a news article, while the cloze-style question is formed by removing a named entity from one of the short summary sentences which often appear at the top of the article.
To stop the model from using world knowledge from outside the context article (and hence truly test the comprehension of the article), all named entities were replaced by anonymous tags, which are further shuffled for each example. This may make the comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier.
2.2 T ARGET DATASETS
2.2.1 BABI
The first target dataset are the bAbI tasks (Weston et al., 2016) – a set of artificial tasks each of which is designed to test a specific kind of reasoning. This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets.
For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfill this requirement, and added Task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. "Where is John?" to "John is in the XXXXX.").
For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset – i.e. we replaced all names of characters and also all words that can appear as answers for the given task by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions. Full details about these alterations can be found in Appendix A.
2.2.2 SQ UAD
Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task may already be useful in the real world. Although cloze-style questions have the huge advantage of being automatically generatable from a suitable corpus – the path taken by CNN/DM and the BookTest – in practice humans would use a proper question, not its cloze-style substitute. This brings us to the need of transfer from the data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation.
The SQuAD dataset is a great target dataset to use for this. As opposed to the bAbI tasks, the goal of this dataset is actually a problem whose solving would be useful to humans – answering natural questions based on a natural-language encyclopedic knowledge base.
For our experiments we selected only the subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine learning model. This way we extracted 28,346 training examples out of the original 100,000 examples and 3,233 development examples out of 10,570.
3 M ACHINE LEARNING MODEL : AS R EADER
We perform our experiments using the Attention Sum Reader (AS Reader) (Kadlec et al., 2016b) model. The AS Reader is simple to implement while it achieves strong performance on several text comprehension tasks (Kadlec et al., 2016b; Bajgar et al., 2016; Chu et al., 2016). Since the AS Reader is a building block of many recent text-comprehension models (Trischler et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016b;a; Shen et al., 2016; Munkhdalai & Yu, 2016), it is a good representative of current research in this field.
A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014). A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing. We can also understand it as representing the set of questions to which this word may be an answer.
Similarly, the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding.
The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer. For a more detailed description of the model including equations check Kadlec et al. (2016b).
[Figure 1 omitted: diagram of the AS Reader – word embeddings (look-up matrix), bidirectional GRU document and question encoders, and the resulting probability P(Obama | question, document) for an example document mentioning Obama, Putin and Prague.]
Figure 1: Structure of the AS Reader model.
4 E XPERIMENTS : TRANSFER LEARNING IN TEXT COMPREHENSION
Now let us turn in more detail to the three kinds of experiments that we performed.
4.1 P RE-TRAINED WITHOUT TARGET ADJUSTMENT
In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we can't expect a model trained on them to answer natural ?-style questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks.
4.1.1 M ETHOD
First we tested how the AS Reader architecture (Kadlec et al., 2016b) can handle the tasks if trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks.
In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with an accuracy of at least 95%1 (results for each task can be found in Table 4 in Appendix C). Hence, if given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Now that we know the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets.
The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks.
4.1.2 R ESULTS
Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI and achieve much lower accuracy than the models trained directly on each individual bAbI task. However, there is some transfer between the tasks since the AS Reader trained on either the BookTest or CNN/DM outperforms a random baseline2 and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for this task.
1It should be noted that there are several machine learning models that perform better than the AS Reader in the 10k weakly supervised setting, e.g. (Sukhbaatar et al., 2015; Xiong et al., 2016; Graves et al., 2016); however, they often need significant fine-tuning. On the other hand, we trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks; however, it goes directly against the idea of generality that is at the heart of this work. For comparison with the state of the art we include results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks.
Table 1: The mean performance across the 11 bAbI tasks. The first two columns show a random baseline2 and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task. The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C.

Model                  Rnd.          Most freq. cand.   AS Reader                                DMN+
Train dataset          not trained   bAbI 10k           BookTest 14M   CNN/DM 1.2M   bAbI 10k   bAbI 10k
bAbI mean (11 tasks)   6.1           29.9               34.8           38.1          92.7       95.7
4.2 PRE-TRAINED WITH TARGET ADJUSTMENT

After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment. For this experiment we again use the bAbI tasks, however we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering.

The results presented in this and the following section are based on training 3701 model instances.

4.2.1 METHOD

Common to the bAbI and SQuAD datasets. In this experiment we started with a pre-trained model which we used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes, ranging from a single example to thousands. We trained four different pre-trained models and also, for comparison, four randomly-initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model couple was run on 4 different data samples of each size, randomly drawn from the training dataset of the task, to account for variations between these random samples – which may be substantial given the small sample size.³ A schematic of this protocol is sketched below.

[Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation model's test accuracy ("Mean of best-validation test accuracy for the 11 bAbI tasks"). Sub-figure (b) shows the test accuracy on SQuAD of each model we trained (the points); the lines join the accuracies of the best-validation models for each training size. Both panels plot test accuracy against the number of training examples.]

bAbI. For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task's original 10k training data.⁴ We performed the experiment with models pre-trained on the BookTest and also on CNN/DM.

SQuAD subset. In the SQuAD experiment, we trained the model on a subset of the original training dataset where the answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way. This experiment was performed only with the models pre-trained on the BookTest.

³ We are planning to release the split training datasets soon.
⁴ The other models trained on the full 10k dataset usually use 1000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016), however we wanted to focus on the low-data regime, thus we used 10 times fewer examples.
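The following schematic sketches the target-adjustment protocol just described (subset sizes × 4 random samples × pre-trained/random initialization). The actual training and evaluation are stubbed out with a placeholder function, and the subset sizes are taken from the axes of Figure 2a; everything else is an illustrative assumption.

```python
# Schematic of the target-adjustment experiment structure (stubbed).
import random

SUBSET_SIZES = [1, 10, 100, 500, 1000, 5000]
N_SAMPLES = 4            # random training samples drawn per size

def run(train_subset, init):
    """Stand-in for real training: train from `init` ('pretrained' or
    'random') on `train_subset`, return the test accuracy of the
    best-validation epoch (here: a fake number)."""
    return random.random()

results = {}
full_train = list(range(10_000))   # placeholder for the task's training data
for size in SUBSET_SIZES:
    for init in ("pretrained", "random"):
        accs = []
        for sample in range(N_SAMPLES):
            random.seed(sample)
            subset = random.sample(full_train, size)
            accs.append(run(subset, init))
        results[(size, init)] = sum(accs) / len(accs)
```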
4.2.2 RESULTS

The results of these experiments are summarized in Figures 2 and 3.

[Figure 3: Test accuracy against the number of training examples on three example bAbI tasks (Tasks 1, 4 and 5) where pre-training seems to help, for BookTest pre-trained, BookTest random, CNN/DM pre-trained and CNN/DM random models. Note that the task may be easier for the CNN/DM models due to answer anonymization, which restricts the choice of possible answers.]

bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results for both the BookTest and CNN/DM experiments confirm the positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4).

Note that the CNN/DM models cannot be directly compared to the BookTest results due to entity anonymization, which seems to simplify the task when the model is trained on smaller datasets.

Since our evaluation methodology with different training-set sizes is novel, we can compare our result only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset.
MemN2N is the only weakly supervised model that reports accuracy when trained on fewer than 10k examples. MemN2N achieves an average accuracy of 93.2%⁵ on the eleven selected tasks. This is substantially better than both our random baseline (78.0%) and the BookTest-pre-trained model (79.5%), however our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network.

SQuAD subset. The results of the SQuAD experiment also confirm the positive effect of pre-training, see Sub-figure 2b; for now compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model – the meaning of the remaining two lines shall become clear in the next section.

More detailed statistics about the results of this experiment can be found in Appendix D.

We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance, the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves a validation accuracy of 74.9% on this task, which is better than our randomly initialized (35.4%) and pre-trained (51.6%) models⁶. However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model.

4.3 PARTIALLY PRE-TRAINED MODEL

Since our previous experiment confirmed the positive effect of pre-training if followed by target-domain adjustment, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment.

4.3.1 METHOD

Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and question (see Figure 1). Therefore, a natural question was what the contribution of each of these parts is.

To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting, where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders (a small construction sketch is given at the end of this section).

bAbI. For this experiment we selected only a subset of tasks with a training set of 100 examples where there was a significant difference in accuracy between the randomly-initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment, that is, we report the accuracy of the best-validation model averaged over 4 training splits.

SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology.

⁵ MemN2N trained on each single task with PE LS RN features, see (Sukhbaatar et al., 2015) for details.
⁶ We would like to thank Yu et al. (2016) for training their system on our dataset.
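One way to construct the two partially pre-trained variants is to copy only a chosen subset of parameter tensors from the pre-trained model into a freshly initialized one. This is a hedged sketch: the parameter names and shapes below are hypothetical, not the authors' implementation.

```python
# Sketch: build "pre-trained embeddings" / "pre-trained encoders" variants by
# copying only selected parameter tensors from a pre-trained model.
import numpy as np

EMBEDDING_KEYS = {"word_embeddings"}
ENCODER_KEYS = {"ctx_gru_fwd", "ctx_gru_bwd", "q_gru_fwd", "q_gru_bwd"}

def init_params(rng):
    # Toy shapes; real embeddings would be (vocab, dim), GRUs their own shapes.
    return {k: rng.normal(scale=0.1, size=(8, 8))
            for k in EMBEDDING_KEYS | ENCODER_KEYS}

def partial_transfer(pretrained, fresh, keys_to_copy):
    return {k: (pretrained[k].copy() if k in keys_to_copy else v)
            for k, v in fresh.items()}

rng = np.random.default_rng(0)
pre, new = init_params(rng), init_params(rng)
emb_only = partial_transfer(pre, new, EMBEDDING_KEYS)  # pre-trained embeddings
enc_only = partial_transfer(pre, new, ENCODER_KEYS)    # pre-trained encoders
```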
Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows the performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).

Model variant            bAbI task (100 ex.)            SQuAD (28k ex.)
                         1.     5.     11.    14.
Random init              53%    66%    71%    33%        31%
Pre-trained encoders     +6     +25    +4     +2         +4
Pre-trained embeddings   +17    +6     +8     +8         +10
Pre-trained full         +34    +22    +14    +13        +17
Pre-trained word2vec     -2     +5     +1     -1         +5

4.3.2 RESULTS

bAbI. Table 2 shows the improvement of the pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy.

SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best.

The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants.

5 CONCLUSION

Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models aren't provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than a randomly initialized model if they are shown at least a few training examples from the target domain.

The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings.

The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step to explore this direction.
B1v6yArVx
review
4: Ok but not good enough - rejection
This paper proposes a study of transfer learning in the context of QA from stories. A system is presented with a short story and has to answer a question about it. This paper studies how a system trained to answer questions on one dataset can eventually be used to answer questions from another dataset. The results are mostly negative: transfer seems almost non-existent. This paper is centered around presenting negative results. Indeed the main hypothesis of transferring between QA datasets with the attention sum reader turns out to be impossible, and one needs a small portion of labeled data from the target dataset to get meaningful performance. Having only negative results could be fine if the paper brought some value with a sharp analysis of the failure modes and of the reasons behind them, because this might indicate some research directions to follow. However, there is not much of that. The answers to the pre-review questions actually start to give some insights: typing seems to be transferred, for instance. How about the impact of syntax (very different between bAbI, Gutenberg books, and CNN news articles)? And what about the overlap of the word/entity/n-gram distributions between the 3 datasets? Unfortunately, there is not much to take away from this paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
S1J0E-71l
ICLR.cc/2017/conference
2017
Surprisal-Driven Feedback in Recurrent Networks
["Kamil Rocki"]
Recurrent neural nets are widely used for predicting temporal data. Their inherent deep feedforward structure allows learning complex sequential patterns. It is believed that top-down feedback might be an important missing ingredient which in theory could help disambiguate similar patterns depending on broader context. In this paper, we introduce surprisal-driven recurrent networks, which take into account past error information when making new predictions. This is achieved by continuously monitoring the discrepancy between most recent predictions and the actual observations. Furthermore, we show that it outperforms other stochastic and fully deterministic approaches on enwik8 character level prediction task achieving 1.37 BPC.
["Unsupervised Learning", "Applications", "Deep learning"]
ABSTRACT

Recurrent neural nets are widely used for predicting temporal data. Their inherent deep feedforward structure allows learning complex sequential patterns. It is believed that top-down feedback might be an important missing ingredient which in theory could help disambiguate similar patterns depending on broader context. In this paper, we introduce surprisal-driven recurrent networks, which take into account past error information when making new predictions. This is achieved by continuously monitoring the discrepancy between the most recent predictions and the actual observations. Furthermore, we show that it outperforms other stochastic and fully deterministic approaches on the enwik8 character-level prediction task, achieving 1.37 BPC.

1 INTRODUCTION

Based on human performance on the same task, it is believed that an important ingredient which is missing in state-of-the-art variants of recurrent networks is top-down feedback. Despite evidence of its existence, it is not entirely clear how the mammalian brain might implement such a mechanism. It is important to understand what kind of top-down interaction contributes to improved prediction capability in order to tackle more challenging AI problems requiring interpretation of deeper contextual information. Furthermore, it might provide clues as to what makes human cognitive abilities so unique. Existing approaches which consider top-down feedback in neural networks are primarily focused on stacked layers of neurons, where higher-level representations constitute a top-down signal source. In this paper, we propose that the discrepancy between the most recent predictions and observations might be effectively used as a feedback signal affecting further predictions. It is very common to use such a discrepancy during the learning phase as the error which is subject to minimization, but not during inference. We show that it is also possible to use such a top-down signal without losing generality of the algorithm, and that it improves generalization capabilities when applied to the Long-Short Term Memory (Hochreiter & Schmidhuber, 1997) architecture. It is important to point out that the feedback idea presented here applies only to temporal data.

1.1 SUMMARY OF CONTRIBUTIONS

The main contributions of this work are:
- the introduction of a novel way of incorporating the most recent misprediction measure as an additional input signal
- extending state-of-the-art performance on character-level text modeling using the Hutter Wikipedia dataset.

1.2 RELATED WORK

There exist other approaches which attempted to introduce top-down input for improving predictions. One such architecture is the Gated-Feedback RNN (Chung et al., 2015). An important difference between the architecture proposed here and theirs is the source of the feedback signal. In GF-RNN it is assumed that there exist higher-level representation layers and they constitute the feedback source. On the other hand, here, feedback depends directly on the discrepancy between past predictions and the current observation and operates even within a single layer. Another related concept is Ladder Networks (Rasmus et al., 2015), where top-down connections contribute to improved semi-supervised learning performance.

2 FEEDBACK: MISPREDICTION-DRIVEN PREDICTION
[Figure 1: Illustration of the s_t signal on a typical batch of 16 sequences of length 100 from the enwik8 dataset; the y-axis is negative log probability in bits. Intuitively, the surprise signal is low when a text fragment is highly predictable (i.e. in the <timestamp> part of sequence no. 10, the tag itself is highly predictable, whereas the exact date cannot be predicted and should not be the focus of attention). The main idea presented in this paper is that the feedback signal s_t should be able to help in distinguishing predictable and inherently unpredictable parts during the inference phase.]

2.1 NOTATION

The following notation is used throughout the section:

x – inputs
h – hidden units
y – outputs
p – output probabilities (normalized y)
s – surprisal
t – time step
W – feedforward x → h connection matrix
U – recurrent h → h connection matrix
V – feedback s → h connection matrix
S – truncated BPTT length
M – number of inputs
N – number of hidden units
\cdot – matrix multiplication
\odot – elementwise multiplication
\sigma(\cdot), \tanh(\cdot) – elementwise nonlinearities
\delta x = \frac{\partial E}{\partial x}

In the case of the LSTM, the following concatenated representations are used:

g_t = \begin{bmatrix} i_t \\ f_t \\ o_t \\ u_t \end{bmatrix}, \quad
b = \begin{bmatrix} b_i \\ b_f \\ b_o \\ b_u \end{bmatrix}, \quad
U = \begin{bmatrix} U_i \\ U_f \\ U_o \\ U_u \end{bmatrix}, \quad
W = \begin{bmatrix} W_i \\ W_f \\ W_o \\ W_u \end{bmatrix}, \quad
V = \begin{bmatrix} V_i \\ V_f \\ V_o \\ V_u \end{bmatrix}    (1)
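A quick shape check of this concatenated layout may help: the four gate matrices are stacked so that a single matrix product yields all gate pre-activations at once. This is a sketch with arbitrary sizes (the paper does not specify dimensions here), and the surprisal s_t is treated as a scalar, so V stacks to a single column.

```python
# Shape check for the concatenated representation in Eq. (1).
import numpy as np

M, N = 5, 3                       # input size, hidden size (arbitrary)
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * N, M))   # [W_i; W_f; W_o; W_u]
U = rng.normal(size=(4 * N, N))   # [U_i; U_f; U_o; U_u]
V = rng.normal(size=4 * N)        # [V_i; V_f; V_o; V_u], scalar surprisal
b = np.zeros(4 * N)

x, h_prev, s = rng.normal(size=M), np.zeros(N), 1.2
g = W @ x + U @ h_prev + V * s + b   # all pre-activations in one product
i, f, o, u = np.split(g, 4)          # recover the individual gates
print(g.shape, i.shape)              # -> (12,) (3,)
```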
2.2 SIMPLE RNN WITHOUT FEEDBACK

First, we show a simple recurrent neural network architecture without feedback, which serves as a basis for demonstrating our approach. It is illustrated in Fig. 2 and formulated as follows:

h_t = \tanh(W \cdot x_t + U \cdot h_{t-1} + b)    (2)

[Figure 2: Simple RNN; h – internal (hidden) states; x are inputs; y are optional outputs to be emitted.]

2.3 FEEDBACK AUGMENTED RECURRENT NETWORKS

[Figure 3: Surprisal-Feedback RNN; s_t represents surprisal (in the information-theoretic sense) – the discrepancy between the prediction at time step t-1 and the actual observation at time step t; it constitutes an additional input signal to be considered when making a prediction for the next time step.]

Figure 3 presents the main idea of surprisal-driven feedback in recurrent networks. In addition to the feedforward and recurrent connections W and U, we added one additional matrix V. One more input signal, namely V \cdot s_t, is considered when updating the hidden states of the network. We propose that the discrepancy s_t between the most recent predictions p_{t-1} and observations x_t might be effectively used as a feedback signal affecting further predictions. Such information is usually used during the learning phase as an error signal, but not during inference. Our hypothesis is that it represents an important source of information which can and should be used during the inference phase, and that it brings benefits in the form of improved generalization capability. Figure 1 presents examples of the feedback signal being considered. Intuitively, when surprisal is near zero, the sum of input signals is the same as in a typical RNN. The next subsections provide a mathematical description of the feedback architecture in terms of the forward and backward passes of the Back Propagation Through Time (BPTT) (Werbos, 1990) algorithm.

2.4 FORWARD PASS

Set h_0, c_0 to zero and p_0 to the uniform distribution, or carry over the last state to emulate full BPTT.

\forall i: \; p_0^i = \frac{1}{M}, \quad i \in \{0, 1, \ldots, M-1\}, \quad t = 0    (3)

for t = 1:1:S-1

I. Surprisal part

s_t = -\sum_i \log p_{t-1}^i \cdot x_t^i    (4)

IIa. Computing hidden activities, simple RNN

h_t = \tanh(W \cdot x_t + U \cdot h_{t-1} + V \cdot s_t + b)    (5)

IIb. Computing hidden activities, LSTM (to be used instead of IIa)

f_t = \sigma(W_f \cdot x_t + U_f \cdot h_{t-1} + V_f \cdot s_t + b_f)    (6)
i_t = \sigma(W_i \cdot x_t + U_i \cdot h_{t-1} + V_i \cdot s_t + b_i)    (7)
o_t = \sigma(W_o \cdot x_t + U_o \cdot h_{t-1} + V_o \cdot s_t + b_o)    (8)
u_t = \tanh(W_u \cdot x_t + U_u \cdot h_{t-1} + V_u \cdot s_t + b_u)    (9)
c_t = (1 - f_t) \odot c_{t-1} + i_t \odot u_t    (10)
\hat{c}_t = \tanh(c_t)    (11)
h_t = o_t \odot \hat{c}_t    (12)

III. Outputs

y_t = W_y \cdot h_t + b_y    (13)

Softmax normalization

p_t^i = \frac{e^{y_t^i}}{\sum_i e^{y_t^i}}    (14)
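A minimal NumPy sketch of one forward step following Eqs. (3)-(14) is given below. The weights are random placeholders and the surprisal is treated as a scalar; this illustrates the update order only, and is not the authors' C++/CUDA implementation.

```python
# One forward step of the surprisal-feedback LSTM, Eqs. (3)-(14), sketched.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(params, x, s, h_prev, c_prev):
    W, U, V, b, Wy, by = params
    N = h_prev.size
    g = W @ x + U @ h_prev + V * s + b   # stacked pre-activations, Eq. (1)
    i = sigmoid(g[0 * N:1 * N])          # input gate,  Eq. (7)
    f = sigmoid(g[1 * N:2 * N])          # forget gate, Eq. (6)
    o = sigmoid(g[2 * N:3 * N])          # output gate, Eq. (8)
    u = np.tanh(g[3 * N:4 * N])          # update,      Eq. (9)
    c = (1 - f) * c_prev + i * u         # Eq. (10)
    h = o * np.tanh(c)                   # Eqs. (11)-(12)
    y = Wy @ h + by                      # Eq. (13)
    e = np.exp(y - y.max())              # stable softmax, Eq. (14)
    return h, c, e / e.sum()

M, N = 5, 4
rng = np.random.default_rng(0)
params = (rng.normal(size=(4 * N, M)), rng.normal(size=(4 * N, N)),
          rng.normal(size=4 * N), np.zeros(4 * N),
          rng.normal(size=(M, N)), np.zeros(M))
h, c, p = np.zeros(N), np.zeros(N), np.full(M, 1.0 / M)   # Eq. (3)
for t in range(3):                        # a toy 3-step sequence
    x = np.eye(M)[rng.integers(M)]        # one-hot observation
    s = -np.sum(np.log(p) * x)            # surprisal, Eq. (4)
    h, c, p = lstm_step(params, x, s, h, c)
print(p)
```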
2.5 BACKWARD PASS

for t = S-1:-1:1

I. Backprop through predictions

Backprop through the softmax and cross-entropy error, accumulate:

\delta y_t \leftarrow \delta y_t + p_{t-1} - x_t    (15)

y → W_y, b_y:

\delta W_y \leftarrow \delta W_y + h_t^T \cdot \delta y_t    (16)
\delta b_y \leftarrow \delta b_y + \sum_{i=1}^{M} \delta y_t^i    (17)

y → h:

\delta h_t \leftarrow \delta h_t + \delta y_t \cdot W_y^T    (18)

IIa. Backprop through the hidden nonlinearity (simple RNN version)

\delta h_t \leftarrow \delta h_t \odot \tanh'(h_t)    (19)
\delta g_t = \delta h_t    (20)

IIb. Backprop through c, h, g (LSTM version)

Backprop through the memory cells (keep gradients from the previous iteration):

\delta c_t \leftarrow \delta c_t + \delta h_t \odot o_t \odot \tanh'(\hat{c}_t)    (21)

Carry the error over to \delta c_{t-1}:

\delta c_{t-1} \leftarrow \delta c_{t-1} + \delta c_t \odot (1 - f_t)    (22)

Propagate the error through the gates:

\delta o_t = \delta h_t \odot \hat{c}_t \odot \sigma'(o_t)    (23)
\delta i_t = \delta c_t \odot u_t \odot \sigma'(i_t)    (24)
\delta f_t = -\delta c_t \odot c_{t-1} \odot \sigma'(f_t)    (25)
\delta u_t = \delta c_t \odot i_t \odot \tanh'(u_t)    (26)

Carry the error over to \delta h_{t-1}:

\delta h_{t-1} = \delta g_t \cdot U^T    (27)

III. Backprop through the linearities

\delta b \leftarrow \delta b + \sum_{i=1}^{N} \delta g_t^i    (28)
\delta U \leftarrow \delta U + h_{t-1}^T \cdot \delta g_t    (29)
\delta W \leftarrow \delta W + x_t^T \cdot \delta g_t    (30)
\delta x \leftarrow \delta x + \delta g_t \cdot W^T    (31)

IV. Surprisal part

\delta V \leftarrow \delta V + s_t^T \cdot \delta g_t    (32)
\delta s_t = \delta g_t \cdot V^T    (33)
\delta p_{t-1} = -\delta s_t \odot x_t    (34)

Adjust \delta p_{t-1} according to the sum of gradients and carry it over to \delta y_{t-1}:

\delta y_{t-1} = \delta p_{t-1} \odot p_{t-1} - p_{t-1} \sum_{i=1}^{M} \delta p_{t-1}^i    (35)

[Figure 4: Training progress on the enwik8 corpus, bits/character: training and test BPC curves over time for the standard LSTM and the Feedback LSTM.]

3 EXPERIMENTS

We ran experiments on the enwik8 dataset. It constitutes the first 10^8 bytes of an English Wikipedia dump (with all extra symbols present in XML), also known as the Hutter Prize challenge dataset². The first 90% of each corpus was used for training, the next 5% for validation and the last 5% for reporting test accuracy. In each iteration, sequences of length 10000 were randomly selected. The learning algorithm used was Adagrad¹ with a learning rate of 0.001. Weights were initialized using so-called Xavier initialization (Glorot & Bengio, 2010). The sequence length for BPTT was 100 and the batch size was 128; states were carried over for the entire sequence of 10000, emulating full BPTT. The forget bias was initially set to 1; other parameters were set to zero. The algorithm was written in C++ and CUDA 8 and ran on a GTX Titan GPU for up to 10 days. Table 1 presents results comparing existing state-of-the-art approaches to the introduced Feedback LSTM algorithm, which outperforms all other methods despite not having any regularizer.

Table 1: Bits per character on the Hutter Wikipedia dataset (test data).

                                                   BPC
mRNN (Sutskever et al., 2011)                      1.60
GF-RNN (Chung et al., 2015)                        1.58
Grid LSTM (Kalchbrenner et al., 2015)              1.47
Standard LSTM⁴                                     1.45
MI-LSTM (Wu et al., 2016)                          1.44
Recurrent Highway Networks (Zilly et al., 2016)    1.42
Array LSTM (Rocki, 2016)                           1.40
Feedback LSTM³                                     1.39
Hypernetworks (Ha et al., 2016)                    1.38
Feedback LSTM + Zoneout (Krueger et al., 2016)     1.37

¹ with a modification taking into consideration only a recent window of gradient updates
² http://mattmahoney.net/dc/text.html
³ This method does not belong to the 'dynamic evaluation' group: 1. It never actually sees test data during training. 2. It does not adapt weights during testing.
⁴ our implementation

4 SUMMARY

We introduced a feedback recurrent network architecture which takes advantage of the temporal nature of the data and monitors the discrepancy between predictions and observations. This prediction error information, also known as surprisal, is used when making new guesses. We showed that combining the commonly used feedforward and recurrent signals with such a feedback signal improves the generalization capabilities of the Long-Short Term Memory network. It outperforms other stochastic and fully deterministic approaches on enwik8 character-level prediction, achieving 1.37 BPC.

5 FURTHER WORK

It is still an open question what the feedback should really constitute, as well as how it should interact with lower-level neurons (additive, multiplicative or another type of connection). Further improvements may be possible with the addition of regularization. Another research direction is incorporating sparsity in order to improve disentangling the sources of variation in temporal data.

ACKNOWLEDGEMENTS

This work has been supported in part by the Defense Advanced Research Projects Agency (DARPA).
r1pQ4-zNe
Misleading
3: Clear rejection
This paper proposes to use the previous error signal of the output layer as an additional input to the recurrent update function in order to enhance the modelling power of a dynamic system such as an RNN. -This paper makes an erroneous assumption: test label information is not given in most real-world applications, except in a few. This means that the language modelling task, which is the only experiment in this paper, may not be the right task to test this approach. Also, comparing against models that do not use the test error signal at inference time is unfair. We cannot just say that the test label information is being observed; this only holds in online-prediction problems. -The experiment is only conducted on one dataset, reporting a state-of-the-art result, but unfortunately this is not true. There are already more than four papers reporting better numbers than the one reported on this task, however the author did not cite them. I understand that this paper came before the other papers, but the manuscript should be updated before the final decision. -The model size is still missing, and without this information it is hard to judge the contribution of the proposed trick.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S1J0E-71l
SyWZXPLEg
Badly Written
4: Ok but not good enough - rejection
Summary: This paper proposes to use surprisal-driven feedback for training recurrent neural networks, where the next-step prediction error of the network is fed back as an input to the network. The authors show a result on a language modeling task. Contributions: The introduction of surprisal-driven feedback, which is just the feedback from the errors of the model at the previous time steps. Questions: A point which is not fully clear from the paper is whether the ground-truth labels on the test set are used for the surprisal feedback part of the model. I assume that the authors do that, since they claim that they use the misprediction error as an additional input. Criticisms: The paper is really badly written; the authors should rethink its organization. Most of the equations presented in the paper, about BPTT, are not necessary in the main text and could be moved to the Appendix. The justification is not convincing enough. Experimental results are lacking; only results on a single dataset are provided. Although the authors claim that they got SOTA on enwik8, there are other papers, such as HyperNetworks, that got better results (1.34) than the result they achieve. This claim is wrong. The model requires the ground-truth labels for the test set; however, this assumption really limits the applicability of this technique (it more or less rules out most conditional language modeling tasks). High-level Review: Pros: - A simple modification of the model that seems to improve the results, and it is an interesting modification. Cons: - The authors need to use test-set labels. - The writing of the paper is bad. - The authors assume that they have access to the ground-truth labels on the test set. - Experimental results are lacking
4: The reviewer is confident but not absolutely certain that the evaluation is correct
S1J0E-71l
SJvqO1GEl
Need some revisions
3: Clear rejection
This paper proposes to leverage "surprisal" as a top-down signal in an RNN. More specifically, the author uses the error corresponding to the previous prediction as an extra input at the current timestep in an LSTM. The general idea of surprisal-driven feedback is interesting for online prediction tasks. It is a simple enough idea that seems to bring some significant improvements. However, the paper in its current form has some important flaws.

- Overall, the paper's writing could be improved. In particular, sections 2.4 and 2.5 are composed mostly of the equations of the forward and backward propagation of the feedback RNN and feedback LSTM. However, the author provides no analysis along with those equations. It is therefore not clear what insight the author tries to express in those sections. In addition, the feedback RNN is not evaluated in the experimental section, so it is not clear why the feedback RNN is described.

- The experimental evaluation is limited. Only one dataset, enwik8, is explored. I think it is necessary to try the idea on different datasets to see if feedback LSTM brings consistent improvements. Also, the author claims state-of-the-art on enwik8, but hypernetworks, already cited in the paper, achieve better results (1.34 BPC, table 4 in the hypernetworks paper).

- The author only compares to methods that do not use the last prediction error as an extra signal. I would argue that a comparison with dynamic evaluation would be more fair. Feedback LSTM uses the prediction error as an extra input in the forward prop, while dynamic evaluation backprops it through the network and changes the weights accordingly. Although they do not propagate the prediction error in the same way, they both leverage "extra" supervised information through the prediction errors.

In summary:

Pros:
- Interesting idea
- Seems to improve performance

Cons:
- Paper writing
- Weak evaluation (only one dataset)
- Compares only with approaches that do not use the last-timestep error signal
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HyEeMu_xx
ICLR.cc/2017/conference
2017
Progressive Attention Networks for Visual Attribute Prediction
["Paul Hongsuck Seo", "Zhe Lin", "Scott Cohen", "Xiaohui Shen", "Bohyung Han"]
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
["Deep learning", "Computer vision", "Multi-modal learning"]
ABSTRACT

We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location, since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.

1 INTRODUCTION

Attentive mechanisms often play important roles in modern neural networks (NNs), especially in computer vision tasks. Many visual attention models have been introduced in the literature, and they have shown that attaching an attention mechanism to NNs can improve accuracy in various tasks such as image classification (Jaderberg et al., 2015; Ba et al., 2015; Mnih et al., 2014; Larochelle & Hinton, 2010), image generation (Gregor et al., 2015), image caption generation (Xu et al., 2015) and visual question answering (Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

There are several motivations for incorporating attentive mechanisms in NNs. One of them is that they are analogous to the perceptual process of human beings. The human visual system concentrates attention on a region of interest instead of processing an entire scene. Likewise, in a neural attention model, we can focus processing only on the attended areas of the input image. This benefits us in terms of computational resources; the number of hidden units may be reduced since the hidden activations only need to encode the region with attention (Mnih et al., 2014).

Another important motivation is that some computer vision tasks, e.g. visual question answering (VQA), require identifying the object of interest for accurate attribute prediction. For example, when the input image contains multiple objects, the task should focus on the object specified by the question. Figure 1 illustrates an example task of predicting the color (answer) of a given input number (query). The query specifies a particular object in the input image (number 7 in this example) for answering its attribute (red). To address this type of task, the network architecture should incorporate an attentive mechanism either explicitly or implicitly.

One of the most popular attention mechanisms for NNs is the soft attention method (Xu et al., 2015), which aggregates responses in a feature map weighted by their attention probabilities (see Appendix A for more details). This process results in a single attended feature vector. Since the soft attention method is fully differentiable, the entire network can be trained end-to-end with standard backpropagation. However, it can only model attention to local regions of a certain size depending on the receptive field of the layer chosen for attention.
This makes the soft attention method inappropriate for complicated cases, where objects involve significant variations in their scales and shapes.

Figure 1: An example reference problem (with the query 7 and the answer red) and intermediate attention maps using our progressive attention model ((a) input image, (b) first attention, (c) second attention, (d) third attention, (e) final attention). It shows that attention is gradually refined through the network layers for resolving the reference problem. Distracting patterns at smaller scales are suppressed at earlier layers, while those at larger scales (e.g. 9) are suppressed at later layers with larger receptive fields. All attended images are independently rescaled for the visualization.

To overcome this limitation, we propose a novel attention network, referred to as progressive attention network (PAN), which enables precise attention over objects of different scales and shapes by attaching attentive mechanisms to multiple layers within a convolutional neural network (CNN). More specifically, the proposed network forces attention prediction in intermediate feature maps by forwarding the attended feature maps in each layer to the subsequent layers in the CNN. Since a feature to be attended in the current feature map is obtained by combining lower-level features with smaller receptive fields, the network can learn to distill the precise spatial support relevant to the target objects as final attention. The contribution of this work is three-fold:

- A novel attention model (progressive attention network) which can be learned to predict attention matching the accurate scale and shape of a target object
- Use of local contexts to improve the stability of the progressive attention model
- Achievement of significant performance improvement over traditional soft and hard attention approaches in query-specific visual attribute prediction tasks

The rest of this paper is organized as follows. We first review related work in Section 2. In Section 3, we describe the proposed model with local context information. We then present our experimental results on several datasets in Section 4 and conclude the paper in Section 5.

2 RELATED WORK

Attention on Features. The most straightforward attention mechanism is a feature-based method, which selects a subset of features by explicitly attaching an attention model to NN architectures. The approaches relying on this attention mechanism have improved performance in many tasks (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015; Bahdanau et al., 2015; Luong et al., 2015; Weston et al., 2015; Graves et al., 2014). For example, they have been used to handle sequences of variable lengths in neural machine translation models (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2014) and handwriting generation (Graves, 2013), and to manage memory access mechanisms for memory networks (Weston et al., 2015) and neural turing machines (Graves et al., 2014). When applied to computer vision tasks to resolve reference problems, these models are designed to pay attention to CNN features corresponding to subregions in the input image. Image caption generation and visual question answering are typical examples that benefit from this attention mechanism (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

Attention by Image Transformation. Another stream of attention models is based on image transformations.
These approaches transform a regular grid and sample from the input image with the transformed grid, whose elements correspond to locations in the input image. Ba et al. (2015) and Mnih et al. (2014) transform an input image with predicted translation parameters (t_x and t_y) and a fixed scale factor (\hat{s} < 1) for image classification or multiple object recognition. The scale factor is also predicted in (Gregor et al., 2015) for image generation, where the network uses Gaussian filters for sampling. Spatial transformer networks (STNs) predict all six parameters of the affine transformation matrix, and even extend it to a projective transformation and a 16-point thin plate spline transformation (Jaderberg et al., 2015). Because all these transformations used in (Jaderberg et al., 2015) involve scale factors, STNs are capable of dealing with objects of different sizes. However, STN is limited when there are multiple candidate regions for attention. Our model overcomes this problem by formulating attention as progressive filtering on feature maps instead of assuming objects can be roughly aligned by a single spatial transformation.

Multiple Attention Processes. There have been several approaches that iteratively perform attentive processes to resolve relations between targets. Yang et al. (2015) iteratively attend to images conditioned on the previous attention states for visual question answering, as the objects of interest are often not specified explicitly in questions but implicitly in relational expressions about the target objects. Also, Weston et al. (2015) and Graves et al. (2014) incorporate attention mechanisms into memory cells iteratively to retrieve different values stored in the memory. Our proposed model is similar in spirit to iterative attention but is aimed at attending to a single target object via operating on multiple CNN layers progressively, i.e., attention information is predicted progressively from feature maps through multiple layers of the CNN to capture the fine shape of the target object.

In (Jaderberg et al., 2015), the authors also conducted an experiment with a network with multiple transformer layers. However, the attention shapes of STNs are still constrained to the type of transformation regardless of the number of transformers. In contrast, the quality of the attention shapes is improved through the progressive attention process in the proposed method. Stollenga et al. (2014) introduced a deep network which manipulates intermediate features of a fixed classifier through a channel-wise attention process. Although the channel-wise attention process is used at multiple layers of the network to manipulate the intermediate feature representations, they never explored a spatial attention process. More importantly, this method requires an accurate pretrained classifier for the target classes prior to learning attention, while pretraining a general query-specific attribute classifier is not trivial. It is also notable that both (Jaderberg et al., 2015) and (Stollenga et al., 2014) target simple classification tasks without queries, while we aim to tackle the query-specific attribute prediction task where the answers from a single input image can be very different depending on the input query.

Training Attention Models. The networks with soft attention are fully differentiable and thus trainable end-to-end by backpropagation.
Xu et al. (2015) and Zaremba & Sutskever (2015) introduced a stochastic hard attention, where the network explicitly selects a single feature based on the predicted attention probability map. Because the explicit selection (or sampling) procedure is not differentiable, the REINFORCE learning rule (Williams, 1992) is used to make the networks trainable. Transformation-based attention models (Ba et al., 2015; Mnih et al., 2014) are mostly trained by the REINFORCE learning rule, but STN (Jaderberg et al., 2015) proposed a fully differentiable formulation that made end-to-end training possible. Compared to these attention networks, the proposed network is also trainable end-to-end by standard backpropagation without any extra techniques, since every operation within the network is differentiable.

3 PROGRESSIVE ATTENTION NETWORKS

To overcome the limitation of existing attention models in handling variable object scales and shapes, we propose a progressive attention mechanism. In the proposed model, irrelevant features at different scales are suppressed by attention filtering steps in different CNN layers, and computation is focused on the features corresponding to regions of interest. At each attention layer, the model predicts an attention map given the input query and the current feature map via an attention module, and the attention map is then multiplied with the feature maps channel-wise to obtain the attended feature map. In each layer, the attended feature map is then forwarded to the next layer of the CNN for construction of the following feature map, as illustrated in Figure 2. This progressive attention process allows us to estimate precise details of attention areas while maintaining deep representations appropriate for high-level inference tasks.

Figure 2: Overall procedure of progressive attention. Attentive processes are repeatedly applied to feature maps at multiple layers, and the resulting attended feature maps are used as input feature maps for the next convolution layers in the CNN. Attention probabilities \alpha^l are estimated from the feature maps and the input query. In the last attention layer, the attended feature maps are aggregated into a single feature vector (by sum pooling) and fed to the final attribute classifier.

3.1 PROGRESSIVE ATTENTIVE PROCESS

Let f^l \in R^{H_l \times W_l \times C_l} be an output feature map of a layer l \in \{0, \ldots, L\} in the CNN with width W_l, height H_l and C_l channels, and let f^l_{i,j} \in R^{C_l} be the feature at (i, j) of the feature map f^l. In the proposed PAN, an attentive process is applied to multiple layers of the CNN and we obtain the attended feature map \hat{f}^l = [\hat{f}^l_{i,j}], which is given by

\hat{f}^l_{i,j} = \alpha^l_{i,j} f^l_{i,j}   (1)

Here, the attention probability \alpha^l_{i,j} for a feature f^l_{i,j} is calculated by

s^l_{i,j} = g^l_{att}(f^l_{i,j}, q; \theta^l_{att})  and  \alpha^l_{i,j} = softmax_{i,j}(s^l) if l = L, \sigma(s^l_{i,j}) otherwise   (2)

where g^l_{att}(\cdot) denotes the attention function with a set of parameters \theta^l_{att} for layer l, s^l_{i,j} is the attention score at (i, j) in layer l, q is the query, and \sigma(\cdot) is a sigmoid function. The attention probability at each location is independent of the others in the same feature map, where a sigmoid function is employed to constrain attention probabilities between 0 and 1. For the last layer of attention, we use a softmax function over the entire spatial region for the final aggregation of features.
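As a concrete illustration of Eqs. (1)-(2), the following NumPy sketch applies one attentive layer to a feature map. The two-layer MLP stands in for g^l_att, and all shapes and names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attend_layer(f, q, W1, b1, w2, b2, last_layer=False):
    """Apply Eqs. (1)-(2) to a feature map.

    f: feature map, shape (H, W, C)
    q: query vector, shape (Q,)
    W1, b1, w2, b2: parameters of a 2-layer MLP standing in for g_att
    """
    H, W, C = f.shape
    # Attention score s_{i,j} from each location's feature and the query.
    q_tiled = np.broadcast_to(q, (H, W, q.shape[0]))
    hidden = np.tanh(np.concatenate([f, q_tiled], axis=-1) @ W1 + b1)
    s = hidden @ w2 + b2                      # scores, shape (H, W)
    if last_layer:
        # Softmax over the entire spatial region (l = L in Eq. (2)).
        e = np.exp(s - s.max())
        alpha = e / e.sum()
    else:
        # Independent sigmoid per location (intermediate layers).
        alpha = sigmoid(s)
    # Eq. (1): scale each location's feature vector by its probability.
    return alpha[..., None] * f, alpha

# Toy usage: 8x8 feature map with 16 channels, 5-dim one-hot query.
rng = np.random.default_rng(0)
H, W, C, Q, D = 8, 8, 16, 5, 32
f = rng.normal(size=(H, W, C))
q = np.eye(Q)[3]
W1, b1 = rng.normal(0, 0.1, (C + Q, D)), np.zeros(D)
w2, b2 = rng.normal(0, 0.1, D), 0.0
f_att, alpha = attend_layer(f, q, W1, b1, w2, b2)
```

In an intermediate layer the attended map is passed on to the next convolution (Eq. (3) below); only at the last layer would the softmax branch and the sum pooling of Eq. (4) be used.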
Unlike the soft attention model (see Appendix A), in the intermediate attention layers the attended feature map \hat{f}^l is not summed up to generate a single vector representation of the attended regions. Instead, the attended feature map is forwarded to the next layer as an input to compute the next feature map, which is given by

f^{l+1} = g^{l+1}_{CNN}(\hat{f}^l; \theta^{l+1}_{CNN})   (3)

where g^{l+1}_{CNN}(\cdot) denotes the CNN operations of the next layer, parameterized by \theta^{l+1}_{CNN}.

This feedforward procedure with attentive processes in the CNN is repeated from the input of the CNN, i.e., f^0 = I, until \hat{f}^L is obtained. Then, the attended feature f_{att} is finally retrieved by summing up all the features in the final attended feature map \hat{f}^L as in soft attention, which is given by

f_{att} = \sum_i^H \sum_j^W \hat{f}^L_{i,j} = \sum_i^H \sum_j^W \alpha^L_{i,j} f^L_{i,j}   (4)

The attended feature f_{att} obtained by this process is then used as the input to the visual attribute classifier, as illustrated in Figure 2.

In our models, we place the attention layers at the outputs of max pooling layers instead of after every layer in the CNN, because the reduction of feature resolution within the CNN mainly comes from the pooling layers. In practice, we can also skip the first few pooling layers and only attach the attention module to the outputs of the last K pooling layers.

Figure 3: Attention estimation (a) without local context and (b) with local context. In (a), \alpha^l_{i,j} is predicted from f^l_{i,j} only, while in (b) its spatially adjacent features are also used to estimate \alpha^l_{i,j}.

3.2 MULTI-RESOLUTION ATTENTION ESTIMATION

In Eq. (3), the resolution of the attention probability map \alpha^l depends on the size of the feature map in the corresponding layer. Due to the nature of a CNN with convolution and pooling layers, the resolution of \alpha^l decreases with increasing layer depth. Since the attentive processes are performed over multiple layers recursively in our framework, it is possible to attend to regions of specific sizes and shapes. Note that the proposed network can exploit high-level semantics in deep representations for inference without losing attention resolution.

The progressive attention model is still very effective in predicting fine attention shapes, as the attention information is aggregated over multiple layers to suppress irrelevant structures at different granularities. In lower layers, features whose receptive fields contain small distractors are suppressed first. Meanwhile, the features from a part of a large distractor remain intact but are passed to the next layer, delaying their suppression. In higher layers, features of these large distractors get low attention probability because each feature contains information from larger receptive fields, allowing the attention module to distinguish whether the feature comes from a distractor or from the target object. This phenomenon is well demonstrated in the qualitative results of our experiments (Section 4). An additional benefit of progressive attention is that inference is straightforward, since the model is a pure feedforward network.
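Putting Eqs. (1)-(4) together, the overall progressive forward pass can be sketched as follows. It reuses attend_layer from the previous snippet and stands in for each CNN stage with an arbitrary callable (here a toy 2x2 average pool, assuming even spatial dimensions), since the actual convolutions are orthogonal to the attention mechanism.

```python
def avg_pool_2x2(f):
    """Toy stand-in for a CNN stage: 2x2 average pooling, channels kept."""
    H, W, C = f.shape
    return f.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

def progressive_attention_forward(f0, q, att_params, cnn_stages):
    """Progressive attentive process (Eqs. (1)-(4)).

    f0:         input feature map (or image), shape (H, W, C)
    att_params: list of (W1, b1, w2, b2) tuples, one per attention layer
    cnn_stages: list of callables applied between attention layers
    """
    f = f0
    L = len(att_params) - 1
    for l, params in enumerate(att_params):
        # Eqs. (1)-(2): sigmoid gating in intermediate layers,
        # spatial softmax in the last layer (l == L).
        f, alpha = attend_layer(f, q, *params, last_layer=(l == L))
        if l < L:
            f = cnn_stages[l](f)          # Eq. (3): forward attended map
    # Eq. (4): sum-pool the final attended map into a single vector.
    return f.sum(axis=(0, 1))

# Toy usage: two attention layers around one pooling stage.
params_l0 = (rng.normal(0, 0.1, (C + Q, D)), np.zeros(D),
             rng.normal(0, 0.1, D), 0.0)
params_l1 = (rng.normal(0, 0.1, (C + Q, D)), np.zeros(D),
             rng.normal(0, 0.1, D), 0.0)
f_att = progressive_attention_forward(f, q, [params_l0, params_l1],
                                      [avg_pool_2x2])
```

The returned vector would feed the attribute classifier; everything in the loop is differentiable, which is what allows the end-to-end training described in Section 3.4.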
3.3 LOCAL CONTEXT

A basic version of PAN discussed so far predicts an attention probability \alpha^l_{i,j} based solely on the feature f^l_{i,j} at a single feature map location. We can improve the quality of attention estimation by allowing the attention layers to observe a local context of the target feature. The local context F^l_{i,j} of a feature f^l_{i,j} is composed of its spatially adjacent features. For example, the local context can be given by

F^l_{i,j} = \{ f^l_{s,t} \mid i - \delta \le s \le i + \delta, \; j - \delta \le t \le j + \delta \}

as illustrated in Figure 3. The attention score is now predicted by the attention network with the local context as

s^l_{i,j} = g^l_{att}(F^l_{i,j}, q; \theta^l_{att})   (5)

In this architecture, the area of the local context is given by the filter size corresponding to the composite operation of convolution followed by pooling in the next layer. The local context does not need to be considered in the last layer of attention, since its activations are used to compute the final attended feature map. Local context improves attention prediction as it enables the centroid feature to be compared with surrounding features, which makes the estimated attention more discriminative.
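A sketch of the local-context variant of the scoring function (Eq. (5)) follows; the neighborhood is gathered with edge padding, and as before the MLP stand-in for g^l_att and all shapes are our own illustrative choices, reusing names from the earlier snippets.

```python
def attend_scores_ctx(f, q, W1, b1, w2, b2, delta=2):
    """Eq. (5): score each location from its (2*delta+1)^2 neighborhood."""
    H, W, C = f.shape
    fp = np.pad(f, ((delta, delta), (delta, delta), (0, 0)), mode='edge')
    k = 2 * delta + 1
    s = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            # Local context F_{i,j}: all features within distance delta.
            ctx = fp[i:i + k, j:j + k].ravel()
            hidden = np.tanh(np.concatenate([ctx, q]) @ W1 + b1)
            s[i, j] = hidden @ w2 + b2
    return s

# Toy usage: the first MLP layer now takes the flattened context.
delta = 2
k = 2 * delta + 1
W1c = rng.normal(0, 0.1, (k * k * C + Q, D))
s_ctx = attend_scores_ctx(f, q, W1c, np.zeros(D), rng.normal(0, 0.1, D),
                          0.0, delta=delta)
alpha_ctx = 1.0 / (1.0 + np.exp(-s_ctx))   # intermediate-layer sigmoid
```

In practice the per-location loop would be expressed as a convolution over the feature map concatenated with a tiled query, which is an equivalent but much faster formulation.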
3.4 TRAINING PROGRESSIVE ATTENTION NETWORKS

Training a PAN is as simple as training a soft attention network (Xu et al., 2015), because every operation within the network is differentiable. The entire network is trained end-to-end by standard backpropagation, minimizing the binary cross entropies of the object-specific visual attributes. When we train it from a pretrained CNN, the CNN part should always be fine-tuned together, since the intermediate attention maps may change the input distributions of their associated layers in the CNN.

Figure 4: Examples from the MREF datasets: (a) MREF, (b) MDIST, (c) MBG.

Figure 5: Detailed illustration of the network architectures in the MNIST Reference experiments. (a) Network architectures of the models on MREF (a stack of four 3x3@32 convolution blocks, each followed by 2x2 pooling, with attention attached at different depths for STN, SAN, HAN and PAN, and a final fc classification layer); arrows represent direct connections to the next layer without attention. (b) Architecture of the attention function g^l_att(\cdot): a fusion fc layer (32 activations) followed by an estimation fc layer (1 activation); local contexts F^l_{i,j} are used only in PAN-CTX.

4 EXPERIMENTS

4.1 MNIST REFERENCE

Datasets. We conduct experiments on a synthetic dataset created from MNIST (LeCun et al., 1998). The synthetic dataset is referred to as MNIST Reference (MREF; Figure 4a), where each training example is a triple of an image, a query number and its color label. The task on this dataset is to predict the color of the number identified by the query. Five to nine distinct MNIST numbers with different colors in {green, yellow, white, red, blue} and scales in [0.5, 3.0] are randomly sampled and placed in each 100x100 image. When coloring numbers, Gaussian noise is added to the reference color value. To simulate more realistic situations, we made two variants of MREF by changing the backgrounds to either distractors (MDIST; Figure 4b) or natural images (MBG; Figure 4c). Background images in MDIST are constructed from randomly cropped 5x5 patches of MNIST images, whereas backgrounds in MBG are filled with natural scene images randomly chosen from the SUN Database (Xiao et al., 2014). The training, validation and test sets contain 30,000, 10,000 and 10,000 images, respectively.

Experimental Settings. We implement the proposed network with and without the local context observation, referred to as PAN-CTX and PAN, respectively. In addition, a soft attention network (SAN), a hard attention network (HAN) (Xu et al., 2015) and two variants of the spatial transformer network (STN-S and STN-M) (Jaderberg et al., 2015) are used as baseline models for comparison. While STN-S is the model with a single transformer layer, STN-M contains multiple transformer layers. We reimplemented SAN and the STNs following the descriptions in (Xu et al., 2015) and (Jaderberg et al., 2015), respectively, and trained HAN by optimizing the marginal log-likelihood loss, as this is more accurate and feasible due to the small search space in our task. The architectures of the image encoding network in SAN and HAN and of the localization networks in the STNs are all identical for fair comparison. The CNN in the proposed network also has the same architecture, except for the additional layers for hierarchical attention. The CNN is composed of four stacks of 3x3 convolutions with 32 channels (stride 1), each followed by a 2x2 max pooling layer (stride 2), as illustrated in Figure 5a. We used a single fc layer for classification because the task requires simple color prediction. The attention functions g^l_att(\cdot) for all models are formed as multi-layer perceptrons with two layers (Figure 5b). The function takes the concatenation of a query q, which is a one-hot vector representing the target object, and a feature vector f^l_{i,j}, and outputs an attention score s^l_{i,j}. In PAN-CTX, the attention functions of att1, att2 and att3 additionally take the local context F^l_{i,j} containing the adjacent features with \delta = 2. Every model is trained from scratch.

Table 1: Performance of attention models on the MREF, MDIST and MBG datasets.

(a) Color prediction accuracy [%]
           MREF    MDIST   MBG
STN-S      39.10   38.32   32.27
STN-M      93.89   85.09   52.25
SAN        82.94   75.73   53.77
HAN        81.84   78.49   55.84
PAN        95.92   91.65   69.46
PAN-CTX    98.51   96.02   85.55

(b) True-positive ratio [%]
           MREF    MDIST   MBG
Uniform     2.34    2.35    2.39
SAN        13.61   12.56    6.73
HAN        13.95   13.81    7.64
PAN        17.39   13.10    8.62
PAN-CTX    22.59   22.80   11.01

Figure 6: Analysis of algorithms on MREF (left), MDIST (middle) and MBG (right): (a) attribute prediction accuracies of the different models on test subsets at different scales (0.5 to 3.0); (b) precision-recall curves of object segmentation with attention probability.

Results. Table 1a presents the color prediction accuracy of all compared algorithms. PAN outperforms all of the previous approaches with significant margins, and PAN-CTX further improves the performance by exploiting the local contexts for attention estimation. While STN-S often fails to predict the correct answers, STN-M learns to predict the color of the target object through multiple transformations and shows performance comparable to PAN on MREF. However, the performance of STN-M drops dramatically as the dataset becomes more complex and realistic, resulting in even lower performance than SAN and HAN. Also, note that STN-S is capable of attending to any region attended by STN-M, since both models predict attention regions by estimating an affine transformation.
STN-M achieves the improvement by learning multiple transformers from gradients coming from different levels of features. In contrast to those parametric models, the proposed network can predict attention maps with more fine-grained shapes, capturing the spatial support of the target object better.

To evaluate the scale sensitivity of each model, we divided the test images into five subsets based on target object scales with uniform intervals and computed the accuracies of the models. The results are presented in Figure 6a, where SAN and HAN tend to predict the correct answers only in a scale range between 1.0 and 2.0, while their performance degrades significantly under wild scale changes. STN-M becomes vulnerable to scale variations in more realistic settings. In contrast, PAN and PAN-CTX are robust to scale variations due to their multi-scale attention mechanism, especially when the local contexts are incorporated.

Unlike STNs, whose attention is constrained to rhombic regions, the models based on feature-wise attention maps can produce attention regions adaptive to the shapes of the target object. We evaluate the attention quality of these models using two complementary criteria: true-positive ratio (TPR) and precision-recall (PR) curves. TPR measures how strongly attention is given to the proper location, by computing the ratio of the attention probability aggregated within the desired area (i.e., the ground-truth segmentation) to the attention probability over the whole image (Table 1b). PR measures the overlaps between ground-truth segmentations and binarized segmentation predictions constructed with different thresholds (Figure 6b). Note that the proposed model with the local context observation gives the best results, with significant margins over all the other methods in terms of both criteria. These results suggest that PAN-CTX constructs more accurate shapes of attended regions than all other attention models.

Figure 7: Qualitative results of SAN, HAN and PAN-CTX (example with query 8 and answer red, where SAN predicts white, HAN yellow, and PAN red). (a) Input images faded by the attended feature map (c). (b) Magnitude of activations in feature maps f^l_{i,j} before attention: the activations are mapped to the original image space by spreading activations to their receptive fields. (c) Magnitude of activations in the attended feature maps \hat{f}^l_{i,j}, which shows the effect of attention in contrast to (b). (d) Magnitude of activations of the attended feature maps \hat{f}^l_{i,j} at the original resolution of the feature map. For PAN-CTX, only the last three attention layers are visualized, and attentions of earlier layers are accumulated for visualizing higher attention layers. For HAN, (c) and (d) represent attention probability because the attended feature map is not available. Every image except the input image is rescaled into [0, 1] by (x - min)/(max - min).

Figure 7 shows the qualitative results of the proposed method and two baselines on the MBG dataset. The proposed model eventually yields accurate attention regions by gradually augmenting attention and suppressing irrelevant regions in the image. We can observe that the proposed model maintains a high attention resolution through the progressive attention process. In contrast, the baseline models attend to the target objects only once at the top layer, resulting in a coarse attention in size and shape.
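The TPR criterion just described can be stated in a few lines; this sketch assumes a binary ground-truth mask at the attention map's resolution (the region chosen here is hypothetical) and reuses alpha from the earlier attention sketch.

```python
def true_positive_ratio(alpha, mask):
    """TPR (Table 1b): fraction of the total attention probability that
    falls inside the ground-truth segmentation mask."""
    return alpha[mask].sum() / alpha.sum()

mask = np.zeros(alpha.shape, dtype=bool)
mask[2:5, 2:5] = True                      # hypothetical ground-truth region
print(true_positive_ratio(alpha, mask))
```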
More qualitative results of these experiments are presented in Appendix C.

4.2 ATTRIBUTE PREDICTION ON VISUAL GENOME

Dataset. Visual Genome (VG) (Krishna et al., 2016) is an image dataset containing several types of annotations: question/answer pairs, image captions, objects, object attributes and object relationships. We formulate object attribute prediction as a multi-label classification task with reference. Given an input image and a query (i.e., an object category), we predict the binary attributes of the individual objects specified by the query. We used 827 object classes and 749 attribute classes that appear more than 100 times. A total of 86,674 images with 667,882 object attribute labels are used for our experiment, and they are split into training, validation and test sets containing 43,337, 8,667 and 34,670 images, respectively. The task is challenging because the scales of objects vary widely and the attributes may be associated with very small objects.

Experimental Settings and Results. We mainly compare our algorithm with SAN and HAN, since the STNs could not learn a proper attention process on VG; their transformer layers generated padded images of different sizes and rotations to encode the query vector, fitting the query-specific biases. All the networks share the CNN architecture of the VGG-16 network (Simonyan & Zisserman, 2015), which is pretrained on ImageNet (Deng et al., 2009) and further fine-tuned on the VG dataset for attribute prediction. For SAN and HAN, an attention layer is attached to the last pooling layer in VGG-16, while PAN stacks an additional attention layer, with local contexts F^l_{i,j} with \delta = 2, on top of each of the last three pooling layers in VGG-16. We skip placing attention layers at the first two pooling layers (pool1 and pool2) because the features in those layers are not discriminative enough to filter out. We also test models with an object class conditional prior. In these models, the final attended feature is fused with the query once more by a fully connected layer, allowing the network to reflect the conditional distribution of the attributes given the query. Refer to Appendix B for more detailed descriptions of the network architectures.

All three models are evaluated in terms of mean average precision (mAP) weighted by the frequencies of the attribute labels in the test set, where the computation of mAP follows the PASCAL VOC protocol (Everingham et al., 2010). The proposed method consistently achieves the best weighted mAP scores in both experimental settings, as shown in Table 2, although the gain is reduced with the object class conditional prior. Table 2 also shows the TPR of each model measured with the ground-truth bounding boxes for evaluating the attention quality, and the proposed method shows the best TPR.

Table 2: Weighted mAP of attribute prediction and TPR of attentions measured with ground-truth bounding boxes on the VG dataset.

              attention only        w/ prior
              mAP      TPR          mAP      TPR
SAN           27.62    15.01        31.84    17.65
HAN           27.72    17.24        31.93    19.70
PAN-CTX       29.38    18.01        32.50    20.17

Figure 8: Visualization of example attentions of HAN and PAN-CTX on the VG dataset (example query: shoe; showing the input image, attention maps, and masked images). Attention maps present the magnitude of attended features, and red boxes show the ground-truth bounding boxes of the query.
Figure 8 presents the qualitative results of the proposed network and HAN on the VG dataset.

5 CONCLUSION

We proposed a novel hierarchical attention network, which progressively attends to regions of interest through multiple layers of a CNN. As the model is recursively applied to multiple layers of the CNN with an inherent feature hierarchy, it accurately predicts regions of interest with variable sizes and shapes. We also incorporate local contexts into our attention network for more robust estimation. The proposed network can be trained end-to-end with standard error backpropagation. We tested the model on both synthetic and real datasets, and demonstrated significant performance improvements over existing attention methods.
HJKt06-Ng
Review
7: Good paper, accept
The paper presents an architecture to incrementally attend to image regions at multiple layers of a deep CNN. In contrast to most other models, the model does not apply weighted average pooling in the earlier layers of the network but only in the last layer. Instead, the features are reweighted in each layer with the predicted attention.

1. Contribution of approach: The approach of using attention in this way is, to my knowledge, novel and interesting.

2. Qualitative results:
2.1. I like the large number of qualitative results; however, I would have wished the focus had been less on the "number" dataset and more on the Visual Genome dataset.
2.2. The qualitative results for the Genome dataset unfortunately do not provide the predicted attributes. It would be interesting to see e.g. the highest predicted attributes for a given query. So far the results only show the intermediate results.

3. Quantitative results:
3.1. The paper presents results on two datasets, one simulated dataset as well as Visual Genome. On both it shows moderate but significant improvements over related approaches.
3.2. For the Visual Genome dataset, it would be interesting to include a quantitative evaluation of how good the localization performance of the attention approach is.
3.3. It would be interesting to get a more detailed understanding of the model by providing results for the different CNN layers where the attention is applied.

4. It would be interesting to see results on more established tasks, e.g. VQA, where the model should apply similarly. In fact, the task on the numbers seems to be identical to the VQA task (input/output), so most/all state-of-the-art VQA approaches should be applicable.

Other (minor/discussion) points:
- Something seems wrong in the last two columns of Figure 11: the query "7" is blue, not green. Either the query or the answer seems wrong.
- Section 3: "In each layer, the each attended feature map" -> "In each layer, each attended feature map"
- I think Appendix A would be clearer if it stated that this is the attention mechanism used in SAN and which work it is based on.

Summary: While the experimental evaluation could be improved with more detailed evaluation, comparisons, and qualitative results, the presented evaluation is sufficient to validate the approach. The approach itself is, to my knowledge, novel and interesting, and speaks for acceptance.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
HyEeMu_xx
ICLR.cc/2017/conference
2017
Progressive Attention Networks for Visual Attribute Prediction
["Paul Hongsuck Seo", "Zhe Lin", "Scott Cohen", "Xiaohui Shen", "Bohyung Han"]
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
["Deep learning", "Computer vision", "Multi-modal learning"]
ABSTRACTWe propose a novel attention model which can accurately attend to target objectsof various scales and shapes in images. The model is trained to gradually suppressirrelevant regions in an input image via a progressive attentive process over multiplelayers of a convolutional neural network. The attentive process in each layerdetermines whether to pass or suppress features at certain spatial locations for usein the next layer. We further employ local contexts to estimate attention probabilityat each location since it is difficult to infer accurate attention by observing a featurevector from a single location only. The experiments on synthetic and real datasetsshow that the proposed attention network outperforms traditional attention methodsin visual attribute prediction tasks.1 I NTRODUCTIONAttentive mechanisms often play important roles in modern neural networks (NNs) especially incomputer vision tasks. Many visual attention models have been introduced in the previous literature,and they have shown that attaching an attention to NNs can improve the accuracy in various taskssuch as image classification (Jaderberg et al., 2015; Ba et al., 2015; Mnih et al., 2014; Larochelle &Hinton, 2010), image generation (Gregor et al., 2015), image caption generation (Xu et al., 2015) andvisual question answering (Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).There are several motivations for incorporating attentive mechanisms in NNs. One of them is thatit is analogous to the perceptual process of human beings. The human visual system concentratesattention to a region of interest instead of processing an entire scene. Likewise, in a neural attentionmodel, we can focus processing only on attended areas of the input image. This benefits us in termsof computational resources; the number of hidden units may be reduced since the hidden activationsonly need to encode the region with attention (Mnih et al., 2014).Another important motivation is that some computer vision tasks, e.g. visual question answering(VQA), require identifying the object for accurate attribute prediction. For example, when theinput image contains multiple objects, the task should focus on the object specified by the question.Figure 1 illustrates an example task to predict the color (answer) of a given input number (query).The query specifies a particular object in the input image (number 7 in this example) for answering itsattribute (red). To address this type of tasks, the network architecture should incorporate an attentivemechanism either explicitly or implicitly.One of the most popular attention mechanisms for NNs is the soft attention method (Xu et al.,2015), which aggregates responses in a feature map weighted by their attention probabilities (seeAppendix A for more details). This process results in a single attended feature vector. Since thesoft attention method is fully differentiable, the entire network can be trained end-to-end withstandard backpropagation. However, it can only model attention to local regions with a certain sizedepending on the receptive field of the layer chosen for attention. 
This makes the soft attentionmethod inappropriate for complicated cases, where objects involve significant variations in theirscales, and shapes.1Under review as a conference paper at ICLR 2017(a) input image (b) first attention (c) second attention (d) third attention (e) final attentionFigure 1: An example reference problem (with the query 7 and the answer red) and intermediateattention maps using our progressive attention model. It shows that attention is gradually refinedthrough the network layers for resolving the reference problem. Distracting patterns at smaller scalesare suppressed at earlier layers while those at larger scales ( e.g. 9) are suppressed at later layers withlarger receptive fields. All attended images are independently rescaled for the visualization.To overcome this limitation, we propose a novel attention network, referred to as progressive attentionnetwork (PAN), which enables precise attention over objects of different scales and shapes byattaching attentive mechanisms to multiple layers within a convolutional neural network (CNN).More specifically, the proposed network forces attention prediction in intermediate feature maps byforwarding the attended feature maps in each layer to the subsequent layers in the CNN. Since afeature to be attended in the current feature map is obtained by combining lower-level features withsmaller receptive fields, the network can learn to distill the precise spatial support relevant to thetarget objects as final attention. The contribution of this work is three-fold:A novel attention model (progressive attention network) which can be learned to predictattention matching accurate scale and shape of a target objectUse of local contexts to improve the stability of the progressive attention modelAchievement of significant performance improvement over traditional soft and hard attentionapproaches in query-specific visual attribute prediction tasksThe rest of this paper is organized as follows. We first review related work in Section 2. In Section 3,we describe the proposed model with local context information. We then present our experimentalresults on several datasets in Section 4 and conclude the paper in Section 5.2 R ELATED WORKAttention on Features The most straightforward attention mechanism is a feature based method,which selects a subset of features by explicitly attaching an attention model to NN architectures. Theapproaches relying on this attention mechanism have improved performance in many tasks (Xu et al.,2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015; Bahdanau et al., 2015; Luonget al., 2015; Weston et al., 2015; Graves et al., 2014). For example, they have been used to handlesequences of variable lengths in neural machine translation models (Bahdanau et al., 2015; Luonget al., 2015), speech recognition (Chorowski et al., 2014) and handwriting generation (Graves, 2013),and manage memory access mechanisms for memory networks (Weston et al., 2015) and neuralturing machines (Graves et al., 2014). When applied to computer vision tasks to resolve referenceproblems, these models are designed to pay attention to CNN features corresponding to subregionsin the input image. Image caption generation and visual question answering are typical examplesbenefited from this attention mechanism (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu& Saenko, 2015).Attention by Image Transformation Another stream of attention models is based on imagetransformations. 
These approaches transform a regular grid and sample from the input image withthe transformed grid whose element corresponds to a location in the input image. Ba et al. (2015)and Mnih et al. (2014) transform an input image with predicted translation parameters ( txandty)and a fixed scale factor ( ^s<1) for image classification or multiple object recognition. Scale factoris also predicted in (Gregor et al., 2015) for image generation, where the network uses Gaussianfilters for sampling. Spatial transformer networks (STNs) predict all six parameters of the affine2Under review as a conference paper at ICLR 2017transformation matrix, and even extend it to a projective transformation and a 16-point thin platespline transformation (Jaderberg et al., 2015). Because all these transformations used in (Jaderberget al., 2015) involve scale factors, STNs are capable of dealing with objects in different sizes. However,STN is limited when there are multiple candidate regions for attention. Our model overcomes thisproblem by formulating attention as progressive filtering on feature maps instead of assuming objectscan be roughly aligned by a single spatial transformation.Multiple Attention Processes There have been several approaches iteratively performing attentiveprocesses to resolve relations between targets. Yang et al. (2015) iteratively attend to imagesconditioned on the previous attention states for visual question answering as the objects of interestare often not specified explicitly in questions but implicitly in relational expressions about the targetobjects. Also, Weston et al. (2015) and Graves et al. (2014) incorporate attention mechanisms tomemory cells iteratively to retrieve different values stored in the memory. Our proposed model issimilar in spirit of iterative attention but aimed at attending to a single target object via operating onmultiple CNN layers progressively, i.e., attention information is predicted progressively from featuremaps through multiple layers of CNN to capture the fine shapes of the target object.In (Jaderberg et al., 2015), the authors also conducted an experiment with a network with multipletransformer layers. However, the attention shapes of STNs are still constrained to the type oftransformation regardless of the number of transformers. In contrast, the quality of the attentionshapes is improved through progressive attention process in the proposed method. Stollenga et al.(2014) introduced a deep network which manipulates intermediate features of a fixed classifier throughchannel-wise attention process. Although the channel-wise attention process is used at multiple layersof the network to manipulate the intermediate feature representations, they never explored spatialattention process. More importantly, this method requires to have an accurate pretrained classifierfor the target classes prior to learning attention while pretraining a general query-specific attributeclassifier is not trivial. It is also notable that both (Jaderberg et al., 2015) and (Stollenga et al., 2014)target simple classification tasks without queries while we aim to tackle the query-specific attributeprediction task where answers from a single input image can be very different depending on the inputquery.Training Attention Models The networks with soft attention are fully differentiable and thustrainable end-to-end by backpropagation. Xu et al. 
(2015) and Zaremba & Sutskever (2015) introduceda stochastic hard attention, where the network explicitly selects a single feature based on the predictedattention probability map. Because the explicit selection (or sampling) procedure is not differentiable,REINFORCE learning rule (Williams, 1992), is used to make networks trainable. Transformationbased attention models (Ba et al., 2015; Mnih et al., 2014) are mostly trained by REINFORCElearning rule but STN (Jaderberg et al., 2015) proposed a fully differentiable formulation and madeit possible to train end-to-end. Compared to these attention networks, the proposed network isalso trainable end-to-end by the standard backpropagation without any extra techniques since everyoperation within the network is differentiable.3 P ROGRESSIVE ATTENTION NETWORKSTo overcome the limitation of existing attention models in handling variable object scales and shapes,we propose a progressive attention mechanism. In the proposed model, irrelevant features at differentscales are suppressed by attention filtering steps in different CNN layers, and computation is focusedon the features corresponding to regions of interest. At each attention layer, the model predicts anattention map given the input query and the current feature map via an attention module, and then theattention maps is multiplied to the feature maps channel-wise to obtain attended feature map. In eachlayer, each attended feature map is then forwarded to the next layer of the CNN for construction ofthe following feature map, which is illustrated in Figure 2. This progressive attention process allowsus to estimate precise details of attention areas while maintaining deep representations appropriatefor high-level inference tasks.3Under review as a conference paper at ICLR 2017feature map (fl)attention probability ( αl)attended feature map ( fl)attended feature (fatt)attribute classifierΣnext convolution layer ( gCNNl+1)Figure 2: Overall procedure of progressive attention. Attentive processes are repeatedly applied tofeature maps at multiple layers and the resulting attended feature maps are used as input featuremaps for the next convolution layers in CNN. Attention probabilities lare estimated from featuremaps and input query. In the last attention layer, the attended feature maps are aggregated to a singlefeature vector (by sum pooling) and fed to the final attribute classifier.3.1 P ROGRESSIVE ATTENTIVE PROCESSLetfl2RHlWlClbe an output feature map of a layer l2f0;:::;Lgin CNN with width Wl,heightHlandClchannels, and fli;j2RClbe a feature at (i;j)of the feature map fl. In the proposedPAN, an attentive process is applied to multiple layers of CNN and we obtain the attended featuremap ^fl= [^fli;j], which is given by^fli;j=li;jfli;j: (1)Here, the attention probability li;jfor a feature fli;jis calculated bysli;j=glatt(fli;j;q;latt)andli;j=softmax i;j(sl)ifl=L(sli;j) otherwise; (2)whereglatt()denotes the attention function with a set of parameters lattfor layerl,sli;jis theattention score at (i;j)in layerl,qis the query, and ()is a sigmoid function. The attentionprobability at each location is independent of others in the same feature map, where a sigmoidfunction is employed to constrain attention probabilities between 0 and 1. 
For the last layer ofattention, we use a softmax function over the entire spatial region for final aggregation of features.Unlike the soft attention model (see Appendix A), in the intermediate attention layers, the attendedfeature map ^flis not summed up to generate a single vector representation of the attended regions.Instead, the attended feature map is forwarded to the next layer as an input to compute the nextfeature map, which is given byfl+1=gl+1CNN(^fl;l+1CNN) (3)wheregl+1CNN()is the next CNN operations parameterized by lCNN.This feedforward procedure with attentive processes in CNN is repeated from the input of the CNN,i.e.,f0=I, until ^fLis obtained. Then, the attended feature fattis finally retrieved by summing upall the features in the final attended feature map ^fLas in soft attention, which is given byfatt=HXiWXj^fLi;j=HXiWXjLi;jfLi;j: (4)The attended feature fattobtained by such process is then used as the input to the visual attributeclassifier as illustrated in Figure 2.In our models, we place the attention layers to the output of max pooling layers instead of every layerin CNN because the reduction of feature resolution within CNN mainly comes from pooling layers.In practice„ we can also skip the first few pooling layers and only attach the attention module to theoutputs of last Kpooling layers.4Under review as a conference paper at ICLR 2017(a)αi,jlgattlfi,jlαi,jlgattlFi,jl(b)Figure 3: Attention estimation (a) without local context and (b) with local context. In (a), li;jispredicted from fli;jonly while its spatially adjacent features are also used to estimate li;jin (b).3.2 M ULTI -RESOLUTION ATTENTION ESTIMATIONIn Eq. (3), the resolution of attention probability map ldepends on the size of the feature mapin the corresponding layer. Due to the nature of a CNN with convolution and pooling layers, theresolution oflwill decrease with the increasing depth of a layer. Since the attentive processes areperformed over multiple layers recursively in our framework, it is possible to attend to the regions ofspecific sizes and shapes. Note that the proposed network can exploit high-level semantics in deeprepresentations for inference without losing attention resolution.The progressive attention model is still very effective in predicting fine attention shapes as theattention information is aggregated over multiple layers to suppress irrelevant structures at differentgranularity. In lower layers, features whose receptive fields contain small distractors are suppressedfirst. Meanwhile, the features from a part of large distractors remain intact but passed to the next layerdelaying its suppression. In higher layers, features of these large distractors would get low attentionprobability as each feature contains information from larger receptive fields allowing the attentionmodule to distinguish whether the feature is from a distractor or the target object. This phenomenonis well demonstrated in the qualitative results in our experiments (Section 4). An additional benefit ofprogressive attention is that it is more straightforward during inference since it is a pure feedforwardnetwork.3.3 L OCAL CONTEXTA basic version of PAN discussed so far predicts an attention probability li;jbased solely on thefeaturefli;jat a single feature map location. We can improve the quality of attention estimation byallowing the attention layers to observe a local context of the target feature. The local context Fli;jofa featurefli;jis composed of its spatially adjacent features. 
For example, the local context can begiven byFli;j=ffls;tjisi+;jtj+gas illustrated in Figure 3. The attentionscore is now predicted by the attention network with local context assli;j=glatt(Fli;j;q;latt): (5)In this architecture, the area of the local context is given by the filter size corresponding to thecomposite operation of convolution followed by pooling in the next layer. The local context does notneed to be considered in the last layer of attention since its activations are used to compute the finalattended feature map. Local context improves attention prediction as it enables the centroid feature tobe compared with surrounding features which makes the estimated attention more discriminative.3.4 T RAINING PROGRESSIVE ATTENTION NETWORKSTraining a PAN is as simple as training a soft attention network (Xu et al., 2015) because everyoperation within the network is differentiable. The entire network is trained end-to-end by the standardbackpropagation minimizing the binary cross entropies of the object-specific visual attributes. Whenwe train it from a pretrained CNN, the CNN part should always be fine-tuned together since theintermediate attention maps may change the input distributions of their associated layers in CNN.5Under review as a conference paper at ICLR 2017(a) MREF (b) MDIST (c) MBGFigure 4: Example of the MREF datasets.conv1 (3×3@32)pool1 (2×2)conv2 (3×3@32)pool2 (2×2)conv3 (3×3@32)pool3 (2×2)att(STN)fc (classification layer)STNatt(soft)SANatt(hard)HANatt1att2att4PAN qfi,jlFi,jlfc layer(fusion layer, 32 activations)fc layer(estimation layer, 1 activation)si,jlconv4 (3×3@32)pool4 (2×2)att3(a) Network architectures of models on MREF. Arrows rep-resents direct connection to next layer without attention.conv1 (3×3@32)pool1 (2×2)conv2 (3×3@32)pool2 (2×2)conv3 (3×3@32)pool3 (2×2)att(STN)fc (classification layer)STNatt(soft)SOFTatt(hard)HARDatt1att2att3 (soft)HAttNet qfi,jlFi,jlfc layer(fusion layer, 32 activations)fc layer(estimation layer, 1 activation)si,jl(b) Architecture of attention function glatt(). Lo-cal contextsFli;jare used only in PAN-CTX.Figure 5: Detailed illustration of network architectures on MNIST Reference experiments.4 E XPERIMENTS4.1 MNIST R EFERENCEDatasets We conduct experiments on a synthetic dataset created from MNIST (LeCun et al., 1998).The synthetic dataset is referred to as MNIST Reference (MREF; Figure 4a), where each trainingexample is a triple of an image, a query number and its color label. The task on this dataset is topredict the color of the number identified by a query. Five to nine distinct MNIST numbers withdifferent colors in fgreen;yellow;white;red;bluegand scales in [0:5;3:0]are randomly sampledand located in each 100100image. When coloring numbers, Gaussian noise is added to thereference color value. To simulate more realistic situations, we made two variants of MREF bychainging backgrounds to either distractors (MDIST; Figure 4b) or natural images (MBG; Figure 4c).Background images in MDIST are constructed with randomly cropped 55patches of MNISTimages whereas backgrounds of MBG are filled with natural scene images randomly chosen from theSUN Database (Xiao et al., 2014). The training, validation and test sets contain 30,000, 10,000 and10,000 images respectively.Experimental Settings We implement the proposed network with and without the local contextobservation referred to as PAN-CTX and PAN, respectively. 
3.4 TRAINING PROGRESSIVE ATTENTION NETWORKS

Training a PAN is as simple as training a soft attention network (Xu et al., 2015), because every operation within the network is differentiable. The entire network is trained end-to-end by standard backpropagation, minimizing the binary cross entropies of the object-specific visual attributes. When training from a pretrained CNN, the CNN part should always be fine-tuned together, since the intermediate attention maps may change the input distributions of their associated layers in the CNN.

Figure 4: Examples from the (a) MREF, (b) MDIST and (c) MBG datasets.

Figure 5: Detailed illustration of the network architectures in the MNIST Reference experiments. (a) Network architectures of the models on MREF (STN, SAN, HAN and PAN, with attention layers att1–att4 attached to the pooling layers); arrows represent direct connections to the next layer without attention. (b) Architecture of the attention function g^l_{att}(\cdot): an fc fusion layer with 32 activations followed by an fc estimation layer with 1 activation. Local contexts F^l_{i,j} are used only in PAN-CTX.

4 EXPERIMENTS

4.1 MNIST REFERENCE

Datasets. We conduct experiments on a synthetic dataset created from MNIST (LeCun et al., 1998). The synthetic dataset is referred to as MNIST Reference (MREF; Figure 4a), where each training example is a triple of an image, a query number and its color label. The task on this dataset is to predict the color of the number identified by the query. Five to nine distinct MNIST numbers with different colors in {green, yellow, white, red, blue} and scales in [0.5, 3.0] are randomly sampled and placed in each 100×100 image. When coloring numbers, Gaussian noise is added to the reference color value. To simulate more realistic situations, we made two variants of MREF by changing the backgrounds to either distractors (MDIST; Figure 4b) or natural images (MBG; Figure 4c). Background images in MDIST are constructed from randomly cropped 5×5 patches of MNIST images, whereas the backgrounds of MBG are filled with natural scene images randomly chosen from the SUN Database (Xiao et al., 2014). The training, validation and test sets contain 30,000, 10,000 and 10,000 images, respectively.

Experimental Settings. We implement the proposed network with and without the local context observation, referred to as PAN-CTX and PAN, respectively. In addition, a soft attention network (SAN), a hard attention network (HAN) (Xu et al., 2015) and two variants of the spatial transformer network (STN-S and STN-M) (Jaderberg et al., 2015) are used as baseline models for comparison. While STN-S is the model with a single transformer layer, STN-M contains multiple transformer layers in the network. We reimplemented SAN and the STNs following the descriptions in (Xu et al., 2015) and (Jaderberg et al., 2015), respectively, and trained HAN by optimizing the marginal log-likelihood loss, as this is more accurate and feasible due to the small search space in our task. The architectures of the image encoding network in SAN and HAN and of the localization networks in the STNs are all identical for fair comparison. The CNN in the proposed network also has the same architecture, except for the additional layers for hierarchical attention. The CNN is composed of four stacks of 3×3 convolutions with 32 channels (stride 1), each followed by a 2×2 max pooling layer (stride 2), as illustrated in Figure 5a. We used a single fc layer for classification because the task requires simple color prediction. The attention functions g^l_{att}(\cdot) for all models are formed as multi-layer perceptrons with two layers (Figure 5b). The function takes the concatenation of a query q, which is a one-hot vector representing the target object, and a feature vector f^l_{i,j}, and outputs an attention score s^l_{i,j}. In PAN-CTX, the attention functions of att1, att2 and att3 additionally take the local context F^l_{i,j} containing the adjacent features, with \delta = 2. Every model is trained from scratch.
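The two-layer attention MLP just described (a 32-unit fusion layer followed by a one-unit estimation layer) might look as follows; the ReLU non-linearity and the weight shapes are assumptions, since the text does not specify them:

```python
import numpy as np

def g_att(f_ij, q, W1, b1, W2, b2):
    """Two-layer MLP attention function producing the score s^l_{i,j}.
    f_ij: feature vector at (i, j); q: one-hot query vector.
    W1: (C+Q, 32) fusion weights, W2: (32,) estimation weights (hypothetical)."""
    h = np.concatenate([f_ij, q]) @ W1 + b1   # fusion layer, 32 activations
    h = np.maximum(h, 0.0)                    # assumed ReLU non-linearity
    return h @ W2 + b2                        # estimation layer -> scalar score
```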
Table 1: Performance of attention models on the MREF, MDIST and MBG datasets.

(a) Color prediction accuracy [%]
         MREF   MDIST  MBG
STN-S    39.10  38.32  32.27
STN-M    93.89  85.09  52.25
SAN      82.94  75.73  53.77
HAN      81.84  78.49  55.84
PAN      95.92  91.65  69.46
PAN-CTX  98.51  96.02  85.55

(b) True-positive ratio [%]
         MREF   MDIST  MBG
Uniform   2.34   2.35   2.39
SAN      13.61  12.56   6.73
HAN      13.95  13.81   7.64
PAN      17.39  13.10   8.62
PAN-CTX  22.59  22.80  11.01

Figure 6: Analysis of the algorithms on MREF (left), MDIST (middle) and MBG (right). (a) Attribute prediction accuracies of the different models on test subsets of different scales. (b) Precision-recall curves of object segmentation with attention probability.

Results. Table 1a presents the color prediction accuracy of all compared algorithms. PAN outperforms all the previous approaches with significant margins, and PAN-CTX further improves the performance by exploiting the local contexts for attention estimation. While STN-S often fails to predict the correct answers, STN-M learns to predict the color of the target object through multiple transformations and shows performance comparable to PAN on MREF. However, the performance of STN-M drops dramatically as the dataset becomes more complex and realistic, resulting in even lower performance than SAN and HAN. Also, note that STN-S is capable of attending to any region attended by STN-M, since both models predict attention regions by estimating an affine transformation; STN-M achieves its improvement by learning multiple transformers from gradients coming from different levels of features. In contrast to those parametric models, the proposed network can predict attention maps with more fine-grained shapes, capturing the spatial support of the target object better.

To evaluate the scale sensitivity of each model, we divided the test images into five subsets based on target object scales with uniform intervals and computed the accuracies of the models. The results are presented in Figure 6a: SAN and HAN tend to predict the correct answers only in a scale range between 1.0 and 2.0, and their performance degrades significantly under wild scale changes, while STN-M becomes vulnerable to scale variations in the more realistic settings. In contrast, PAN and PAN-CTX are robust to scale variations due to their multi-scale attention mechanism, especially when the local contexts are incorporated.

Unlike the STNs, whose attention is constrained to rhombic regions, the models based on feature-wise attention maps can produce attention regions adaptive to the shapes of the target object. We evaluate the attention quality of these models using two complementary criteria: the true-positive ratio (TPR) and the precision-recall (PR) curve. TPR measures how strongly attention is given to the proper location, by computing the ratio of the aggregated attention probability within the desired area (i.e., the ground-truth segmentation) to the attention probability over the whole image (Table 1b). PR measures the overlap between ground-truth segmentations and binarized segmentation predictions constructed with different thresholds (Figure 6b). The proposed model with the local context observation gives the best results by a significant margin compared to all the other methods on both criteria. These results suggest that PAN-CTX constructs more accurate shapes of attended regions than all other attention models.

Figure 7: Qualitative results of SAN, HAN and PAN-CTX (example query: 8, answer: red; SAN: white, HAN: yellow, PAN: red). (a) Input images faded by the attended feature map (c). (b) Magnitude of activations in the feature maps f^l_{i,j} before attention; the activations are mapped to the original image space by spreading them over their receptive fields. (c) Magnitude of activations in the attended feature maps \hat{f}^l_{i,j}, which shows the effect of attention in contrast to (b). (d) Magnitude of activations of the attended feature maps \hat{f}^l_{i,j} at the original resolution of the feature map. For PAN-CTX, only the last three attention layers are visualized, and attentions of earlier layers are accumulated for visualizing higher attention layers. For HAN, (c) and (d) represent attention probability because an attended feature map is not available. Every image except the input image is rescaled into [0, 1] by (x − min)/(max − min).

Figure 7 shows the qualitative results of the proposed method and two baselines on the MBG dataset. The proposed model eventually yields accurate attention regions by gradually augmenting attention and suppressing irrelevant regions in the image, and it maintains high attention resolution throughout the progressive attention process. In contrast, the baseline models attend to the target objects only once, at the top layer, resulting in attention that is coarse in size and shape.
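The TPR criterion can be computed directly from an attention map and a ground-truth mask; a minimal sketch:

```python
import numpy as np

def true_positive_ratio(alpha, gt_mask):
    """Ratio of the attention mass inside the ground-truth segmentation to the
    attention mass over the whole image. alpha: (H, W) attention probabilities,
    gt_mask: (H, W) binary ground-truth mask."""
    return float((alpha * gt_mask).sum() / alpha.sum())
```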
More qualitative results for these experiments are presented in Appendix C.

4.2 ATTRIBUTE PREDICTION ON VISUAL GENOME

Dataset. Visual Genome (VG) (Krishna et al., 2016) is an image dataset containing several types of annotations: question/answer pairs, image captions, objects, object attributes and object relationships. We formulate object attribute prediction as a multi-label classification task with reference: given an input image and a query (i.e., an object category), we predict the binary attributes of the individual objects specified by the query. We used the 827 object classes and 749 attribute classes that appear more than 100 times. A total of 86,674 images with 667,882 object attribute labels are used for our experiment, split into training, validation and test sets containing 43,337, 8,667 and 34,670 images, respectively. The task is challenging because object scales vary largely and the attributes may be associated with very small objects.

Table 2: Weighted mAP of attribute prediction and TPR of attentions measured with ground-truth bounding boxes on the VG dataset.

          attention only     w/ prior
          mAP     TPR        mAP     TPR
SAN       27.62   15.01      31.84   17.65
HAN       27.72   17.24      31.93   19.70
PAN-CTX   29.38   18.01      32.50   20.17

Figure 8: Visualization of example attentions of HAN and PAN-CTX on the VG dataset (example query: shoe). Attention maps present the magnitude of the attended features, and red boxes show the ground-truth bounding boxes of the query.

Experimental Settings and Results. We mainly compare our algorithm with SAN and HAN, since the STNs could not learn a proper attention process on VG: their transformer layers generated padded images of different sizes and rotations to encode the query vector, merely fitting the query-specific biases. All the networks share the same CNN architecture, the VGG-16 network (Simonyan & Zisserman, 2015), which is pretrained on ImageNet (Deng et al., 2009) and further fine-tuned on the VG dataset for attribute prediction. For SAN and HAN, an attention layer is attached to the last pooling layer of VGG-16, while PAN stacks an additional attention layer, with local contexts F^l_{i,j} and \delta = 2, on top of each of the last three pooling layers of VGG-16. We do not place attention layers at the first two pooling layers (pool1 and pool2), because the features in those layers are not discriminative enough for filtering. We also test models with an object class conditional prior: in these models, the final attended feature is fused with the query once more by a fully connected layer, allowing the network to reflect the conditional distribution of the attributes given the query. Refer to Appendix B for more detailed descriptions of the network architectures.

All three models are evaluated in terms of mean average precision (mAP) weighted by the frequencies of the attribute labels in the test set, where the computation of mAP follows the PASCAL VOC protocol (Everingham et al., 2010). As shown in Table 2, the proposed method consistently achieves the best weighted mAP scores in both experimental settings, although the gain shrinks with the object class conditional prior. Table 2 also shows the TPR of each model, measured with the ground-truth bounding boxes to evaluate attention quality, and the proposed method again shows the best TPR.
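A sketch of the frequency-weighted mAP evaluation described above. It uses scikit-learn's average_precision_score as the per-attribute AP, which may differ in detail from the PASCAL VOC protocol the paper follows:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def weighted_map(y_true, y_score):
    """Frequency-weighted mAP over attributes. y_true: (N, A) binary labels,
    y_score: (N, A) predicted scores; each attribute's AP is weighted by its
    label frequency in the test set."""
    freqs = y_true.sum(axis=0)
    keep = np.where(freqs > 0)[0]             # skip attributes with no positives
    aps = np.array([average_precision_score(y_true[:, a], y_score[:, a])
                    for a in keep])
    return float((aps * freqs[keep]).sum() / freqs[keep].sum())
```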
Figure 8 presents the qualitative results of the proposed network and HAN on the VG dataset.

5 CONCLUSION

We proposed a novel hierarchical attention network which progressively attends to regions of interest through multiple layers of a CNN. As the model is recursively applied to multiple layers of the CNN with its inherent feature hierarchy, it accurately predicts regions of interest of variable sizes and shapes. We also incorporate local contexts into our attention network for more robust estimation. The proposed network can be trained end-to-end with standard error backpropagation. We tested the model on both synthetic and real datasets, and demonstrated significant performance improvements over existing attention methods.
SynYYsrNe
4: Ok but not good enough - rejection
This paper proposes an attention mechanism which is essentially a gating on every spatial feature. Though they claim novelty through the attention being progressive, progressive attention has been done before [Spatial Transformer Networks, Deep Networks with Internal Selective Attention through Feedback Connections], and the element-wise multiplicative gates are very similar to convolutional LSTMs and Highway Nets. There is a lack of novelty and no significant results.

Pros:
- The idea of progressive attention on features is good, but has been done in [Spatial Transformer Networks, Deep Networks with Internal Selective Attention through Feedback Connections]
- Good visualisations.

Cons:
- No progressive baselines were evaluated, e.g. STN and HAN at every layer acting on feature maps.
- Not clear how the query is fed into the localisation networks of the baselines.
- The gap between the baselines and PAN is very different on the author-made synthetic data versus the Visual Genome dataset. Why is this? There is no significant performance gain on any standard dataset.
- No real novelty.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HyEeMu_xx
ICLR.cc/2017/conference
2017
Progressive Attention Networks for Visual Attribute Prediction
["Paul Hongsuck Seo", "Zhe Lin", "Scott Cohen", "Xiaohui Shen", "Bohyung Han"]
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
["Deep learning", "Computer vision", "Multi-modal learning"]
ABSTRACT

We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate the attention probability at each location, since it is difficult to infer accurate attention by observing a feature vector from a single location only. Experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.

1 INTRODUCTION

Attentive mechanisms often play important roles in modern neural networks (NNs), especially in computer vision tasks. Many visual attention models have been introduced in the previous literature, and they have shown that attaching an attention to NNs can improve the accuracy in various tasks such as image classification (Jaderberg et al., 2015; Ba et al., 2015; Mnih et al., 2014; Larochelle & Hinton, 2010), image generation (Gregor et al., 2015), image caption generation (Xu et al., 2015) and visual question answering (Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

There are several motivations for incorporating attentive mechanisms in NNs. One of them is that they are analogous to the perceptual process of human beings. The human visual system concentrates attention on a region of interest instead of processing an entire scene. Likewise, in a neural attention model, we can focus processing only on the attended areas of the input image. This benefits us in terms of computational resources: the number of hidden units may be reduced, since the hidden activations only need to encode the region with attention (Mnih et al., 2014).

Another important motivation is that some computer vision tasks, e.g. visual question answering (VQA), require identifying the object for accurate attribute prediction. For example, when the input image contains multiple objects, the task should focus on the object specified by the question. Figure 1 illustrates an example task to predict the color (answer) of a given input number (query). The query specifies a particular object in the input image (number 7 in this example) for answering its attribute (red). To address this type of task, the network architecture should incorporate an attentive mechanism either explicitly or implicitly.

One of the most popular attention mechanisms for NNs is the soft attention method (Xu et al., 2015), which aggregates responses in a feature map weighted by their attention probabilities (see Appendix A for more details). This process results in a single attended feature vector. Since the soft attention method is fully differentiable, the entire network can be trained end-to-end with standard backpropagation.
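For reference, the soft attention aggregation just described can be sketched in a few lines; the per-location scores are assumed to be produced by some scoring network:

```python
import numpy as np

def soft_attention(f, scores):
    """Soft attention: normalize per-location scores (H, W) with a softmax and
    return the probability-weighted sum of the feature map f (H, W, C)."""
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()
    return (alpha[..., None] * f).sum(axis=(0, 1))  # single attended vector
```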
However, soft attention can only model attention to local regions of a certain size, depending on the receptive field of the layer chosen for attention. This makes the soft attention method inappropriate for complicated cases, where objects involve significant variations in their scales and shapes.

Figure 1: An example reference problem (with query 7 and answer red) and intermediate attention maps using our progressive attention model: (a) input image, (b) first attention, (c) second attention, (d) third attention, (e) final attention. Attention is gradually refined through the network layers to resolve the reference problem. Distracting patterns at smaller scales are suppressed at earlier layers, while those at larger scales (e.g. 9) are suppressed at later layers with larger receptive fields. All attended images are independently rescaled for visualization.

To overcome this limitation, we propose a novel attention network, referred to as the progressive attention network (PAN), which enables precise attention over objects of different scales and shapes by attaching attentive mechanisms to multiple layers within a convolutional neural network (CNN). More specifically, the proposed network forces attention prediction in intermediate feature maps by forwarding the attended feature maps in each layer to the subsequent layers in the CNN. Since a feature to be attended in the current feature map is obtained by combining lower-level features with smaller receptive fields, the network can learn to distill the precise spatial support relevant to the target objects as final attention. The contribution of this work is three-fold:

- A novel attention model (the progressive attention network) which can be learned to predict attention matching the accurate scale and shape of a target object
- The use of local contexts to improve the stability of the progressive attention model
- A significant performance improvement over traditional soft and hard attention approaches in query-specific visual attribute prediction tasks

The rest of this paper is organized as follows. We first review related work in Section 2. In Section 3, we describe the proposed model with local context information. We then present our experimental results on several datasets in Section 4 and conclude the paper in Section 5.

2 RELATED WORK

Attention on Features. The most straightforward attention mechanism is a feature-based method, which selects a subset of features by explicitly attaching an attention model to NN architectures. Approaches relying on this attention mechanism have improved performance in many tasks (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015; Bahdanau et al., 2015; Luong et al., 2015; Weston et al., 2015; Graves et al., 2014). For example, they have been used to handle sequences of variable lengths in neural machine translation models (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2014) and handwriting generation (Graves, 2013), and to manage memory access mechanisms for memory networks (Weston et al., 2015) and neural turing machines (Graves et al., 2014). When applied to computer vision tasks to resolve reference problems, these models are designed to pay attention to CNN features corresponding to subregions of the input image. Image caption generation and visual question answering are typical examples that benefit from this attention mechanism (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

Attention by Image Transformation. Another stream of attention models is based on image transformations.
These approaches transform a regular grid and sample from the input image with the transformed grid, whose elements correspond to locations in the input image. Ba et al. (2015) and Mnih et al. (2014) transform an input image with predicted translation parameters (t_x and t_y) and a fixed scale factor (\hat{s} < 1) for image classification or multiple object recognition. A scale factor is also predicted in (Gregor et al., 2015) for image generation, where the network uses Gaussian filters for sampling. Spatial transformer networks (STNs) predict all six parameters of an affine transformation matrix, and even extend it to a projective transformation and a 16-point thin plate spline transformation (Jaderberg et al., 2015). Because all the transformations used in (Jaderberg et al., 2015) involve scale factors, STNs are capable of dealing with objects of different sizes. However, STN is limited when there are multiple candidate regions for attention. Our model overcomes this problem by formulating attention as progressive filtering on feature maps, instead of assuming that objects can be roughly aligned by a single spatial transformation.

Multiple Attention Processes. There have been several approaches that iteratively perform attentive processes to resolve relations between targets. Yang et al. (2015) iteratively attend to images conditioned on the previous attention states for visual question answering, as the objects of interest are often not specified explicitly in questions but implicitly through relational expressions about the target objects. Also, Weston et al. (2015) and Graves et al. (2014) iteratively apply attention mechanisms to memory cells to retrieve different values stored in the memory. Our proposed model is similar in spirit to iterative attention, but aims at attending to a single target object by operating on multiple CNN layers progressively, i.e., attention information is predicted progressively from feature maps through multiple layers of the CNN to capture the fine shape of the target object.

In (Jaderberg et al., 2015), the authors also conducted an experiment with a network with multiple transformer layers. However, the attention shapes of STNs are still constrained by the type of transformation, regardless of the number of transformers. In contrast, the quality of the attention shapes is improved through the progressive attention process in the proposed method. Stollenga et al. (2014) introduced a deep network which manipulates intermediate features of a fixed classifier through a channel-wise attention process. Although the channel-wise attention process is applied at multiple layers of the network to manipulate the intermediate feature representations, they never explored a spatial attention process. More importantly, this method requires an accurate pretrained classifier for the target classes prior to learning attention, while pretraining a general query-specific attribute classifier is not trivial. It is also notable that both (Jaderberg et al., 2015) and (Stollenga et al., 2014) target simple classification tasks without queries, while we aim to tackle the query-specific attribute prediction task, where the answers from a single input image can be very different depending on the input query.

Training Attention Models. Networks with soft attention are fully differentiable and thus trainable end-to-end by backpropagation.
Xu et al. (2015) and Zaremba & Sutskever (2015) introduced a stochastic hard attention, where the network explicitly selects a single feature based on the predicted attention probability map. Because the explicit selection (or sampling) procedure is not differentiable, the REINFORCE learning rule (Williams, 1992) is used to make the networks trainable. Transformation-based attention models (Ba et al., 2015; Mnih et al., 2014) are mostly trained by the REINFORCE learning rule, but STN (Jaderberg et al., 2015) proposed a fully differentiable formulation that made end-to-end training possible. Compared to these attention networks, the proposed network is also trainable end-to-end by standard backpropagation without any extra techniques, since every operation within the network is differentiable.

3 PROGRESSIVE ATTENTION NETWORKS

To overcome the limitations of existing attention models in handling variable object scales and shapes, we propose a progressive attention mechanism. In the proposed model, irrelevant features at different scales are suppressed by attention filtering steps in different CNN layers, and computation is focused on the features corresponding to regions of interest. At each attention layer, the model predicts an attention map given the input query and the current feature map via an attention module, and the attention map is then multiplied into the feature map channel-wise to obtain the attended feature map. The attended feature map in each layer is then forwarded to the next layer of the CNN for the construction of the following feature map, as illustrated in Figure 2. This progressive attention process allows us to estimate precise details of the attention areas while maintaining deep representations appropriate for high-level inference tasks.

Figure 2: Overall procedure of progressive attention. Attentive processes are repeatedly applied to the feature maps f^l at multiple layers, and the resulting attended feature maps \hat{f}^l are used as input feature maps for the next convolution layers g^{l+1}_{CNN} in the CNN. Attention probabilities \alpha^l are estimated from the feature maps and the input query. In the last attention layer, the attended feature maps are aggregated into a single feature vector f_{att} (by sum pooling) and fed to the final attribute classifier.

3.1 PROGRESSIVE ATTENTIVE PROCESS

Let f^l \in R^{H_l \times W_l \times C_l} be the output feature map of a layer l \in {0, ..., L} in the CNN, with width W_l, height H_l and C_l channels, and let f^l_{i,j} \in R^{C_l} be the feature at (i, j) of the feature map f^l. In the proposed PAN, an attentive process is applied to multiple layers of the CNN and we obtain the attended feature map \hat{f}^l = [\hat{f}^l_{i,j}], which is given by

\hat{f}^l_{i,j} = \alpha^l_{i,j} f^l_{i,j}   (1)

Here, the attention probability \alpha^l_{i,j} for a feature f^l_{i,j} is calculated by

s^l_{i,j} = g^l_{att}(f^l_{i,j}, q; \theta^l_{att})  and  \alpha^l_{i,j} = softmax_{i,j}(s^l) if l = L, \sigma(s^l_{i,j}) otherwise   (2)

where g^l_{att}(\cdot) denotes the attention function with a set of parameters \theta^l_{att} for layer l, s^l_{i,j} is the attention score at (i, j) in layer l, q is the query, and \sigma(\cdot) is a sigmoid function. The attention probability at each location is independent of the others in the same feature map, where a sigmoid function is employed to constrain attention probabilities between 0 and 1.
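A minimal sketch of the per-location attentive process of Eqs. (1)–(2); g_att stands for any scoring function and is a hypothetical argument here:

```python
import numpy as np

def attend(f, q, g_att, is_last):
    """Eqs. (1)-(2): score every location, turn scores into probabilities
    (spatial softmax at the last layer, sigmoid otherwise), and gate f."""
    H, W, _ = f.shape
    s = np.array([[g_att(f[i, j], q) for j in range(W)] for i in range(H)])
    if is_last:
        e = np.exp(s - s.max())
        alpha = e / e.sum()                  # softmax over the whole map
    else:
        alpha = 1.0 / (1.0 + np.exp(-s))     # independent sigmoid gates
    return alpha[..., None] * f              # attended feature map \hat{f}^l
```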
SyYWBfzNl
Good paper, but would help to have experiments on a more benchmarked dataset
6: Marginally above acceptance threshold
This paper presents a hierarchical attention model that uses multiple stacked layers of soft attention in a convnet. The authors provide results on a synthetic dataset in addition to doing attribute prediction on the Visual Genome dataset. Overall I think this is a well executed paper, with good experimental results and nice qualitative visualizations. The main thing I believe it is missing would be experiments on a dataset like VQA which would help better place the significance of this work in context of other approaches. An important missing citation is Graves 2013 which had an early version of the attention model. Minor typo: "It confins possible attributes.." -> It confines.. "ImageNet (Deng et al., 2009), is used, and three additional" -> ".., are used,"
3: The reviewer is fairly confident that the evaluation is correct
SJ8BZTjeg
ICLR.cc/2017/conference
2017
Unsupervised Learning Using Generative Adversarial Training And Clustering
["Vittal Premachandran", "Alan L. Yuille"]
In this paper, we propose an unsupervised learning approach that makes use of two components: a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets, CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.
["generative adversarial training", "unsupervised", "clustering", "adversarial training", "unsupervised learning", "use", "components", "traditional clustering algorithm", "feature extractor"]
ABSTRACT

In this paper, we propose an unsupervised learning approach that makes use of two components: a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets, CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.

1 INTRODUCTION

Much of the recent work in machine learning and computer vision has focused on learning techniques for high-level tasks such as image classification (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); He et al. (2015)). Many of the state-of-the-art models employ Convolutional Neural Networks (CNNs) to extract high-level feature representations by processing the input data using multiple layers of convolutions, usually followed by some non-linear transform. CNNs have been successfully demonstrated to yield high-quality feature representations that produce state-of-the-art results on a variety of tasks, not only on image classification (as mentioned above), but also on semantic segmentation (Long et al. (2015); Chen et al. (2016a)), boundary detection (Xie & Tu (2015); Premachandran et al. (2015)), and object detection (Girshick et al. (2014)), among others. These models are trained to produce high-quality features using backpropagation, usually by pretraining on a large dataset (such as ImageNet) and then fine-tuning on the relevant dataset. Unfortunately, supervised learning suffers from certain challenges, especially in terms of scalability, since it requires large amounts of labeled data. Labeling millions of images requires extensive effort and is time consuming. Moreover, supervised training with a predefined set of classes limits the generalizability of the learned feature representations to novel classes.

To overcome the difficulties of labeling large amounts of training data, effort has gone into the development of semi-supervised and unsupervised learning techniques. The goal of unsupervised learning techniques is to learn representations that are interpretable, easily transferable to novel tasks and novel object categories, and to disentangle the informative representation of the data from nuisance variables (e.g. lighting, viewpoint, etc.) purely from unlabeled data. A common and widely used method for unsupervised learning is clustering using k-means. k-means clustering is a simple method that groups input features into different clusters. Traditionally, this approach mainly used low-level features such as raw pixel intensities, HOG features, GIST features, SIFT features, etc. Although the performance of k-means on such features is usually poor, Wang et al. (2015) used deep network features and employed k-means clustering to show strong results on grouping object parts. But the deep network that was used to extract the features was pretrained on ImageNet using class-label supervision (so, object knowledge was known).
It would be a natural extension to see if one can learn robust features using hierarchical feature learning in a purely unsupervised manner. However, since the objectives of unsupervised learning are not as concrete as the objectives of supervised learning, optimizing deep hierarchical models using backpropagation becomes difficult. Attempts have been made to come up with "pretext" objective functions, which are usually driven by "common sense" requirements, to do unsupervised learning. Some examples of these objectives include minimizing the reconstruction error (Vincent et al. (2008)), training models to identify surrogate classes (Dosovitskiy et al. (2014)), predicting the spatial positions of image patches (Doersch et al. (2015); Noroozi & Favaro (2016)), and minimizing the distance in representation space for objects tracked over a time period in a video sequence (Wang & Gupta (2015)).

Recently, much interest has gone into adversarial training. Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)) are of particular interest in this work. Progress in GANs has enabled significant improvement in the quality of the images being generated over the past couple of years (Denton et al. (2015); Radford et al. (2015)). While much of the recent effort has gone into the development of better architectures and training procedures for modeling and training the generative network, in this work, we systematically study the power of the representations learned by the generator's adversary, i.e., the discriminative model.

In this paper, we learn a deep network using generative adversarial training. We use the features extracted from the discriminative component and fuse them with traditional unsupervised learning algorithms like k-means to improve their performance. We perform various experiments over many different datasets (CIFAR-10, CIFAR-100 and STL-10) and show that the representations learned purely by unsupervised learning from an adversarial signal are meaningful representations of the input data. Our experiments show that in situations with minimal amounts of supervised training examples (and large amounts of unsupervised data), the representations learned with adversarial training perform competitively in comparison to supervised training on a similar architecture. We now provide a brief summary of the adversarial training employed by GAN and InfoGAN.

2 BACKGROUND ON ADVERSARIAL TRAINING

Generative Adversarial Networks (Goodfellow et al. (2014)) are composed of two components: the generator, G(\cdot), and the discriminator, D(\cdot). The generator maps a latent encoding to the data space, while the discriminator distinguishes between samples generated by the generator and real data. The generator is trained to fool the discriminator, while the discriminator is trained to not get fooled by the generator.

More formally, given training data samples, x \sim P_{data}(x), where P_{data}(x) is the true data distribution, the training of GANs proceeds by iterating between two steps. In the first step, we fix the parameters of the generative model, sample a latent code, z \sim P_{noise}(z), and generate data samples, G(z), which are then used to train the discriminator, D(\cdot), by updating its parameters to distinguish between G(z) and x.
The parameters of the discriminator can be updated by maximizing the expected log-likelihood,

E_{x \sim P_{data}(x)}[\log(D(x))] + E_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]   (1)

In the second step, we fix the parameters of the discriminator and update the parameters of the generator to generate samples that get classified as real by the discriminator. The parameters of G(\cdot) can be updated by minimizing

E_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]   (2)

The objective of this minimax game can be written as

\min_G \max_D V(G, D) = E_{x \sim P_{data}(x)}[\log(D(x))] + E_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]   (3)

2.1 INFOGAN

The formulation described above uses a noise vector, z, which is used by the generator, G(\cdot), to synthesize data. This noise vector does not impose any constraints on what the generated data should look like. Chen et al. (2016b) introduce a neat and simple idea to extend GANs into a feature-identifying system called InfoGAN. InfoGAN uses a structured latent code, c, which is input to the generator, G(\cdot), in addition to the noise vector, z. The code can either be a discrete code or a continuous code. In order to encourage the code to capture the inherent semantic structures in the training data, a new term is introduced into the objective function, which acts as a regularizer that forces high mutual information between the latent code c and the generated sample G(z, c). Since it is hard to maximize the mutual information I(c; G(z, c)) directly (because one would need to know the true distribution P(c|x)), Chen et al. (2016b) provide a variational lower bound, which can be obtained when using a parametric auxiliary distribution, Q(c|x), to approximate P(c|x). The variational lower bound that is obtained is

L_I(G, Q) = E_{c \sim P(c), z \sim P_{noise}(z)}[\log Q(c|G(c, z))] + H(c)   (4)

The InfoGAN objective is a regularized version of the original GAN objective (Eq. 3), where the regularizer is the variational lower bound of the mutual information,

\min_{G,Q} \max_D V_{InfoGAN}(G, D, Q) = V(G, D) - \lambda L_I(G, Q)   (5)

Chen et al. (2016b) share the parameters between Q(\cdot) and D(\cdot), which helps reduce the computational cost. We do the same in all of our experiments.

As can be seen from the first term of Eq. 4, the lower bound of the mutual information regularizer conveniently turns out to be a recognition model. If the optimization procedure converges successfully, one can hope to have learned a latent code that ends up representing the most salient and structured semantic features present in the data. The noise parameters, z, end up providing the stochasticity to the input that results in the production of samples with diversity.

3 UNSUPERVISED LEARNING WITH ADVERSARIAL TRAINING AND K-MEANS++ CLUSTERING

As mentioned in Section 1, we are interested in learning representations of images in a purely unsupervised manner. Both GAN and InfoGAN provide a way to train the discriminative network using the generated images as an adversary. InfoGAN is particularly interesting since it has the ability to directly predict the different categories that might be present in the training database. While the qualitative results presented in Chen et al. (2016b) show that the categories can be automatically identified on the MNIST dataset, unfortunately, the same result does not seem to extend to more complicated and realistic datasets (CIFAR-10, CIFAR-100 and STL-10). We modified the InfoGAN code released by the authors to support the more realistic RGB data.
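The two alternating updates of Section 2 (Eqs. (1)–(3)) can be written as plain log-losses; here D is a hypothetical callable returning probabilities in (0, 1), and x_fake stands for a batch of generated samples G(z):

```python
import numpy as np

def discriminator_objective(D, x_real, x_fake):
    """Eq. (1): the expected log-likelihood that D is trained to maximize."""
    return (np.mean(np.log(D(x_real))) +
            np.mean(np.log(1.0 - D(x_fake))))

def generator_objective(D, x_fake):
    """Eq. (2): the quantity that G is trained to minimize."""
    return np.mean(np.log(1.0 - D(x_fake)))
```

Alternating between maximizing the first quantity with respect to D's parameters and minimizing the second with respect to G's realizes the minimax game of Eq. (3).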
We then trained the model on the above-mentioned datasets to test whether it could automatically identify the categorical clusters present in the respective datasets. We found that while the InfoGAN models we trained on these datasets were successful in generating images that looked different for different categorical codes, they were unable to identify the class-level grouping that is present in these datasets.

Instead, we adopt a hybrid strategy for unsupervised learning. We first use the generative network as an adversary to train the discriminative network until convergence. Upon convergence, we extract features from the penultimate layer of the D(·) network and run a more traditional clustering algorithm, i.e., k-means++. Surprisingly, this simple strategy turns out to be much more effective at grouping data from similar categories than the approach of directly predicting the categorical groups. Note that one can plug in more sophisticated unsupervised learning algorithms instead of k-means++. We use k-means++ to show that even a simple approach can produce reasonable results.

Another motivation for using the features from the penultimate layers is that it facilitates feature transferability to novel classes and tasks. It is common in supervised learning approaches to first train a deep network on ImageNet images using class-level supervision, then to perform net surgery to chop off the top-level weights, and to use this truncated network as a feature extractor for further fine-tuning on different datasets and tasks. Doing so frees the model from having to be trained directly on the ultimate task that it might be used for. One can train the network on a "pretext" task and transfer the learned weights to other novel tasks. This is especially crucial for unsupervised learning, since the pretext task that is used to train the models is almost always much different from the specific task that the model will ultimately be used for.

[Figure 1: InfoGAN architecture used in all our experiments. Discriminative network: four 5x5 convolutions with stride 2 and 64, 128, 256 and 512 channels, each followed by LeakyReLU (with batch norm on all but the first), feeding an fc layer (dim 512) that produces both the real/fake output and Q(c|x). Generative network: an fc layer applied to (z, c), followed by four 5x5 transposed convolutions with stride 2 and 256, 128, 64 and 3 channels, each with batch norm and ReLU, and a final tanh.] Figure 1: The figure shows the InfoGAN architecture that was used in all our experiments. Notice that the input to G(·) is a combination of z and c. Also notice that most of the parameters are shared between the Q(·) network and the D(·) network, thus improving the computational efficiency.

3.1 NETWORK ARCHITECTURE

We use the DCGAN architecture from Radford et al. (2015) since it is widely used for generating images. Figure 1 shows a visualization of the architecture.

Generator: Note that the generator has been slightly modified to accept the structured latent code, c, in addition to the random noise, z. The first layer is a fully-connected (fc) layer, whose output is then reshaped into a 2-D grid of spatial resolution s/16 x s/16, where s is the size of the output image to be produced. Subsequent to this reshaping, the architecture has four layers of transposed convolution (sometimes referred to as deconvolution) with a stride of 2, each of which upsamples the input features to twice the spatial resolution. These layers are sandwiched by batch norm and ReLU layers.
Finally, we use a tanh non-linearity to map the features into [-1, 1].

Discriminator: The discriminator is a standard CNN with a series of convolutional layers followed by non-linearities. The architecture uses four convolutional layers sandwiched by batch norm and LeakyReLU layers. We don't use max pooling to reduce the spatial resolution of the input. Instead, we convolve the feature maps with a stride of two, which results in the output of each convolution layer being half the spatial resolution of the input feature map. This base architecture is shared between D(·) and Q(·). On top of this shared network, we use an fc layer to extract the features, which are then used to predict the categorical distribution. Notice that most of the computational cost is shared between the D(·) and the Q(·) networks, thereby making the entire training process computationally efficient.

3.2 UNSUPERVISED LEARNING WITH K-MEANS++

As mentioned previously, while InfoGAN has the ability to group data into multiple groups automatically, there is no constraint to enforce that the groups correspond to the various object-level categories that are present in the dataset. While this turned out to be true for the MNIST dataset (Chen et al. (2016b)), we believe that it was possible because the variations in the strokes that produce different digits are the largest source of variation in the dataset, which conveniently corresponds to the various digit categories, thereby enabling InfoGAN to act as a category recognition model. In more realistic datasets, the largest sources of variation need not (and, usually, do not) correspond to variations in the object-level categories. Our experiments show this to be true. When we trained InfoGAN to automatically group the CIFAR-10 images into 10 categories, we found that while InfoGAN was able to group the images into different groups, the groups did not correspond to object category-level groupings. Figure 2 shows some example samples generated by the model. Each row corresponds to a different category and each column in the row corresponds to a different sample from that category (obtained by keeping c fixed and by varying z). We can see that while the rows look different from one another, they do not correspond to the CIFAR-10 categories.

Therefore, we employ a hybrid approach to unsupervised clustering. We first train the discriminative network using either the vanilla GAN objective or the InfoGAN objective, until convergence. Upon convergence, we extract features for each image in the training set from the top of the shared network, labeled as φ(x) in Figure 1, and do average pooling across the spatial resolution for each feature channel. We then cluster these features using k-means++ into a discrete set of k categories. We set k to be the number of object classes that are present in the respective dataset. The cluster centers learned by k-means++ clustering act as the templates for the k categories that are present in the dataset.

During testing, we extract the feature representation of the test images by passing them through the discriminative network trained using the generator as an adversary, do average pooling on φ(x), and compute the distance of the test feature vector to each of the centers learnt by k-means++ clustering during the training phase. The test image is assigned an index corresponding to the index of the closest center.
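As an illustration of this train/test procedure, here is a minimal sketch using scikit-learn's k-means++ on pooled features. The trunk below is a randomly initialized stand-in for the shared D(·)/Q(·) network (in the paper, its weights come from the adversarial training stage), and the random tensors stand in for real images.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Stand-in for the shared trunk of Sec. 3.1; the layer sizes follow Figure 1,
# but the weights here are placeholders rather than adversarially trained ones.
trunk = nn.Sequential(
    nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 512, 5, stride=2, padding=2), nn.BatchNorm2d(512), nn.LeakyReLU(0.2),
).eval()

def phi(x):
    # Top conv feature maps, average-pooled over the spatial grid per channel.
    with torch.no_grad():
        return trunk(x).mean(dim=(2, 3)).numpy()

x_train = torch.rand(256, 3, 32, 32) * 2 - 1   # placeholders for real images in [-1, 1]
x_test = torch.rand(32, 3, 32, 32) * 2 - 1

# Training phase: k-means++ on phi(x); k is set to the number of object classes.
km = KMeans(n_clusters=10, init="k-means++", n_init=10, random_state=0).fit(phi(x_train))

# Testing phase: each test image is assigned to its nearest cluster center.
print(km.predict(phi(x_test)))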
Our experiments show that clustering on φ(x) produces better results than directly using the recognition model of InfoGAN. Note that while we use the simple k-means++ algorithm for clustering, it could be replaced by more sophisticated unsupervised learning algorithms. We do not explore further down this route since the scope of this work is to study the strength of the features learned by adversarial training.

Figure 2: The figure shows samples generated from InfoGAN trained on the CIFAR-10 dataset when the system was encouraged to identify 10 categories. Each row corresponds to a different cluster identified by InfoGAN. Each column corresponds to a different sample from that cluster. We can see that while InfoGAN can identify clusters that are different from each other, they do not correspond to the CIFAR-10 categories. See Sec. 4.1 for quantitative results.

An advantage of the hybrid approach is that it now allows us to use a variety of different "pretext" objectives. In other words, one can decouple the training objective from the testing requirements. In fact, we experimented with encouraging InfoGAN to identify more groups in the training data than the number of object categories in the dataset. For example, we trained InfoGAN on the CIFAR-10 dataset by encouraging the system to identify [10, 20, 30, 35, 40, 50 and 75] groups. Of course, these groups do not correspond to category-level groupings. However, to our surprise, we found that when the features obtained from InfoGANs trained on a large number of categories were used for clustering, they performed better at object categorization than the features obtained from an InfoGAN trained on the same number of object categories as present in the dataset. Section 4 provides quantitative results on these experiments.

4 EXPERIMENTS

We perform experiments on multiple datasets: CIFAR-10, CIFAR-100 and STL-10¹. We use ground-truth labels only for evaluation purposes and for training the supervised learning baseline. The training procedure is entirely unsupervised. We report results using two standard metrics that are used for evaluating unsupervised learning algorithms: the Adjusted Rand Index (ARI) and the Normalized Mutual Information (NMI) score. We provide three baselines: (i) we report results using simple features such as pixel intensities, HOG and GIST, which we call low-level visual features, (ii) we report results on the features obtained using standard GAN training, (iii) as an upper bound, we report results using supervised learning, where we train the weights in a discriminator network with the same architecture using the category-level labels that are provided by the datasets.

It is important to remember that we are interested in comparing the quality of the learned features that can be used for transfer to novel images, and not just the classification score on a pre-defined set of categories. The classification accuracy captures only how well a test image was correctly classified. If incorrectly classified, it does not quantify how bad the mistake was. ARI, on the other hand, is a better metric for evaluating the properties of the features because it measures not only how accurately pairs of objects were correctly grouped together, but also takes into account how many pairs of data points were incorrectly grouped.
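For reference, both metrics are available in scikit-learn; the toy example below shows how a clustering would be scored against ground-truth classes (the label values here are made up).

from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Ground-truth classes and predicted cluster indices for ten hypothetical images.
truth = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
pred  = [1, 1, 1, 0, 0, 0, 0, 2, 2, 2]   # cluster ids are arbitrary labels

# Both scores are invariant to permutations of the cluster ids, which is what
# makes them suitable for judging a clustering against ground-truth classes.
print("ARI:", adjusted_rand_score(truth, pred))
print("NMI:", normalized_mutual_info_score(truth, pred))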
Therefore, when comparing with the model that was trained using supervised learning, we ignore the top-level classification layer of that model, and quantify the quality of the representations, i.e., the features extracted from the penultimate layer, using ARI after clustering on them.

Figure 3: This figure shows all 64 filters from the first layer of the discriminative network trained on CIFAR-10. The visualization on the left corresponds to the filters learned using adversarial training. The visualization on the right corresponds to the filters learned for the same architecture using supervised learning. It is interesting to see that the filters on the left have more high-frequency components and the filters on the right are more smooth.

Before we go into the quantitative results, we visualize the filters of the first layer of the discriminative network and compare them across the two different training procedures. Figure 3 shows the visualization. On the left are the filters from the network that was trained using adversarial training. On the right are the filters from a network with the same architecture but trained using class-level supervision. Both these networks were trained on the CIFAR-10 dataset. We can see that while some of the filters look similar to each other, many of them are quite different. It is clear that the filters on the right are more smooth than the filters on the left. Recollect that the filters on the left are trained to fit both the real images and the generated images. When the generated images are not as high-quality as the real images, the filters that D(·) learns might not be as regularized as the ones learnt using only real data. We hypothesize that improving the quality of the generated images can help regularize the first-layer filters in D(·). We leave this route of exploration for future work.

¹We have released the code that was used in all our experiments at https://github.com/VittalP/UnsupGAN

Figure 4: CIFAR-10: (a) Plots the performance of the grouping algorithm when using the features learned from InfoGAN training when trained over multiple categories. Zero groups corresponds to vanilla GAN. -32 and -64 correspond to the output sizes of the generated images. -InfoGAN corresponds to the results obtained with direct prediction using the recognition model in InfoGAN. (b) Note that InfoGAN features perform better than vanilla GAN features. However, supervised learning outperforms unsupervised learning on this database.

4.1 CIFAR-10

The CIFAR-10 dataset consists of 50k training images and 10k testing images, of size 32x32, divided among 10 categories. We trained the model for two different image sizes: 32x32 and 64x64. We trained InfoGAN with different numbers of categories {10, 20, 30, 35, 40, 50, 75}. Figure 4a shows a plot of the performance measures versus the number of groups InfoGAN was trained to identify. We can see from the figure that as we increase the number of categories, the performance of the model goes up to a certain point and drops after that. This indicates that there exist datasets for which grouping into more categories than are present in the ground truth might help. We also plot the performance of the InfoGAN model when used directly as a prediction model. We can see from the plots that k-means++ clustering produces better results (ARI-32=0.097; NMI-32=0.18) than direct prediction (ARI-32-InfoGAN: 0.085; NMI-32-InfoGAN: 0.14).
We label the direct prediction results with a (-InfoGAN).

Figure 4b compares the performance when using different features. We can see that InfoGAN features trained with 50 clusters beat the features learned using vanilla GAN by a small margin. However, supervised training does much better (as one might have expected).

4.2 CIFAR-100

In this set of experiments, we use the images from the CIFAR-100 database for training. This database also contains 50k training examples and 10k test images, divided among 100 fine-scale categories and 20 coarse-level categories. We test the performance on the coarse categories. As before, we experiment with InfoGAN training with multiple numbers of categories {10, 20, 35, 50}. While the trend is not as noticeable as in the case of CIFAR-10, the best performance is obtained when we use 50 categories. Also, as before, k-means++ clustering of the features produces better performance (ARI=0.04) than the recognition model of InfoGAN (ARI=0.036).

Figure 5: CIFAR-100: (a) The number of groups used to train InfoGAN has less of an effect on CIFAR-100 than it had on CIFAR-10. However, the performance of k-means++ clustering is still better than direct prediction using the recognition model of InfoGAN. Please see Fig. 4a for labeling conventions. (b) InfoGAN features and GAN features perform similarly on this dataset. However, supervised learning features are only slightly better than the unsupervised counterparts.

Figure 5b compares the performance when we use the different features. Notice that the features obtained by adversarial training are as competitive as the features obtained using supervised training. We believe that this is for two reasons: (i) the CIFAR-100 coarse-level categories are much harder to distinguish than the CIFAR-10 categories, making it difficult for the supervised model to learn good features, and (ii) the number of training examples per category in CIFAR-100 is smaller than in CIFAR-10 because we are training using the 20 coarse categories, compared with the 10 of CIFAR-10. We label the direct prediction results with a (-InfoGAN).

4.3 STL-10

Finally, we also perform experiments on the STL-10 dataset. This database consists of 5000 images for training with labels, 100000 training images without labels, and 8000 images for testing. The dataset consists of 10 categories, and all the images are of size 96x96. This dataset brings out the advantages of unsupervised learning algorithms. The database is more than two times bigger than the CIFAR-10 and CIFAR-100 datasets in terms of the number of images, and each image is 9 times the size of the CIFAR images. Figure 6b shows that unsupervised learning with adversarial training outperforms the same model trained using supervised learning. From Figure 6a, we also notice that the features learned using vanilla GAN do better than the features learned using InfoGAN. Increasing the complexity of the dataset makes it difficult for InfoGAN to group the images in the dataset.

5 CONCLUSION

In this paper, we explore an unsupervised feature learning technique where the model is trained using adversarial training from a generative network. We use a generative model to generate images that act as an adversary to the discriminative network. We explore the standard GAN architecture and the InfoGAN architecture for training the discriminative model. We also show that direct prediction using InfoGAN's recognition model does not always result in identifying object category-level information.
Instead, we fuse the features learned by adversarial training with a traditional unsupervised learning approach, k-means clustering, and show that this combination produces better results than direct prediction. We also show that, in situations where there are limited amounts of labeled training data and large amounts of unlabeled data, adversarial training has the potential to outperform supervised learning.

Figure 6: STL-10: (a) InfoGAN's performance drops with an increase in the number of groups. (b) Vanilla GAN's features outperform InfoGAN-trained features. Also, notice that, with just 5000 labeled training images, supervised learning starts to reach its limits. However, our model makes use of the additional 100000 unlabeled images and is able to learn representations that surpass the performance of features learned using the supervised model.
B1v-2iWNx
Review
3: Clear rejection
The paper proposes an approach to unsupervised learning based on generative adversarial networks (GANs) and clustering. The general topic of unsupervised learning is important, and the proposed approach makes some sense, but the experimental evaluation is very weak and does not allow one to judge whether the proposed method is competitive with existing alternatives. Therefore the paper cannot be published in its current form. More detailed remarks (many of these are copies of my pre-review questions the authors have not responded to):

1) The related work overview looks incomplete. There has been work on combining clustering with deep learning, for example [1] or [2] look very related. A long list of potentially related papers can be found here: https://amundtveit.com/2016/12/02/deep-learning-for-clustering/ . From the GAN side, for example [3] looks related. I would like the authors to comment on the relation of their approach to existing work, if possible compare with existing approaches, and if not possible, explain why.

[1] Xie et al., "Unsupervised Deep Embedding for Clustering Analysis", ICML 2016, http://jmlr.org/proceedings/papers/v48/xieb16.pdf
[2] Yang et al., "Joint Unsupervised Learning of Deep Representations and Image Clusters", CVPR 2016, http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Yang_Joint_Unsupervised_Learning_CVPR_2016_paper.pdf
[3] J.T. Springenberg, "Unsupervised and semi-supervised learning with categorical generative adversarial networks", ICLR 2016, https://arxiv.org/pdf/1511.06390v2.pdf

2) The authors do not report classification accuracies, which makes it very difficult to compare their results with existing work. Classification accuracies should be reported. They may not be a perfect measure of feature quality, but reporting them in addition to ARI and NMI would not hurt.

3) The authors have not compared their approach to existing unsupervised feature learning approaches, for example feature learning with k-means (Coates and Ng 2011), sparse coding methods such as Hierarchical Matching Pursuit (Bo et al., 2012 and 2013), and Exemplar-CNN (Dosovitskiy et al. 2014).

4) It looks like in Figure 2 every "class" consists essentially of a single image and its slight variations. Doesn't this mean GAN training failed? Do all your GANs produce samples of this quality?

5) Why do you not show results with visual features on STL-10?

6) The supervisedly learned filters in Figure 3 look unusual to me; they are normally not that smooth. Have you optimized the hyperparameters? What is the resulting accuracy?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJ8BZTjeg
ICLR.cc/2017/conference
2017
Unsupervised Learning Using Generative Adversarial Training And Clustering
["Vittal Premachandran", "Alan L. Yuille"]
In this paper, we propose an unsupervised learning approach that makes use of two components: a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets: CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.
["generative adversarial training", "unsupervised", "clustering", "adversarial training", "unsupervised learning", "use", "components", "traditional clustering algorithm", "feature extractor"]
ABSTRACT

In this paper, we propose an unsupervised learning approach that makes use of two components: a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets: CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.

1 INTRODUCTION

Much of the recent work in machine learning and computer vision has focused on learning techniques for high-level tasks such as image classification (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); He et al. (2015)). Many of the state-of-the-art models employ Convolutional Neural Networks (CNNs) to extract high-level feature representations by processing the input data using multiple layers of convolutions, usually followed by some non-linear transform. CNNs have been successfully demonstrated to yield high-quality feature representations that produce state-of-the-art results on a variety of tasks, not only on image classification (as mentioned above), but also on semantic segmentation (Long et al. (2015); Chen et al. (2016a)), boundary detection (Xie & Tu (2015); Premachandran et al. (2015)), and object detection (Girshick et al. (2014)), among others. These models are trained to produce high-quality features using backpropagation, usually by pretraining on a large dataset (such as ImageNet) and then fine-tuning on the relevant dataset. Unfortunately, supervised learning suffers from certain challenges, especially in terms of scalability, since it requires large amounts of labeled data. Labeling millions of images requires extensive effort and is time consuming. Moreover, supervised training with a predefined set of classes limits the generalizability of the learned feature representations to novel classes.

To overcome the difficulties of labeling large amounts of training data, effort has gone into the development of semi-supervised and unsupervised learning techniques. The goal of unsupervised learning techniques is to learn representations that are interpretable, easily transferable to novel tasks and novel object categories, and to disentangle the informative representation of the data from nuisance variables (e.g. lighting, viewpoint, etc.) purely from unlabeled data. A common and widely used method for unsupervised learning is to do clustering using k-means. k-means clustering is a simple method that groups input features into different clusters. Traditionally, this approach mainly used low-level features such as raw pixel intensities, HOG features, GIST features, SIFT features, etc. Although the performance of k-means on such features is usually poor, Wang et al. (2015) used deep network features and employed k-means clustering to show strong results on grouping object parts. But, the deep network that was used to extract the features was pre-trained on ImageNet using class-label supervision (so, object knowledge was known).
SynBgHuNx
review
3: Clear rejection
The paper investigates the task of unsupervised learning with deep features via k-means clustering. The entire pipeline can be decomposed into two steps: (1) unsupervised feature learning based on the GAN framework and (2) k-means clustering using learned deep network features. Following the GAN framework and its extension InfoGAN, the first step is to train a pair of discriminator and generator networks from scratch using the min-max objective. Then, k-means clustering is applied to the top-layer features from the discriminator network. For evaluation, the proposed unsupervised feature learning approach is compared against traditional hand-crafted features such as HOG and a supervised method on three benchmark datasets. Normalized Mutual Information (NMI) and the Adjusted Rand Index (ARI) have been used as the evaluation metrics for the experimental comparison. Although the proposed method may be potentially useful in practice (if refined further), I find that the method lacks novelty and the experimental results are not significant enough.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJ8BZTjeg
ICLR.cc/2017/conference
2017
Unsupervised Learning Using Generative Adversarial Training And Clustering
["Vittal Premachandran", "Alan L. Yuille"]
In this paper, we propose an unsupervised learning approach that makes use of two components: a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets: CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.
["generative adversarial training", "unsupervised", "clustering", "adversarial training", "unsupervised learning", "use", "components", "traditional clustering algorithm", "feature extractor"]
ABSTRACTIn this paper, we propose an unsupervised learning approach that makes use of twocomponents; a deep hierarchical feature extractor, and a more traditional cluster-ing algorithm. We train the feature extractor in a purely unsupervised mannerusing generative adversarial training and, in the process, study the strengths oflearning using a generative model as an adversary. We also show that adversar-ial training as done in Generative Adversarial Networks (GANs) is not sufficientto automatically group data into categorical clusters. Instead, we use a more tra-ditional grouping algorithm, k-means clustering, to cluster the features learnedusing adversarial training. We experiment on three well-known datasets, CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approachperforms similarly to supervised learning approaches, and, might even be betterin situations with small amounts of labeled training data and large amounts ofunlabeled data.1 I NTRODUCTIONMuch of the recent work in machine learning and computer vision has focused on llearning tech-niques for high-level tasks such as image classification (Krizhevsky et al. (2012); Simonyan &Zisserman (2014); He et al. (2015)). Many of the state-of-the-art models employ ConvolutionalNeural Networks (CNNs) to extract high-level feature representations by processing the input datausing multiple layers of convolutions, usually followed by some non-linear transform. CNNs havesuccessfully demonstrated to yield high-quality feature representations that produce state-of-the-artresults on a variety of tasks, not only on image classification (as mentioned above), but also onsemantic segmentation (Long et al. (2015); Chen et al. (2016a)), boundary detection (Xie & Tu(2015); Premachandran et al. (2015)), and object detection (Girshick et al. (2014)), among oth-ers. These models are trained to produce high-quality features using backpropagation, usually bypretraining on a large dataset (such as ImageNet) and then fine tuning on the relevant dataset. Un-fortunately, supervised learning suffers from certain challenges, especially, in terms of scalabilitysince it requires large amounts of labeled data. Labeling millions of images requires extensive effortand is time consuming. Moreover, supervised training with a predefined set of classes, limits thegeneralizability of the learned feature representations to novel classes.To overcome the difficulties of labeling large amounts of training data, effort has gone into thedevelopment of semi-supervised and unsupervised learning techniques. The goal of unsupservisedlearning techniques is to learn representations that are interpretable, easily transferable to noveltasks and novel object categories, and to disentangle the informative representation of the data fromnuisance variables (e.g. lighting, viewpoint, etc.) purely from unlabeled data. A common and widelyused method for unsupervised learning is to do clustering using k-Means. k-Means clustering is asimple method that groups input features into different clusters. Traditionally, this approach mainlyused low-level features such as raw pixel intensities, HOG features, GIST features, SIFT features,etc. Although the performance of k-means on such features is usually poor, Wang et al. (2015) useddeep network features and employed k-means clustering to show strong results on grouping objectparts. But, the deep network that was used to extract the features was pre-trained on ImageNet usingclass-label supervision (so, object knowledge was known). 
It would be a natural extension to see ifone can learn robust features using hierarchical feature learning in a purely unsupervised manner.1Under review as a conference paper at ICLR 2017However, since the objectives of unsupervised learning are not as concrete as the objectives ofsupervised learning, optimizing deep hierarchical models using backpropagation becomes difficult.Attempts have been made to come up with “pretext” objective functions, which are usually drivenby “common sense” requirements, to do unsupervised learning. Some examples of these objec-tives include minimizing the reconstruction error (Vincent et al. (2008)), training models to identifysurrogate classes (Dosovitskiy et al. (2014)), predicting spatial position of image patches (Doerschet al. (2015); Noroozi & Favaro (2016)), and minimizing the distance in the representation space forobjects tracked over a time period in a video sequence (Wang & Gupta (2015))Recently, much interest has gone into adversarial training. Generative Adversarial Networks(GANs) (Goodfellow et al. (2014)) are of particular interest in this work. Progress in GANs haveenabled significant improvement in the quality of images being generated in the past couple of years(Denton et al. (2015); Radford et al. (2015)). While much of the recent effort has gone in the de-velopment of better architectures and training procedures for modeling and training the generativenetwork, in this work, we systematically study the power of the representations learned by the gen-erator’s adversary, i.e., the discriminative model.In this paper, we learn a deep network using generative adversarial training. We use the featuresextracted from the discriminative component and fuse it with traditional unsupservised learning al-gorithms like k-Means to improve their performance. We perform various experiments over manydifferent datasets (CIFAR-10, CIFAR-100 and STL-10) and show that the representations that canbe learned purely by unsupervised learning from an adversarial signal helps to learn meaningfulrepresentations of input data. Our experiments show that under situations with minimal amounts ofsupervised training examples (and large amounts of unsupervised data), the representations learnedwith adversarial training perform competitively in comparison to supervised training on a similararchitecture. We now provide a brief summary of adversarial training employed by GAN and Info-GAN.2 B ACKGROUND ON ADVERSARIAL TRAININGGenerative Adversarial Networks (Goodfellow et al. (2014)) are composed of two components; thegenerator,G(:), and the discriminator, D(:). The generator maps a latent encoding to the data space,while the discriminator distinguishes between samples generated by the generator and real data. Thegenerator is trained to fool the discriminator, while the discriminator is trained to not get fooled bythe generator.More formally, given training data samples, xPdata(x), wherePdata(x)is the true data dis-tribution, the training of GANs proceeds by iterating between two-steps. In the first step, we fixthe parameters of the generative model, sample a latent code, zPnoise(z), and generate datasamples,G(z), which is then used to train the discriminator, D(:), by updating its parameters to dis-tinguish between G(z)andx. 
The parameters of the discriminator can be updated by maximizing the expected log-likelihood,

$$E_{x \sim P_{data}(x)}[\log(D(x))] + E_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]. \quad (1)$$

In the second step, we fix the parameters of the discriminator and update the parameters of the generator to generate samples that get classified as real by the discriminator. The parameters of $G(\cdot)$ can be updated by minimizing,

$$E_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]. \quad (2)$$

The objective of this minimax game can be written as

$$\min_G \max_D V(G, D) = E_{x \sim P_{data}(x)}[\log(D(x))] + E_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]. \quad (3)$$

2.1 INFOGAN

The formulation described above uses a noise vector, $z$, which is used by the generator, $G(\cdot)$, to synthesize data. This noise vector does not impose any constraints on what the generated data should look like. Chen et al. (2016b) introduce a neat and simple idea to extend GANs into a feature-identifying system called InfoGAN. InfoGAN uses a structured latent code, $c$, which is input to the generator, $G(\cdot)$, in addition to the noise vector, $z$. The code can either be a discrete code or a continuous code. In order to encourage the code to capture the inherent semantic structures in the training data, a new term is introduced to the objective function, which acts as a regularizer that forces high mutual information between the latent code, $c$, and the generated sample, $G(z, c)$. Since it is hard to maximize the mutual information, $I(c; G(z, c))$, directly (because one would need to know the true distribution $P(c|x)$), Chen et al. (2016b) provide a variational lower bound, which can be obtained when using a parametric auxiliary distribution, $Q(c|x)$, to approximate $P(c|x)$. The variational lower bound that is obtained is

$$L_I(G, Q) = E_{c \sim P(c), z \sim P_{noise}(z)}[\log Q(c|G(c, z))] + H(c). \quad (4)$$

The InfoGAN objective is a regularized version of the original GAN objective (Eq. 3), where the regularizer is the variational lower bound of mutual information,

$$\min_{G,Q} \max_D V_{InfoGAN}(G, D, Q) = V(G, D) - \lambda L_I(G, Q). \quad (5)$$

Chen et al. (2016b) share the parameters between $Q(\cdot)$ and $D(\cdot)$, which helps reduce the computational cost. We do the same in all of our experiments.

As can be seen from the first term of Eq. 4, the lower bound of the mutual information regularizer conveniently turns out to be a recognition model. If the optimization procedure converges successfully, one can hope to have learned a latent code that ends up representing the most salient and structured semantic features present in the data. The noise parameters, $z$, end up providing the stochasticity to the input that results in the production of samples with diversity.

3 UNSUPERVISED LEARNING WITH ADVERSARIAL TRAINING AND K-MEANS++ CLUSTERING

As mentioned in Section 1, we are interested in learning representations of images in a purely unsupervised manner. Both GAN and InfoGAN provide a way to train the discriminative network using the generated images as an adversary. InfoGAN is particularly interesting since it has the ability to directly predict the different categories that might be present in the training database. While the qualitative results presented in Chen et al. (2016b) show that the categories can be automatically identified on the MNIST dataset, unfortunately, the same result does not seem to extend to more complicated and realistic datasets (CIFAR-10, CIFAR-100 and STL-10). We modified the InfoGAN code released by the authors to enable support of the more realistic RGB data.
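To make the alternating updates of Eqs. 1-5 concrete, here is a minimal, illustrative PyTorch sketch of one training iteration. The module names (`D`, `G`, `Q`), the optimizers, and the hyperparameter `lam` (the λ of Eq. 5) are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def adversarial_step(D, G, Q, opt_d, opt_g, x_real, z_dim, c_dim, lam=1.0):
    """One alternating GAN/InfoGAN update (sketch of Eqs. 1-5).

    D(x): probability that x is real; G(z, c): generated sample;
    Q(x): logits over the categorical code c (Q shares its trunk with D).
    """
    b = x_real.size(0)
    z = torch.randn(b, z_dim)                      # z ~ P_noise(z)
    c = torch.randint(c_dim, (b,))                 # c ~ P(c), categorical
    c_1hot = F.one_hot(c, c_dim).float()

    # Step 1 (Eq. 1): update D to separate real x from generated G(z, c).
    opt_d.zero_grad()
    x_fake = G(z, c_1hot).detach()
    d_loss = -(torch.log(D(x_real) + 1e-8).mean() +
               torch.log(1.0 - D(x_fake) + 1e-8).mean())
    d_loss.backward()
    opt_d.step()

    # Step 2 (Eqs. 2, 4, 5): update G (and Q) to fool D while keeping
    # the code c recoverable from the generated sample.
    opt_g.zero_grad()
    x_fake = G(z, c_1hot)
    g_loss = torch.log(1.0 - D(x_fake) + 1e-8).mean()
    mi_lower_bound = -F.cross_entropy(Q(x_fake), c)  # E[log Q(c|G(z,c))], up to H(c)
    (g_loss - lam * mi_lower_bound).backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In practice the non-saturating generator loss $-\log D(G(z, c))$ is often substituted for Eq. 2, as noted by Goodfellow et al. (2014), but the sketch follows the objective as written.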
We then trained the model on the above-mentioned datasets to test whether it could automatically identify the categorical clusters present in the respective datasets. We found that while the InfoGAN models that we trained on these datasets were successful in generating images that looked different for different categorical codes, they were unable to identify the class-level grouping that is present in these datasets.

Instead, we adopt a hybrid strategy for unsupervised learning. We first use the generative network as an adversary to train the discriminative network until convergence. Upon convergence, we extract features from the penultimate layer of the $D(\cdot)$ network and run a more traditional clustering algorithm, i.e., k-means++. Surprisingly, this simple strategy turns out to be much more effective at grouping data from similar categories than the approach of directly predicting the categorical groups. Note that one can plug in more sophisticated unsupervised learning algorithms instead of k-means++. We use k-means++ to show that even a simple approach can produce reasonable results.

Another motivation for using the features from the penultimate layers is that it facilitates feature transferability to novel classes and tasks. It is common in supervised learning approaches to first train a deep network on ImageNet images using class-level supervision, then to perform net surgery to chop off the top-level weights, and to use this truncated network as a feature extractor for further fine-tuning on different datasets and tasks. Doing so means that the model does not have to be trained only on the ultimate task that it might be used for. One can train the network on a "pretext" task and transfer the learned weights to other novel tasks. This is especially crucial for unsupervised learning, since the pretext task that is used to train the models is almost always much different from the specific task that the model will ultimately be used for.

[Figure 1: The InfoGAN architecture used in all our experiments. Discriminative network: four 5x5 convolutions (dims 64, 128, 256, 512, stride 2), the first followed by LeakyReLU and the rest by BatchNorm + LeakyReLU, then an fc layer (dim 512) feeding the real/fake (T/F) output and Q(c|x). Generative network: an fc layer on (z, c), followed by four 5x5 transposed convolutions (dims 256, 128, 64, 3, stride 2) with BatchNorm + ReLU, and a final tanh producing G(z, c). Notice that the input to G(.) is a combination of z and c. Also notice that most of the parameters are shared between the Q(.) network and the D(.) network, thus improving the computational efficiency.]

3.1 NETWORK ARCHITECTURE

We use the DCGAN architecture from Radford et al. (2015) since it is widely used for generating images. Figure 1 shows a visualization of the architecture.

Generator: Note that the generator has been slightly modified to accept the structured latent code, $c$, in addition to the random noise, $z$. The first layer is a fully-connected (fc) layer, which is then reshaped into a 2-D grid of spatial resolution $s/16 \times s/16$, where $s$ is the size of the output image to be produced. Subsequent to this reshaping, the architecture has four layers of transposed convolution (sometimes referred to as deconvolution) with a stride of 2, each of which upsamples the input features to twice the spatial resolution. These layers are sandwiched by batch norm and ReLU layers.
Finally, we use a tanh non-linearity to map the features into $[-1, 1]$.

Discriminator: The discriminator is a standard CNN with a series of convolutional layers followed by non-linearities. The architecture uses four convolutional layers sandwiched by batch norm and Leaky ReLU layers. We don't use max pooling to reduce the spatial resolution of the input. Instead, we convolve the feature maps with a stride of two, which results in the output of each convolution layer being half the spatial resolution of the input feature map. This base architecture is shared between $D(\cdot)$ and $Q(\cdot)$. On top of this shared network, we use an fc layer to extract the features, which are then used to predict the categorical distribution. Notice that most of the computational cost is shared between the $D(\cdot)$ and the $Q(\cdot)$ networks, thereby making the entire training process computationally efficient.

3.2 UNSUPERVISED LEARNING WITH K-MEANS++

As mentioned previously, while InfoGAN has the ability to group data into multiple groups automatically, there is no constraint to enforce that the groups need to correspond to the various object-level categories that are present in the dataset. While this turned out to be true for the MNIST dataset (Chen et al. (2016b)), we believe that it was possible because the variations in the strokes that produce different digits correspond to the source of biggest variation in the dataset, which conveniently corresponds to the various digit categories, thereby enabling InfoGAN to act as a category recognition model. In more realistic datasets, the sources of biggest variation need not (and, usually, do not) correspond to variations in the object-level categories. Our experiments show this to be true. When we trained InfoGAN to automatically group the CIFAR-10 images into 10 categories, we found that while InfoGAN was able to group the images into different groups, the groups did not correspond to object category-level groupings. Figure 2 shows some example samples generated by the model. Each row corresponds to a different category and each column in the row corresponds to a different sample from that category (obtained by keeping $c$ fixed and by varying $z$). We can see that while the rows look different from each other, they do not correspond to the CIFAR-10 categories.

Therefore, we employ a hybrid approach to unsupervised clustering. We first train the discriminative network using either the vanilla GAN objective or the InfoGAN objective, until convergence. Upon convergence, we extract features for each image in the training set from the top of the shared network (labeled in Figure 1), and do average pooling across the spatial resolution for each feature channel. We then cluster these features using k-means++ into a discrete set of k categories. We set k to be the number of object classes that are present in the respective dataset. The cluster centers learned by k-means++ clustering act as the templates for the k categories that are present in the dataset.

During testing, we extract the feature representation of the test images by passing them through the discriminative network trained using the generator as an adversary, do average pooling on these shared features, and compute the distance of the test feature vector to each of the centers learnt by k-means++ clustering during the training phase. The test image is assigned an index corresponding to the index of the closest center.
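The train/test procedure just described can be summarized in a few lines. The sketch below assumes a trained discriminator whose shared trunk returns the top convolutional feature maps (all names are illustrative), and uses scikit-learn's k-means++ initialization:

```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def pooled_features(trunk, images):
    """Average-pool the trunk's top feature maps over the spatial
    dimensions, one value per channel (as described above)."""
    fmap = trunk(images)                        # (B, C, H, W)
    return fmap.mean(dim=(2, 3)).cpu().numpy()  # (B, C)

def fit_clusters(trunk, train_images, k):
    """Cluster pooled training features with k-means++; the learned
    cluster centers act as templates for the k categories."""
    feats = pooled_features(trunk, train_images)
    return KMeans(n_clusters=k, init="k-means++", n_init=10).fit(feats)

def assign_clusters(km, trunk, test_images):
    """Assign each test image the index of its closest center."""
    return km.predict(pooled_features(trunk, test_images))
```

Note that `KMeans.predict` assigns each feature vector to its nearest center under the Euclidean distance, matching the test-time assignment rule above.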
Our experiments show that clustering on these pooled features produces better results than directly using the recognition model of InfoGAN. Note that while we use the simple k-means++ algorithm for clustering, it could be replaced by more sophisticated unsupervised learning algorithms. We do not explore further down this route since the scope of this work is to study the strength of the features learned by adversarial training.

[Figure 2: Samples generated from InfoGAN trained on the CIFAR-10 dataset when the system was encouraged to identify 10 categories. Each row corresponds to a different cluster identified by InfoGAN. Each column corresponds to a different sample from that cluster. We can see that while InfoGAN can identify clusters that are different from each other, they do not correspond to the CIFAR-10 categories. See Sec. 4.1 for quantitative results.]

An advantage of the hybrid approach is that it now allows us to use a variety of different "pretext" objectives. In other words, one can decouple the training objective from the testing requirements. In fact, we experimented with encouraging InfoGAN to identify more groups in the training data than the number of object categories in the dataset. For example, we trained InfoGAN on the CIFAR-10 dataset by encouraging the system to identify [10, 20, 30, 35, 40, 50 and 75] groups. Of course, these groups do not correspond to category-level groupings. However, to our surprise, we found that when the features obtained from InfoGANs trained on a large number of categories were used for clustering, they performed better at object categorization than the features obtained from an InfoGAN trained on the same number of object categories as present in the dataset. Section 4 provides quantitative results on these experiments.

4 EXPERIMENTS

We perform experiments on multiple datasets: CIFAR-10, CIFAR-100 and STL-10. (We have released the code that was used in all our experiments at https://github.com/VittalP/UnsupGAN.) We use ground-truth labels only for evaluation purposes and for training the supervised learning baseline. The training procedure is entirely unsupervised. We report results using two standard metrics that are used for evaluating unsupervised learning algorithms: the Adjusted RAND Index (ARI) and the Normalized Mutual Information (NMI) score. We provide three baselines: (i) we report results using simple features such as pixel intensities, HOG and GIST, which we call low-level visual features; (ii) we report results on the features obtained using standard GAN training; (iii) as an upper bound, we report results using supervised learning, where we train the weights in a discriminator network with the same architecture using the category-level labels that are provided by the datasets.

It is important to remember that we are interested in comparing the quality of the learned features that can be used for transfer to novel images, and not just the classification score on a pre-defined set of categories. The classification accuracy captures only how well a test image was correctly classified. If incorrectly classified, it does not quantify how bad the mistake was. ARI, on the other hand, is a better metric for evaluating the properties of the features because it measures not only how accurately pairs of objects were correctly grouped together, but also takes into account how many pairs of data points were incorrectly grouped.
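Both metrics are available in scikit-learn; the following small, self-contained illustration (with made-up labels) shows how the reported numbers can be computed:

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Made-up example: ground-truth labels (evaluation only) vs. cluster indices.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 0]

ari = adjusted_rand_score(y_true, y_pred)           # penalizes wrongly grouped pairs
nmi = normalized_mutual_info_score(y_true, y_pred)  # invariant to label permutation
print(f"ARI = {ari:.3f}, NMI = {nmi:.3f}")
```

Both scores are invariant to permutations of the cluster indices, which is what makes them suitable for evaluating unsupervised groupings against ground-truth categories.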
Therefore, when comparing with the model that was trained using supervised learning, we ignore the top-level classification layer of that model, and quantify the quality of the representations, i.e., the features extracted from the penultimate layer, using ARI after clustering on them.

[Figure 3: All 64 filters from the first layer of the discriminative network trained on CIFAR-10. The visualization on the left corresponds to the filters learned using adversarial training. The visualization on the right corresponds to the filters learned for the same architecture using supervised learning. It is interesting to see that the filters on the left have more high-frequency components while the filters on the right are smoother.]

Before we go into the quantitative results, we visualize the filters of the first layer of the discriminative network and compare them across the two different training procedures. Figure 3 shows the visualization. On the left are the filters from the network that was trained using adversarial training. On the right are the filters from a network with the same architecture but trained using class-level supervision. Both these networks were trained using the CIFAR-10 dataset. We can see that while some of the filters look similar to each other, many of them are quite different. It is clear that the filters on the right are smoother than the filters on the left. Recollect that the filters on the left are trained to fit both the real images and the generated images. When the generated images are not as high-quality as the real images, the filters that $D(\cdot)$ learns might not be as regularized as the ones learnt using only real data. We hypothesize that improving the quality of the generated images can help regularize the first-layer filters in $D(\cdot)$. We leave this route of exploration for future work.

[Figure 4: CIFAR-10: (a) Performance of the grouping algorithm when using the features learned from InfoGAN training over multiple numbers of groups. Zero groups corresponds to vanilla GAN; -32 and -64 correspond to the output sizes of the generated images; -InfoGAN corresponds to the results obtained with direct prediction using the recognition model in InfoGAN. (b) InfoGAN features perform better than vanilla GAN features; however, supervised learning outperforms unsupervised learning on this database.]

4.1 CIFAR-10

CIFAR-10 consists of 50k training images and 10k testing images, of size 32×32, divided among 10 categories. We trained the model for two different image sizes: 32×32 and 64×64. We trained InfoGAN with different numbers of categories {10, 20, 30, 35, 40, 50, 75}. Figure 4a shows a plot of the performance measures versus the number of groups InfoGAN was trained to identify. We can see from the figure that as we increase the number of categories, the performance of the model goes up to a certain point and drops after that. This indicates that there exist databases for which grouping into more categories than present in the ground truth might help. We also plot the performance of the InfoGAN model when used directly as a prediction model. We can see from the plots that k-means++ clustering produces better results (ARI-32 = 0.097; NMI-32 = 0.18) than direct prediction (ARI-32-InfoGAN: 0.085; NMI-32-InfoGAN: 0.14).
We label the direct prediction results with (-InfoGAN).

Figure 4b compares the performance when using different features. We can see that InfoGAN features trained with 50 clusters beat the features learned using vanilla GAN by a small margin. However, supervised training does much better (as one might have expected).

4.2 CIFAR-100

In this set of experiments, we use the images from the CIFAR-100 database for training. This database also contains 50k training examples and 10k test images, divided among 100 fine-scale categories and 20 coarse-level categories. We test the performance on the coarse categories. As before, we experiment with InfoGAN training over multiple numbers of categories {10, 20, 35, 50}. While the trend is not as noticeable as in the case of CIFAR-10, the best performance is obtained when we use 50 categories. Also, as before, the k-means++ clustering of the features produces better performance (ARI = 0.04) than the recognition model of InfoGAN (ARI = 0.036).

[Figure 5: CIFAR-100: (a) The number of groups used to train InfoGAN has less of an effect on CIFAR-100 than it had on CIFAR-10. However, the performance of k-means++ clustering is still better than direct prediction using the recognition model of InfoGAN. Please see Fig. 4a for labeling conventions. (b) InfoGAN features and GAN features perform similarly on this dataset. However, supervised learning features are only slightly better than the unsupervised counterparts.]

Figure 5b compares the performance when we use different features. Notice that the features obtained by adversarial training are as competitive as the features obtained using supervised training. We believe that this is because of two reasons: (i) the CIFAR-100 coarse-level categories are much harder to distinguish than the CIFAR-10 categories, making it difficult for the supervised model to learn good features; (ii) the number of training examples per category in CIFAR-100 is smaller than in CIFAR-10, because we are training using the 20 coarse categories compared with the 10 of CIFAR-10.

4.3 STL-10

Finally, we also perform experiments on the STL-10 dataset. This database consists of 5,000 images for training with labels, 100,000 training images without labels, and 8,000 images for testing. The dataset consists of 10 categories, and all the images are of size 96×96. This dataset brings out the advantages of unsupervised learning algorithms. The database is more than two times bigger than the CIFAR-10 and CIFAR-100 datasets in terms of the number of images, and each image is 9 times the size of the CIFAR images. Figure 6b shows that unsupervised learning with adversarial training outperforms the same models trained using supervised learning. From Figure 6a, we also notice that the features learned using vanilla GAN do better than the features learned using InfoGAN. Increasing the complexity of the datasets makes it difficult for InfoGAN to group the images in the dataset.

5 CONCLUSION

In this paper, we explore an unsupervised feature learning technique where the model is trained using adversarial training from a generative network. We use a generative model to generate images that act as an adversary to the discriminative network. We explore the standard GAN architecture and the InfoGAN architecture for training the discriminative model. We also show that direct prediction using InfoGAN's recognition model does not always result in identifying object category-level information.
Instead, we fuse the features learned by adversarial training with a traditional unsupervised learning approach, k-means clustering, and show that this combination produces better results than direct prediction. We also show that, in situations where there are limited amounts of labeled training data and large amounts of unlabeled data, adversarial training has the potential to outperform supervised learning.

[Figure 6: STL-10: (a) InfoGAN's performance drops with an increase in the number of groups. (b) Vanilla GAN's features outperform InfoGAN-trained features. Also, notice that, with just 5,000 labeled training images, supervised learning starts to reach its limits. However, our model makes use of the additional 100,000 unlabeled images and is able to learn representations that surpass the performance of features learned using the supervised model.]
BkUsyJGEl
review
3: Clear rejection
This paper proposed an unsupervised learning method based on running k-means on the features learned by a discriminator network in a generative adversarial network setup. Unsupervised learning methods with GANs are certainly a relevant topic, but this paper does not propose anything particularly novel as far as I can tell. More importantly, the evaluation methods in this paper are extremely lacking. The authors omit classification results on CIFAR and STL-10; instead, the only quantitative evaluation plots the performance of the clustering algorithm on the features. Not only are classification results not shown, no comparisons are made to the wealth of related work. I list just a few highly related techniques below. Finally, it appears the authors have not trained their GANs correctly, as the samples in Fig. 2 appear to be from a model that has collapsed during training. In summary, the ideas in this paper are potentially interesting, but this paper should not be accepted in its current form due to the lack of experimental results and comparisons. (Non-exhaustive) list of related work on unsupervised learning (with and without GANs):
[1] Springenberg. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks, ICLR 2016 (https://arxiv.org/abs/1511.06390)
[2] Salimans et al. Improved Techniques for Training GANs, NIPS 2016 (https://arxiv.org/abs/1606.03498)
[3] Dosovitskiy et al. Discriminative Unsupervised Feature Learning with Convolutional Neural Networks, NIPS 2014 (https://arxiv.org/abs/1406.6909)
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Sy6iJDqlx
ICLR.cc/2017/conference
2017
Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from multiple sources in the same domain
["Janarthanan Rajendran", "Aravind Lakshminarayanan", "Mitesh M. Khapra", "Prasanna P", "Balaraman Ravindran"]
Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.
["Deep learning", "Reinforcement Learning", "Transfer Learning"]
ABSTRACT

Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.

1 INTRODUCTION

One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn and adapt to new environments. Reinforcement Learning (RL) is a key technique for achieving such adaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions that maximize some notion of long-term performance. Transferring knowledge gained from tasks solved earlier to solve a new target task can help, either in terms of speeding up the learning process or in terms of achieving a better solution, among other performance measures. When applied to RL, transfer could be accomplished in many ways (see Taylor & Stone (2009; 2011) for a very good survey of the field). One could use the value function from the source task as an initial estimate in the target task to cut down exploration [Sorg & Singh (2009)]. Alternatively, one could use policies from the source task(s) in the target task. This can take one of two forms: (i) the derived policies can be used as initial exploratory trajectories [Atkeson & Schaal (1997); Niekum et al. (2013)] in the target task, and (ii) the derived policy could be used to define macro-actions which may then be used by the agent in solving the target task [Mannor et al. (2004); Brunskill & Li (2014)].

(Authors contributed equally.)

While transfer in RL has been much explored, there are two crucial issues that have not been adequately addressed in the literature. The first is negative transfer, which occurs when the transfer results in a performance that is worse when compared to learning from scratch in the target task. This severely limits the applicability of many transfer techniques only to cases for which some measure of relatedness between source and target tasks can be guaranteed beforehand. This brings us to the second problem with transfer, which is the issue of identifying an appropriate source task from which to transfer. In some scenarios, different source tasks might be relevant and useful for different parts of the state space of the target task. As a real-world analogy, consider multiple players (experts) who are good at different aspects of a game (say, tennis). For example, Player 1 is good at playing backhand shots while Player 2 is good at playing forehand shots. Consider the case of a new player (agent) who wants to learn tennis by selectively learning from these two experts.
We handle such a situation in our architecture by allowing the agent to learn how to pick and use solutions from multiple and different source tasks while solving a target task, selectively applicable for different parts of the state space. We call this selective transfer. Our agent can transfer knowledge from Player 1 when required to play backhand shots and from Player 2 for playing forehand shots. Further, let us consider the situation where both Player 1 and Player 2 are bad at playing drop shots. Apart from the source tasks, we maintain a base network that learns from scratch on the target task. The agent can pick and use the solution of the base network when solving the target task in the parts of the state space where transferring from the source tasks is negative. Such a situation could arise when the source task solutions are irrelevant for solving the target task over a specific portion of the state space, or when transferring from the source tasks is negative over a specific portion of the state space (for example, transferring the bad drop-shot abilities of Players 1 and 2). This situation also entails the first problem of avoiding negative transfer. Our framework allows an agent to avoid transferring from both Players 1 and 2 while learning to play drop shots, and rather acquire the drop-shot skill by learning to use the base network. The architecture is trained such that the base network uses not just the experience obtained through the usage of its solutions in the target task, but the overall experience acquired using the combined knowledge of the source tasks and itself. This enables the base network solutions to get closer to the behavior of the overall architecture (which uses the source task solutions as well). This makes it easier for the base network to assist the architecture in fine-tuning the useful source task solutions to suit the target task perfectly over time.

The key contribution in the architecture is a deep attention network that decides which solutions to attend to, for a given input state. The network learns solutions as a function of the current state, thereby aiding the agent in adopting different solutions for different parts of the state space in the target task.

To this end, we propose A2T: Attend, Adapt and Transfer, an attentive deep architecture for adaptive transfer, that avoids negative transfer while performing selective transfer from multiple source tasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework that can be used to selectively transfer different skills available from different experts as appropriate to the situation. For instance, a household robot can appropriately use skills from different experts for different household chores. This would require the ability to transfer manipulation skills across objects, tasks and robotic actuators. With a well-developed attention mechanism, the most appropriate and helpful combination of object-skill-controller can be identified for aiding the learning on a related new task. Further, A2T is generic enough to effect transfer of either action policies or action-value functions, as the case may be. We also adapt different algorithms in reinforcement learning as appropriate for the different settings and empirically demonstrate that A2T is effective for transfer learning in each setting.

2 RELATED WORK

As mentioned earlier, transfer learning approaches could deal with transferring policies or value functions.
For example, Banerjee & Stone (2007) describe a method for transferring value functions by constructing a game tree. Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task.

Another method to achieve transfer is to reuse policies derived in the source task(s) in the target task. Probabilistic Policy Reuse, as discussed in Fernández & Veloso (2006), maintains a library of policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from the knowledge obtained. This is different from the proposed approach in that the proposed approach can transfer policies at the granularity of individual states, which is not possible in policy reuse, rendering it unable to learn a customized policy at that granularity. Atkeson & Schaal (1997); Niekum et al. (2013) evaluated the idea of having the transferred policy from the source tasks as explorative policies instead of having a random exploration policy. This provides better exploration behavior provided the tasks are similar. Talvitie & Singh (2007) try to find the promising policy from a set of candidate policies that are generated using different action mappings to a single solved task. In contrast, we make use of one or more source tasks to selectively transfer policies at the granularity of states. Apart from policy transfer and value transfer as discussed above, Ferguson & Mahadevan (2006) discuss representation transfer using Proto Value Functions.

The ideas of negative and selective transfer have been discussed earlier in the literature. For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks.

The two recent works that are most relevant to the proposed architecture are discussed in Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the source tasks available and directly fine-tuning the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task are dependent through lateral connections on the lower-level filters of the networks learned already for the previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks can be learned through these lateral connections. Even though they do observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments.

3 PROPOSED ARCHITECTURE

Let there be $N$ source tasks and let $K_1, K_2, \ldots, K_N$ be the solutions of these source tasks $1, \ldots, N$ respectively. Let $K_T$ be the solution that we learn in the target task $T$. Source tasks refer to tasks that we have already learnt to perform and the target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces.
We propose a setting where $K_T$ is learned as a function of $K_1, \ldots, K_N, K_B$, where $K_B$ is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain $K_T$:

$$K_T(s) = w_{N+1,s} K_B(s) + \sum_{i=1}^{N} w_{i,s} K_i(s) \quad (1)$$

$$\sum_{i=1}^{N+1} w_{i,s} = 1, \quad w_{i,s} \in [0, 1] \quad (2)$$

where $w_{i,s}$ is the weight given to the $i$th solution at state $s$.

The agent uses $K_T$ to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions $K_1, \ldots, K_N$ remain fixed, the base network solutions are learnt and hence $K_B$ can change over time. There is a central network which learns the weights ($w_{i,s}$, $i \in 1, 2, \ldots, N+1$), given the input state $s$. We refer to this network as the attention network. The $[0, 1]$ weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions, depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero [Bahdanau et al. (2014)], as opposed to a hard-attention mechanism [Mnih et al. (2014)] where we are forced to have only one non-zero weight:

$$w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \quad i \in \{1, 2, \ldots, N+1\} \quad (3)$$

$$(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a) \quad (4)$$

[Figure 1: (a) A2T architecture. The dotted arrows represent the path of back-propagation. (b) Actor-Critic using A2T.]

Here, $f(s; \theta_a)$ is a deep neural network (the attention network), which could consist of convolution layers and fully connected layers depending on the representation of the input. It is parametrised by $\theta_a$ and takes as input a state $s$ and outputs a vector of length $N+1$, which gives the attention scores for the $N+1$ solutions at state $s$. Eq. (3) normalises this score to get the weights that follow Eq. (2).

If the $i$th source task solution is useful at state $s$, then $w_{i,s}$ is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks for different parts of the state space of the target task, thus giving it the ability to perform selective transfer. For parts of the state space in the target task where the source task solutions cause negative transfer or where the source task solutions are not relevant, the attention network learns to give a high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer.

Depending on the feedback obtained from the environment upon following $K_T$, the attention network's parameters $\theta_a$ are updated to improve performance.

As mentioned earlier, the source task solutions $K_1, \ldots, K_N$ remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in a weaker transfer, which we observed empirically. This also enables the use of source task solutions as long as we have the outputs alone, irrespective of how and where they come from.

Even though the agent follows $K_T$, we update the parameters of the base network that produces $K_B$ as if the action taken by the agent was based only on $K_B$.
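A minimal PyTorch sketch of Eqs. 1-4 is given below (names and shapes are illustrative assumptions): frozen source solutions and a trainable base network are combined by a state-conditioned softmax attention.

```python
import torch
import torch.nn as nn

class A2T(nn.Module):
    """Convex combination K_T(s) of N frozen source solutions and a
    trainable base solution, weighted by an attention network (Eqs. 1-4).
    Each solution maps a state batch (B, state_dim) to (B, out_dim),
    e.g. out_dim = |A| for policies or Q-values."""

    def __init__(self, sources, base, attention):
        super().__init__()
        self.sources = sources        # list of frozen networks K_1..K_N
        self.base = base              # trainable base network K_B
        self.attention = attention    # f(s; theta_a) -> (B, N+1) scores

    def forward(self, s):
        with torch.no_grad():         # source solutions stay fixed (no unlearning)
            outs = [src(s) for src in self.sources]
        outs.append(self.base(s))     # K_B is the (N+1)-th solution
        w = torch.softmax(self.attention(s), dim=-1)   # Eq. 3
        stacked = torch.stack(outs, dim=-1)            # (B, out_dim, N+1)
        return (stacked * w.unsqueeze(1)).sum(dim=-1)  # Eq. 1: K_T(s)
```

For policy transfer, each solution's output can be a distribution over actions, matching Section 3.1 below; for value transfer, it is the vector of Q-values over the action space.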
Due to this special way of updating $K_B$, apart from the experience gained through the unique and individual contribution of $K_B$ to $K_T$ in parts of the state space where the source task solutions are not relevant, $K_B$ also uses the valuable experience gained by using $K_T$, which uses the solutions of the source tasks as well.

This also means that if there is a source task whose solution $K_j$ is useful for the target task in some parts of its state space, then $K_B$ tries to replicate $K_j$ in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of these modifications required to make the useful source task solutions perfect for the target task. The special way of training the base network assists the architecture in achieving this faster. Note that the agent could follow/use $K_j$ through $K_T$ even when $K_B$ has not attained its replication in the corresponding parts of the state space. This allows for a good performance of the agent in the earlier stages of training itself, when a useful source task is available and identified.

Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for transfer of solutions such as policies and values.

3.1 POLICY TRANSFER

The solutions that we transfer here are the source task policies, taking advantage of which we learn a policy for the target task. Thus, we have $K_1, \ldots, K_N, K_B, K_T \equiv \pi_1, \ldots, \pi_N, \pi_B, \pi_T$. Here $\pi$ represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution $\pi_T$. The target task policy $\pi_T$ is obtained as described in Eq. (1) and Eq. (2). The attention network that produces the weights for the different solutions is trained by the feedback obtained after taking an action following $\pi_T$. The base network that produces $\pi_B$ is trained as if the sampled action came from $\pi_B$ (though it originally came from $\pi_T$), the implications of which were discussed in the previous section. When the attention network's weight for the policy $\pi_B$ is high, the mixture policy $\pi_T$ is dominated by $\pi_B$, and the base network learning is nearly on-policy. In the other cases, $\pi_B$ undergoes off-policy learning. But if we look closely, even in the latter case, since $\pi_B$ moves towards $\pi_T$, it tries to be nearly on-policy all the time. Empirically, we observe that $\pi_B$ converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy. Here we describe two instantiations of A2T for policy transfer, one for direct policy search using the REINFORCE algorithm and another in the Actor-Critic setup.

3.1.1 POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T

REINFORCE algorithms [Williams (1992)] can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig. 1a with $K \equiv \pi$. We do direct policy search, and the parameters are updated using REINFORCE.
Let the attention network be parametrized by $\theta_a$ and the base network which outputs $\pi_B$ be parametrized by $\theta_b$. The updates are given by:

$$\theta_a \leftarrow \theta_a + \alpha_a (r - b) \frac{\partial \sum_{t=1}^{M} \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (5)$$

$$\theta_b \leftarrow \theta_b + \alpha_b (r - b) \frac{\partial \sum_{t=1}^{M} \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (6)$$

where $\alpha_a, \alpha_b$ are non-negative factors, $r$ is the return obtained in the episode, $b$ is some baseline and $M$ is the length of the episode. $a_t$ is the action sampled by the agent at state $s_t$ following $\pi_T$. Note that while $\pi_T(s_t, a_t)$ is used in the update of the attention network, $\pi_B(s_t, a_t)$ is used in the update of the base network.

3.1.2 POLICY TRANSFER IN ACTOR-CRITIC USING A2T

Actor-Critic methods [Konda & Tsitsiklis (2000)] are Temporal Difference (TD) methods that have two separate components, viz., an actor and a critic. The actor proposes a policy whereas the critic estimates the value function to critique the actor's policy. The updates to the actor happen through the TD-error, which is the one-step estimation error that helps in reinforcing an agent's behaviour.

We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig. 1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use those solution policies for its benefit. The critic evaluates the action selection from $\pi_T$ on the basis of the performance on the target task. With the same notations as REINFORCE for $s_t, a_t, \theta_a, \theta_b, \alpha_a, \alpha_b, \pi_B, \pi_T$, let action $a_t$ dictated by $\pi_T$ lead the agent to next state $s_{t+1}$ with a reward $r_{t+1}$, let $V(s_t)$ represent the value of state $s_t$, and let $\gamma$ be the discount factor. Then, the update equations for the actor are as below:

$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \quad (7)$$

$$\theta_a \leftarrow \theta_a + \alpha_a \delta_t \frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (8)$$

$$\theta_b \leftarrow \theta_b + \alpha_b \delta_t \frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (9)$$

Here, $\delta_t$ is the TD error. The state-value function $V$ of the critic is learnt using TD learning.

3.2 VALUE TRANSFER

In this case, the solutions being transferred are the source tasks' action-value functions, which we will call $Q$ functions. Thus, $K_1, \ldots, K_N, K_B, K_T \equiv Q_1, \ldots, Q_N, Q_B, Q_T$. Let $A$ represent the discrete action space for the tasks and $Q_i(s) = \{Q(s, a_j)\ \forall\, a_j \in A\}$. The agent acts by using $Q_T$ in the target task, which is obtained as described in Eq. (1) and Eq. (2). The attention network and the base network of A2T are updated as described in the architecture.

3.2.1 VALUE TRANSFER IN Q-LEARNING USING A2T

The state-action value $Q$ function is used to guide the agent in selecting the optimal action $a$ at a state $s$, where $Q(s, a)$ is a measure of the long-term return obtained by taking action $a$ at state $s$. One way to learn optimal policies for an agent is to estimate the optimal $Q(s, a)$ for the task. Q-learning [Watkins & Dayan (1992)] is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation [Puterman (1994)] with the rewards obtained from the task as below:

$$Q(s, a) \leftarrow E[r(s, a, s') + \gamma \max_{a'} Q(s', a')]$$

In high-dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs. One way to address this issue is by approximating $Q(s, a)$ through a parametrized function approximator $Q(s, a; \theta)$, thereby generalizing over states and actions by operating on higher-level features [Sutton & Barto (1998)]. The DQN [Mnih et al.
(2015)] approximates the Q-value function with a deep neural network to be able to predict $Q(s, a)$ over all actions $a$, for all states $s$.

The loss function used for learning a Deep Q-Network is as below:

$$L(\theta) = E_{s,a,r,s'}[(y_{DQN} - Q(s, a; \theta))^2], \quad \text{with} \quad y_{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta^-)$$

Here, $L$ represents the expected TD error corresponding to the current parameter estimate $\theta$. $\theta^-$ represents the parameters of a separate target network, while $\theta$ represents the parameters of the online network. The usage of a target network is to improve the stability of the learning updates. The gradient descent step is shown below:

$$\nabla_\theta L(\theta) = E_{s,a,r,s'}[(y_{DQN} - Q(s, a; \theta)) \nabla_\theta Q(s, a; \theta)]$$

To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay [Lin (1993)] $D$ (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.

We use DQN to learn our experts $Q_i, i \in 1, 2, \ldots, N$ on the source tasks. Q-learning is used to ensure $Q_T(s)$ is driven to a good estimate of the $Q$ functions for the target task. Taking advantage of the off-policy nature of Q-learning, both $Q_B$ and $Q_T$ can be learned from the experiences gathered by an $\epsilon$-greedy behavioral policy based on $Q_T$. Let the attention network that outputs $w$ be parametrised by $\theta_a$ and the base network outputting $Q_B$ be parametrised by $\theta_b$. Let $\theta_a^-$ and $\theta_b^-$ represent the parameters of the respective target networks. Note that the usage of "target" here is to signify the parameters ($\theta_a^-, \theta_b^-$) used to calculate the target value in the Q-learning update and is different from its usage in the context of the target task. The update equations are:

$$y_{Q_T} = r + \gamma \max_{a'} Q_T(s', a'; \theta_a^-, \theta_b^-) \quad (10)$$

$$L_{Q_T}(\theta_a, \theta_b) = E_{s,a,r,s'}[(y_{Q_T} - Q_T(s, a; \theta_a, \theta_b))^2] \quad (11)$$

$$L_{Q_B}(\theta_b) = E_{s,a,r,s'}[(y_{Q_T} - Q_B(s, a; \theta_b))^2] \quad (12)$$

$$\nabla_{\theta_a} L_{Q_T} = E[(y_{Q_T} - Q_T(s, a)) \nabla_{\theta_a} Q_T(s, a)] \quad (13)$$

$$\nabla_{\theta_b} L_{Q_B} = E[(y_{Q_T} - Q_B(s, a)) \nabla_{\theta_b} Q_B(s, a)] \quad (14)$$

$\theta_a$ and $\theta_b$ are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network (Eq. (11)) and the base network (Eq. (12)) use the target value generated by $Q_T$. We use target networks for both $Q_B$ and $Q_T$ to stabilize the updates and reduce the non-stationarity, as in DQN training. The parameters of the target networks are periodically updated to those of the online networks.

4 EXPERIMENTS AND DISCUSSION

[Figure 2: Different worlds for policy transfer experiments: (a) Chain World, (b) Puddle World 1, (c) Puddle World 2.]

We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., a chain world and a puddle world, as described below. The main goal of these experiments is to test the consistency of results with the algorithm motivation.

Chain world: Figure 2a shows the chain world where the goal of the agent is to go from one point in the chain (starting state) to another point (goal state) in the least number of steps. At each state the agent can choose to either move one position to the left or to the right. After reaching the goal state the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the Reinforcement Learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of four actions: move one position to the north, south, east or west. With 0.9 probability the agent moves in the chosen direction and with 0.1 probability it moves in a random direction irrespective of its choice of action.
On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid, the agent gets different penalties as mentioned in the legend of the figures.

We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform [Bellemare et al. (2012)]. Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark tasks for deep reinforcement learning algorithms [Mnih et al. (2015), Mnih et al. (2016), Parisotto et al. (2015), Rusu et al. (2016)]. We perform our adaptive transfer learning experiments on the Atari 2600 game Pong.

4.1 ABILITY TO DO SELECTIVE TRANSFER

In this section, we consider the case when multiple partially favorable source tasks are available, such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network.

[Figure 3: Results of the selective policy transfer experiments. (a) The weights given by the attention network; selective transfer in REINFORCE. (b) Selective transfer in Actor-Critic.]

This is illustrated for the Policy Transfer setting using the chain world shown in Fig. 2a. Consider that the target task $L_T$ is to start in $A$ or $B$ with uniform probability and reach $C$ in the least number of steps. Now, consider that two learned source tasks, viz., $L_1$ and $L_2$, are available. $L_1$ is the source task where the agent has learned to reach the left end ($A$) starting from the right end ($B$). In contrast, $L_2$ is the source task where the agent has learned to reach the right end ($B$) starting from the left end ($A$). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks $L_1$ and $L_2$. We learn to solve the task $L_T$ using REINFORCE given the policies learned for $L_1$ and $L_2$. Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore $L_1$ and $L_2$ for the left and right halves of the state space of the target task, respectively. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. We again observe that the attention network has learned to ignore $L_1$ and $L_2$ for the left and right halves of the state space of the target task, respectively. As the base network replicates $\pi_T$ over time, it has a high weight throughout the state space of the target task.

We also evaluate our architecture in the relatively more complex puddle world shown in Figure 2c. In this case, $L_1$ is the task of moving from $S_1$ to $G_1$, and $L_2$ is the task of moving from $S_2$ to $G_1$. In the target task $L_T$, the agent has to learn to move to $G_1$ starting from either $S_1$ or $S_2$, chosen with uniform probability. We learn the task $L_T$ using the Actor-Critic method, where the following are available: (i) the learned policy for $L_1$, (ii) the learned policy for $L_2$, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results.
We observe that actor-critic using A2T is able to use the policies learned for $L_1$ and $L_2$ and performs better than a network learning from scratch without any knowledge of the source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong. We take inspiration from a real-world scenario in the sport of tennis, where one could imagine two different right-handed (or left-handed) players, the first being an expert player on the forehand but weak on the backhand, while the second is an expert player on the backhand but weak on the forehand. For someone who is learning to play tennis with the same style (right/left) as the experts, it is easy to follow the forehand expert player whenever he receives a ball on the forehand and follow the backhand expert whenever he receives a ball on the backhand.

We try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want to force the agent to be weak at returning the ball. The blurring we use is to just black out all pixels in the specific region required. To make sure the blurring doesn't contrast with the background, we modify Pong to be played with a black background (pixel value 0) instead of the existing gray (pixel value 87). We construct two partially helpful source task experts $L_1$ and $L_2$. $L_1$ is constructed by training a DQN on Pong with the upper quadrant (the agent's side) blurred, while $L_2$ is constructed by training a DQN with the lower quadrant (the agent's side) blurred. This essentially results in the ball being invisible when it is in the upper quadrant for $L_1$ and in the lower quadrant for $L_2$. We therefore expect $L_1$ to be useful in guiding the agent to return balls in the lower quadrant, and $L_2$ for the upper quadrant. The goal of the attention network is to learn suitable filters and parameters so that it will focus on the correct source task for a specific situation in the game. The source task experts $L_1$ and $L_2$ scored an average of 9.2 and 8 respectively on Pong game play with a black background. With an attention network to suitably weigh the value functions of $L_1$ and $L_2$, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training. (The score in Pong is in the range of $[-21, 21]$.) This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig. 4 shows a visualisation of the attention weights for the same.

[Figure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: green and blue bars signify the attention probabilities for Expert-1 ($L_1$) and Expert-2 ($L_2$) respectively. In the first two snapshots, the ball is in the lower quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.]

We then evaluate our full architecture (A2T) in this setting, i.e., with the addition of a DQN learning from scratch (the base network) to the above setting (Figure 5: Selective Value Transfer). The architecture can take advantage of the knowledge of the source task experts selectively early on during the training, while using the expertise of the base network wherever required, to perform well on the target task.
Figure 5 summarizes the results, where it is clear that learning with both the partially useful experts is better than learning with only one of them, which in turn is better than learning from scratch without any additional knowledge.

4.2 ABILITY TO AVOID NEGATIVE TRANSFER AND ABILITY TO TRANSFER FROM A FAVORABLE TASK

We first consider the case when only one learned source task is available, such that its solution $K_1$ (policy or value) can hamper the learning process of the new target task. We refer to such a source task as an unfavorable source task. In such a scenario, the attention network shown in Figure 1a should learn to assign a very low weight to (ignore) $K_1$. We also consider a modification of this setting by adding another source task whose solution $K_2$ is favorable to the target task. In such a scenario, the attention network should learn to assign a high weight to (attend to) $K_2$ while ignoring $K_1$.

We now define an experiment using the puddle world from Figure 2b for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state $G_1$ starting from any one of the states $S_1, S_2, S_3, S_4$. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network. We then add a favorable task to the above setting. We artificially construct a favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results (Figure 6: Avoiding negative transfer and transferring policy from a favorable task; lower is better).

The target task for the value transfer experiment is to reach expert-level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is, with $R'(s, a) = -R(s, a)$, where $R(s, a)$ is the reward provided by the ALE emulator for choosing action $a$ at state $s$. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and the same action space as Pong. We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes this a good proxy for a negative source task expert even though the target task Pong has a different state space.

We artificially construct a favorable source task by learning a DQN to achieve expertise on the target task (Pong) and using the learned network. Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway.

[Figure 7: Avoiding negative transfer with (a) the Pong-based and (b) the Freeway-based negative expert, while transferring value from a favorable task (higher is better). Specific training and architecture details are mentioned in the Appendix. The plots are averaged over two runs with different random seeds.]

From all the above results, we can clearly see that A2T does not get hampered by the unfavorable source task, learning to ignore it, and performs competitively with a randomly initialized network learning on the target task without any expert available.
4.3 VISUALIZATION: EVOLUTION OF ATTENTION WEIGHTS WITH ONE POSITIVE AND ONE NEGATIVE EXPERT

We present the evolution of attention weights for the experiment described in Section 4.2, where we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and transfer from a favorable source task (perfect expert). Figure 8 depicts the evolution of the attention weights (normalised in the range [0, 1]) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts, one being a DQN trained to play Pong well (serving as the positive expert), and the other being the Inverse-Pong DQN trained with negated reward functions (serving as the negative expert). Additionally, there is also the base network, which learns from scratch using the experience gathered by the attentively combined behavioral policy from the expert networks, the base network, and itself.

Figure 8: Evolution of attention weights with one positive and one negative expert.

We train the framework for 30 epochs, and the plot illustrates the attention weights every second epoch. We clearly see from Figure 8 that no undesirable co-adaptation happens during training, and the attention on the negative expert stays uniformly low throughout. Initially, the framework needs to collect some experience to figure out that the positive expert is optimal (or close to optimal). Until then, the attention is mostly on the base network, which is learning from scratch. The attention then shifts to the positive expert, which in turn provides more rewarding episodes and transition tuples to learn from. Finally, the attention drifts slowly from the positive expert back to the base network, after which the attention is roughly random in choosing between executing the positive expert and the base network. This is because the base network has acquired expertise matching the positive expert, which happens to be optimal for the target task. This visualization clearly shows that A2T is a powerful framework for ignoring a negative expert throughout and using a positive expert appropriately, learning quickly from the experience gathered and acquiring sufficient expertise on the target task.

4.4 WHEN A PERFECT EXPERT IS NOT AVAILABLE AMONG THE SOURCE TASKS

Figure 9: Partial Positive Expert Experiment.

In our experiments in the previous subsection dealing with prevention of negative transfer and use of a favorable source task, we considered the positive expert to be a perfect (close to optimal) expert on the same task we treat as the target task. This raises the question of whether we must rely on the presence of a perfect expert as a positive expert. If we do have such a situation, the obvious solution is to execute each of the experts on the target task and vote for them with probabilities proportional to the average performance of each.
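For concreteness, that naive voting baseline could look like the sketch below (Python/NumPy; the score floor of -21 reflects Pong's score range, and all names are illustrative). Note that the mixture is state-independent, which is exactly what distinguishes it from A2T's attention.

```python
import numpy as np

def performance_weighted_vote(expert_actions, avg_returns, score_floor=-21.0):
    """Pick one expert's action with probability proportional to that
    expert's average return on the target task (shifted by the game's
    minimum score so that weights are non-negative). Unlike A2T's
    attention, the weights do not depend on the current state."""
    r = np.asarray(avg_returns, dtype=float) - score_floor
    probs = r / r.sum() if r.sum() > 0 else np.full(len(r), 1.0 / len(r))
    chosen = np.random.choice(len(expert_actions), p=probs)
    return expert_actions[chosen]

# Toy usage: an expert averaging 8 on Pong vs. one averaging -14 is picked
# with probability 29/36 vs. 7/36, at every state alike.
print(performance_weighted_vote(["UP", "DOWN"], [8.0, -14.0]))
```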
The A2T framework is, however, generic and not intended to just do source task selection. We illustrate this with an additional baseline experiment, where the positive source task is an imperfect expert on the target task. In such a case, a weighted average vote among the available source task networks based on their individual average rewards is upper bounded by the performance of the best available positive expert, which here is an imperfect expert on the target task. Rather, the base network has to acquire new skills not present in the source task networks. We choose a partially trained network on Pong, which scores an average of 8 (max: 21). The graph in Figure 9 clearly shows that the A2T framework with a partial Pong expert and a negative expert performs better than (i) learning from scratch and (ii) A2T with only one negative expert, and performs worse than A2T with one perfect positive expert and one negative expert. This is expected, because a partial expert cannot provide as much expert knowledge as a perfect expert, but it still provides some useful knowledge that speeds up solving the target task. An important conclusion from this experiment is that the A2T framework is capable of discovering new skills not available among any of the experts when such skills are required for optimally solving the target task. To maintain consistency, we perform the same number of runs for averaging scores; we experimented with both learning rates and picked the better-performing one (0.00025).

5 CONCLUSION AND FUTURE WORK

In this paper we present a very general deep neural network architecture, A2T, for transfer learning that avoids negative transfer while enabling selective transfer from multiple source tasks in the same domain. We show simple ways of using A2T for policy transfer and value transfer. We empirically evaluate its performance with different algorithms, using simulated worlds and games, and show that it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be used for transferring other useful knowledge such as a model of the world.

While in this work we focused on transfer between tasks that share the same state and action spaces and are in the same domain, the use of deep networks opens up the possibility of going beyond this setting. For example, a deep neural network can be used to learn common representations [Parisotto et al. (2015)] for multiple tasks, thereby enabling transfer between related tasks that could possibly have different state-action spaces. A hierarchical attention over the lower-level filters across source task networks, while learning the filters for the target task network, is another natural extension to transfer across tasks with different state-action spaces. The setup from Progressive Neural Networks [Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained for the policy/value transfer. Exploring this setting for continuous control tasks, so as to transfer from modular controllers as well as avoid negative transfer, is also a potential direction for future research.

The nature of the tasks considered in our experiments is naturally connected to Hierarchical Reinforcement Learning and Continual Learning. For instance, the blurring experiments inspired by tennis, based on experts for specific skills like the forehand and the backhand, could be considered as learning from sub-goals (program modules) like forehand and backhand to solve a more complex and broader task like tennis by invoking the relevant sub-goals (program modules).
This structure could be very useful for building a household robot for general-purpose navigation and manipulation, whereby specific skills such as manipulating different objects or navigating between different source-destination points could be invoked when necessary. The attention network in the A2T framework is essentially a soft meta-controller and hence presents itself as a powerful differentiable tool for Continual and Meta Learning. Meta-controllers have typically been designed with a discrete decision structure over high-level subgoals. This paper presents an alternative, differentiable meta-controller with a soft-attention scheme. We believe this aspect can be exploited in differentiable meta-learning architectures for hierarchical reinforcement learning. Overall, we believe that A2T is a novel way to approach different problems like Transfer Learning, Meta-Learning and Hierarchical Reinforcement Learning, and further refinements on top of this design can be a good direction to explore.

ACKNOWLEDGEMENTS

Thanks to the anonymous reviewers of ICLR 2017 who provided thoughtful remarks and helped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, Yoshua Bengio, Sarath Chandar, Caglar Gulchere and Charu Chauhan for useful feedback about the work.
SJuj1-NSe
7: Good paper, accept
This paper studies the problem of transferring solutions of existing tasks to tackle a novel task under the framework of reinforcement learning, and identifies two important issues: avoiding negative transfer and enabling selective transfer. The proposed approach is based on a convex combination of the existing solutions and the solution being learned for the novel task. The non-negative weight of each solution means that solutions with a negative effect are ignored and more weight is allocated to the more relevant solutions in each state. This paper derives this so-called "A2T" learning algorithm for policy transfer and value transfer for the REINFORCE and Actor-Critic algorithms, and experiments with synthetic Chain World and Puddle World simulations and the Atari 2600 game Pong.
+ This paper presents a novel approach for transfer reinforcement learning.
+ The experiments are cleverly designed to demonstrate the ability of the proposed method.
- An important aspect of transfer learning is that the algorithm can automatically figure out whether the existing solutions to known tasks are sufficient to solve the novel task, so that it can save the time and energy of learning from scratch. This issue is not studied in this paper, as most experiments include a learning-from-scratch solution as the base network. It would be interesting to see how well the algorithm performs without the base network. In addition, from Figures 3, 5 and 6, the proposed algorithm seems to accelerate learning, but the overall network seems no better than the solo base network. It would be more convincing to show an example where existing solutions are complementary to the base network.
- If the base network is ignored, the proposed network can be considered ensemble reinforcement learning that takes advantage of learned agents with different expertise to solve the novel task.
3: The reviewer is fairly confident that the evaluation is correct
Sy-SiOZNe
Review
7: Good paper, accept
The paper tackles important problems in multi-task reinforcement learning: avoiding negative transfer and allowing finer selective transfer. The method is based on a soft attention mechanism, is very general, and is demonstrated to be applicable in both policy gradient and value iteration methods. The introduction of the base network allows learning a new policy if the prior policies aren't directly applicable. State-dependent sub-policy selection allows finer control and can be thought of as assigning the state space to different sub-policies/experts. The tasks are relatively simplistic but sufficient to demonstrate the benefits. One limitation is that the method is simple and the results/claims are mostly empirical. It would be interesting to see extensions to an option-based framework, a stochastic hard attention mechanism, sub-policy pruning, and progressive networks. In Figure 6, the red curve seems to perform worse than the rest in terms of final performance. Perhaps alternative information to put with the figures is the attention mask activation statistics during learning, so that we may observe that it learns to turn off adversarial sub-policies and rely mostly on the newly learned base policy. This is also generally good to check, to see if any weird co-adaptation is happening.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Sy6iJDqlx
ICLR.cc/2017/conference
2017
Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from multiple sources in the same domain
["Janarthanan Rajendran", "Aravind Lakshminarayanan", "Mitesh M. Khapra", "Prasanna P", "Balaraman Ravindran"]
Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.
["Deep learning", "Reinforcement Learning", "Transfer Learning"]
ABSTRACT

Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer, being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.

1 INTRODUCTION

One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn and adapt to new environments. Reinforcement Learning (RL) is a key technique for achieving such adaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions that maximize some notion of long-term performance. Transferring knowledge gained from tasks solved earlier to solve a new target task can help, either in terms of speeding up the learning process or in terms of achieving a better solution, among other performance measures. When applied to RL, transfer can be accomplished in many ways (see Taylor & Stone (2009; 2011) for a very good survey of the field). One could use the value function from the source task as an initial estimate in the target task to cut down exploration [Sorg & Singh (2009)]. Alternatively, one could use policies from the source task(s) in the target task. This can take one of two forms: (i) the derived policies can be used as initial exploratory trajectories [Atkeson & Schaal (1997); Niekum et al. (2013)] in the target task, and (ii) the derived policy could be used to define macro-actions which may then be used by the agent in solving the target task [Mannor et al. (2004); Brunskill & Li (2014)]. (Authors contributed equally.)

While transfer in RL has been much explored, there are two crucial issues that have not been adequately addressed in the literature. The first is negative transfer, which occurs when the transfer results in performance that is worse than learning from scratch on the target task. This severely limits the applicability of many transfer techniques to cases for which some measure of relatedness between source and target tasks can be guaranteed beforehand. This brings us to the second problem with transfer, which is the issue of identifying an appropriate source task from which to transfer. In some scenarios, different source tasks might be relevant and useful for different parts of the state space of the target task. As a real-world analogy, consider multiple players (experts) who are good at different aspects of a game (say, tennis). For example, Player 1 is good at playing backhand shots while Player 2 is good at playing forehand shots. Consider the case of a new player (agent) who wants to learn tennis by selectively learning from these two experts.
We handle such a situation in our architecture by allowing the agent to learn how to pick and use solutions from multiple and different source tasks while solving a target task, selectively applicable for different parts of the state space. We call this selective transfer. Our agent can transfer knowledge from Player 1 when required to play backhand shots and from Player 2 for playing forehand shots. Further, let us consider the situation that both Player 1 and Player 2 are bad at playing drop shots. Apart from the source tasks, we maintain a base network that learns from scratch on the target task. The agent can pick and use the solution of the base network when solving the target task in the parts of the state space where transferring from the source tasks is negative. Such a situation could arise when the source task solutions are irrelevant for solving the target task over a specific portion of the state space, or when transferring from the source tasks is negative over a specific portion of the state space (for example, transferring the bad drop shot abilities of Players 1 and 2). This situation also entails the first problem of avoiding negative transfer. Our framework allows an agent to avoid transferring from both Players 1 and 2 while learning to play drop shots, and rather acquire the drop shot skill by learning to use the base network. The architecture is trained such that the base network uses not just the experience obtained through the usage of its own solutions in the target task, but the overall experience acquired using the combined knowledge of the source tasks and itself. This enables the base network solutions to get closer to the behavior of the overall architecture (which uses the source task solutions as well). This makes it easier for the base network to assist the architecture in fine-tuning the useful source task solutions to suit the target task perfectly over time.

The key contribution in the architecture is a deep attention network that decides which solutions to attend to for a given input state. The network learns the solutions as a function of the current state, thereby aiding the agent in adopting different solutions for different parts of the state space in the target task.

To this end, we propose A2T: Attend, Adapt and Transfer, an Attentive Deep Architecture for Adaptive Transfer, that avoids negative transfer while performing selective transfer from multiple source tasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework that can be used to selectively transfer different skills available from different experts as appropriate to the situation. For instance, a household robot can appropriately use skills from different experts for different household chores. This would require the ability to transfer manipulation skills across objects, tasks and robotic actuators. With a well developed attention mechanism, the most appropriate and helpful combination of object-skill-controller can be identified for aiding the learning on a related new task. Further, A2T is generic enough to effect transfer of either action policies or action-value functions, as the case may be. We also adapt different algorithms in reinforcement learning as appropriate for the different settings and empirically demonstrate that A2T is effective for transfer learning in each setting.

2 RELATED WORK

As mentioned earlier, transfer learning approaches could deal with transferring policies or value functions.
For example, Banerjee & Stone (2007) describe a method for transferring value functions by constructing a Game tree. Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task.

Another method to achieve transfer is to reuse policies derived in the source task(s) in the target task. Probabilistic Policy Reuse as discussed in Fernández & Veloso (2006) maintains a library of policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from the knowledge obtained. This is different from the proposed approach in that the proposed approach can transfer policies at the granularity of individual states, which is not possible in policy reuse, rendering it unable to learn a customized policy at that granularity. Atkeson & Schaal (1997) and Niekum et al. (2013) evaluated the idea of having the transferred policy from the source tasks as explorative policies instead of having a random exploration policy. This provides better exploration behavior provided the tasks are similar. Talvitie & Singh (2007) try to find the promising policy from a set of candidate policies that are generated using different action mappings to a single solved task. In contrast, we make use of one or more source tasks to selectively transfer policies at the granularity of states. Apart from policy transfer and value transfer as discussed above, Ferguson & Mahadevan (2006) discuss representation transfer using Proto Value Functions.

The idea of negative and selective transfer has been discussed earlier in the literature. For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks.

The two recent works that are very relevant to the proposed architecture are discussed in Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the available source tasks and directly fine-tuning the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task depend, through lateral connections, on the lower level filters of the networks learned already for the previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks could be learned through these lateral connections. Even though they do observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments.

3 PROPOSED ARCHITECTURE

Let there be $N$ source tasks and let $K_1, K_2, \ldots, K_N$ be the solutions of these source tasks $1, \ldots, N$ respectively. Let $K_T$ be the solution that we learn in the target task $T$. Source tasks refer to tasks that we have already learnt to perform and target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces.
We propose a setting where $K_T$ is learned as a function of $K_1, \ldots, K_N, K_B$, where $K_B$ is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain $K_T$:

$$K_T(s) = w_{N+1,s} K_B(s) + \sum_{i=1}^{N} w_{i,s} K_i(s) \quad (1)$$

$$\sum_{i=1}^{N+1} w_{i,s} = 1, \qquad w_{i,s} \in [0, 1] \quad (2)$$

where $w_{i,s}$ is the weight given to the $i$th solution at state $s$.

The agent uses $K_T$ to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions $K_1, \ldots, K_N$ remain fixed, the base network solutions are learnt and hence $K_B$ can change over time. There is a central network which learns the weights ($w_{i,s}$, $i \in 1, 2, \ldots, N+1$), given the input state $s$. We refer to this network as the attention network. The $[0, 1]$ weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions, depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero [Bahdanau et al. (2014)], as opposed to a hard-attention mechanism [Mnih et al. (2014)] where we are forced to have only one non-zero weight:

$$w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \qquad i \in \{1, 2, \ldots, N+1\} \quad (3)$$

$$(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a) \quad (4)$$

[Figure 1: (a) The A2T architecture; the dotted arrows represent the path of backpropagation. (b) Actor-Critic using A2T.]

Here, $f(s; \theta_a)$ is a deep neural network (the attention network), which could consist of convolutional layers and fully connected layers depending on the representation of the input. It is parametrized by $\theta_a$, takes as input a state $s$, and outputs a vector of length $N+1$, which gives the attention scores for the $N+1$ solutions at state $s$. Eq. (3) normalizes this score to get the weights that follow Eq. (2). If the $i$th source task solution is useful at state $s$, then $w_{i,s}$ is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks for different parts of the state space of the target task, thus giving it the ability to perform selective transfer. For parts of the state space in the target task where the source task solutions cause negative transfer or are not relevant, the attention network learns to give high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer.

Depending on the feedback obtained from the environment upon following $K_T$, the attention network's parameters $\theta_a$ are updated to improve performance.
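To make Eqs. (1)-(4) concrete, below is a minimal PyTorch sketch of the attentive combination; the hidden-layer width and all names (`A2TAttention`, `source_outputs`, etc.) are our own illustrative assumptions, not details fixed by the paper.

```python
import torch
import torch.nn as nn

class A2TAttention(nn.Module):
    """Soft attention over N fixed source solutions plus one base solution,
    following Eqs. (1)-(4)."""
    def __init__(self, state_dim, n_sources, hidden_dim=128):
        super().__init__()
        # f(s; theta_a) of Eq. (4): outputs N+1 attention scores e_{i,s}
        self.scores = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_sources + 1),
        )

    def forward(self, state, source_outputs, base_output):
        # source_outputs: (N, action_dim), from the frozen networks K_1..K_N
        # base_output:    (1, action_dim), from the trainable base network K_B
        e = self.scores(state)                # e_{i,s}
        w = torch.softmax(e, dim=-1)          # Eq. (3); Eq. (2) holds by construction
        K = torch.cat([source_outputs, base_output], dim=0)  # all N+1 solutions
        # Eq. (1): K_T(s) = sum_i w_{i,s} K_i(s) + w_{N+1,s} K_B(s)
        return (w.unsqueeze(-1) * K).sum(dim=0)
```

Note that only the attention parameters (and, through its own update, the base network) receive gradients here; the source outputs can come from any frozen network, consistent with the text's remark that only the source solutions' outputs are needed.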
As mentioned earlier, the source task solutions $K_1, \ldots, K_N$ remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in weaker transfer, which we observed empirically. This also enables the use of source task solutions as long as we have their outputs alone, irrespective of how and where they come from. Even though the agent follows $K_T$, we update the parameters of the base network that produces $K_B$ as if the action taken by the agent was based only on $K_B$. Due to this special way of updating $K_B$, apart from the experience obtained through the unique and individual contribution of $K_B$ to $K_T$ in parts of the state space where the source task solutions are not relevant, $K_B$ also uses the valuable experience obtained by using $K_T$, which uses the solutions of the source tasks as well.

This also means that, if there is a source task whose solution $K_j$ is useful for the target task in some parts of its state space, then $K_B$ tries to replicate $K_j$ in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of the modifications required to make the useful source task solutions perfect for the target task. The special way of training the base network assists the architecture in achieving this faster. Note that the agent could follow/use $K_j$ through $K_T$ even before $K_B$ has replicated it in the corresponding parts of the state space. This allows for good performance of the agent in the earlier stages of training itself, when a useful source task is available and identified.

Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions, without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for the transfer of solutions such as policies and values.

3.1 POLICY TRANSFER

The solutions that we transfer here are the source task policies, taking advantage of which we learn a policy for the target task. Thus, we have $K_1, \ldots, K_N, K_B, K_T \equiv \pi_1, \ldots, \pi_N, \pi_B, \pi_T$. Here $\pi$ represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution $\pi_T$. The target task policy $\pi_T$ is obtained as described in Eq. (1) and Eq. (2). The attention network that produces the weights for the different solutions is trained on the feedback received after taking an action following $\pi_T$. The base network that produces $\pi_B$ is trained as if the sampled action came from $\pi_B$ (though it originally came from $\pi_T$), the implications of which were discussed in the previous section. When the attention network's weight for the policy $\pi_B$ is high, the mixture policy $\pi_T$ is dominated by $\pi_B$, and the base network learning is nearly on-policy. In the other cases, $\pi_B$ undergoes off-policy learning. But if we look closely, even in the latter case, since $\pi_B$ moves towards $\pi_T$, it tries to be nearly on-policy all the time. Empirically, we observe that $\pi_B$ converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy. Here we describe two instantiations of A2T for policy transfer, one for direct policy search using the REINFORCE algorithm and another in the Actor-Critic setup.

3.1.1 POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T

REINFORCE algorithms [Williams (1992)] can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig. 1a with $K \equiv \pi$. We do direct policy search, and the parameters are updated using REINFORCE.
Let the attention network be parametrized by $\theta_a$ and the base network which outputs $\pi_B$ be parametrized by $\theta_b$. The updates are given by:

$$\theta_a \leftarrow \theta_a + \alpha_a (r - b) \frac{\partial \sum_{t=1}^{M} \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (5)$$

$$\theta_b \leftarrow \theta_b + \alpha_b (r - b) \frac{\partial \sum_{t=1}^{M} \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (6)$$

where $\alpha_a, \alpha_b$ are non-negative factors, $r$ is the return obtained in the episode, $b$ is some baseline and $M$ is the length of the episode. $a_t$ is the action sampled by the agent at state $s_t$ following $\pi_T$. Note that while $\pi_T(s_t, a_t)$ is used in the update of the attention network, $\pi_B(s_t, a_t)$ is used in the update of the base network.

3.1.2 POLICY TRANSFER IN ACTOR-CRITIC USING A2T

Actor-Critic methods [Konda & Tsitsiklis (2000)] are Temporal Difference (TD) methods that have two separate components, viz., an actor and a critic. The actor proposes a policy whereas the critic estimates the value function to critique the actor's policy. The updates to the actor happen through the TD-error, which is the one-step estimation error that helps in reinforcing an agent's behaviour.

We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig. 1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use those solution policies for its benefit. The critic evaluates the action selection from $\pi_T$ on the basis of the performance on the target task. With the same notations as REINFORCE for $s_t, a_t, \theta_a, \theta_b, \alpha_a, \alpha_b, \pi_B, \pi_T$, let action $a_t$ dictated by $\pi_T$ lead the agent to next state $s_{t+1}$ with a reward $r_{t+1}$, let $V(s_t)$ represent the value of state $s_t$, and let $\gamma$ be the discount factor. Then, the update equations for the actor are as below:

$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \quad (7)$$

$$\theta_a \leftarrow \theta_a + \alpha_a \delta_t \frac{\partial \log \pi_T(s_t, a_t)}{\partial \theta_a} \quad (8)$$

$$\theta_b \leftarrow \theta_b + \alpha_b \delta_t \frac{\partial \log \pi_B(s_t, a_t)}{\partial \theta_b} \quad (9)$$

Here, $\delta_t$ is the TD error. The state-value function $V$ of the critic is learnt using TD learning.
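A minimal sketch of the paired updates in Eqs. (5)-(6) follows; the actor-critic case of Eqs. (8)-(9) is identical with the return-minus-baseline term replaced by the TD error $\delta_t$. Using optimizer objects instead of the paper's plain gradient-ascent updates, and all names here, are our assumptions.

```python
import torch

def a2t_policy_update(opt_a, opt_b, log_pi_T_sum, log_pi_B_sum, ret, baseline):
    """log_pi_T_sum / log_pi_B_sum: summed log-probabilities of the episode's
    sampled actions under pi_T and pi_B respectively; ret is the episode
    return r, and baseline is b from Eqs. (5)-(6)."""
    advantage = ret - baseline
    # Eq. (5): update theta_a through the mixture policy pi_T
    opt_a.zero_grad()
    (-advantage * log_pi_T_sum).backward(retain_graph=True)
    opt_a.step()
    # Eq. (6): update theta_b as if the sampled actions had come from pi_B
    opt_b.zero_grad()  # also clears gradients that leaked into theta_b above
    (-advantage * log_pi_B_sum).backward()
    opt_b.step()
```

The `retain_graph=True` is needed because both log-probability sums are typically built from the same episode's forward passes; the second `zero_grad` discards the gradient that the mixture policy's backward pass deposits on the base network, matching the paper's rule that the base network is updated only through $\pi_B$.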
3.2 VALUE TRANSFER

In this case, the solutions being transferred are the source tasks' action-value functions, which we will call $Q$ functions. Thus, $K_1, \ldots, K_N, K_B, K_T \equiv Q_1, \ldots, Q_N, Q_B, Q_T$. Let $A$ represent the discrete action space for the tasks and $Q_i(s) = \{Q(s, a_j)\; \forall a_j \in A\}$. The agent acts by using $Q_T$ in the target task, which is obtained as described in Eq. (1) and Eq. (2). The attention network and the base network of A2T are updated as described in the architecture.

3.2.1 VALUE TRANSFER IN Q-LEARNING USING A2T

The state-action value function $Q$ is used to guide the agent in selecting the optimal action $a$ at a state $s$, where $Q(s, a)$ is a measure of the long-term return obtained by taking action $a$ at state $s$. One way to learn optimal policies for an agent is to estimate the optimal $Q(s, a)$ for the task. Q-learning [Watkins & Dayan (1992)] is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation [Puterman (1994)] with the rewards obtained from the task:

$$Q(s, a) \leftarrow \mathbb{E}\left[r(s, a, s') + \gamma \max_{a'} Q(s', a')\right]$$

In high dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs. One way to address this issue is by approximating $Q(s, a)$ through a parametrized function approximator $Q(s, a; \theta)$, thereby generalizing over states and actions by operating on higher level features [Sutton & Barto (1998)]. The DQN [Mnih et al. (2015)] approximates the Q-value function with a deep neural network to be able to predict $Q(s, a)$ over all actions $a$, for all states $s$. The loss function used for learning a Deep Q Network is:

$$L(\theta) = \mathbb{E}_{s,a,r,s'}\left[\left(y^{DQN} - Q(s, a; \theta)\right)^2\right], \quad \text{with } y^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta^-)$$

Here, $L$ represents the expected TD error corresponding to the current parameter estimate $\theta$. $\theta^-$ represents the parameters of a separate target network, while $\theta$ represents the parameters of the online network. The usage of a target network is to improve the stability of the learning updates. The gradient descent step is:

$$\nabla_{\theta} L(\theta) = \mathbb{E}_{s,a,r,s'}\left[\left(y^{DQN} - Q(s, a; \theta)\right) \nabla_{\theta} Q(s, a)\right]$$

To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay [Lin (1993)] $D$ (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.

We use DQN to learn our experts $Q_i, i \in 1, 2, \ldots, N$ on the source tasks. Q-learning is used to ensure $Q_T(s)$ is driven to a good estimate of the $Q$ functions for the target task. Taking advantage of the off-policy nature of Q-learning, both $Q_B$ and $Q_T$ can be learned from the experiences gathered by an $\epsilon$-greedy behavioral policy based on $Q_T$. Let the attention network that outputs $w$ be parametrized by $\theta_a$ and the base network outputting $Q_B$ be parametrized by $\theta_b$. Let $\theta_a^-$ and $\theta_b^-$ represent the parameters of the respective target networks. Note that the usage of "target" here is to signify the parameters ($\theta_a^-, \theta_b^-$) used to calculate the target value in the Q-learning update, and is different from its usage in the context of the target task. The update equations are:

$$y^{Q_T} = r + \gamma \max_{a'} Q_T(s', a'; \theta_a^-, \theta_b^-) \quad (10)$$

$$L_{Q_T}(\theta_a, \theta_b) = \mathbb{E}_{s,a,r,s'}\left[\left(y^{Q_T} - Q_T(s, a; \theta_a, \theta_b)\right)^2\right] \quad (11)$$

$$L_{Q_B}(\theta_b) = \mathbb{E}_{s,a,r,s'}\left[\left(y^{Q_T} - Q_B(s, a; \theta_b)\right)^2\right] \quad (12)$$

$$\nabla_{\theta_a} L_{Q_T} = \mathbb{E}\left[\left(y^{Q_T} - Q_T(s, a)\right) \nabla_{\theta_a} Q_T(s, a)\right] \quad (13)$$

$$\nabla_{\theta_b} L_{Q_B} = \mathbb{E}\left[\left(y^{Q_T} - Q_B(s, a)\right) \nabla_{\theta_b} Q_B(s, a)\right] \quad (14)$$

$\theta_a$ and $\theta_b$ are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network (Eq. (11)) and the base network (Eq. (12)) use the target value generated by $Q_T$. We use target networks for both $Q_B$ and $Q_T$ to stabilize the updates and reduce the non-stationarity, as in DQN training. The parameters of the target networks are periodically updated to those of the online networks.
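The following sketch mirrors the updates of Eqs. (10)-(14) for one sampled minibatch; the `(1 - done)` terminal masking and all function and variable names are our assumptions on top of the paper's equations.

```python
import torch
import torch.nn.functional as F

def a2t_q_losses(q_T, q_T_target, q_B, s, a, r, s_next, done, gamma=0.99):
    """q_T combines the frozen source Q-networks and Q_B through the attention
    weights; q_T_target is its target-network copy (theta_a^-, theta_b^-).
    Both losses regress toward the same target y_{Q_T}."""
    with torch.no_grad():
        # Eq. (10): y_{Q_T} = r + gamma * max_a' Q_T(s', a'; theta_a^-, theta_b^-)
        y = r + gamma * (1.0 - done) * q_T_target(s_next).max(dim=1).values
    q_T_sa = q_T(s).gather(1, a.unsqueeze(1)).squeeze(1)
    q_B_sa = q_B(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss_T = F.mse_loss(q_T_sa, y)  # Eq. (11); gradient w.r.t. theta_a gives Eq. (13)
    loss_B = F.mse_loss(q_B_sa, y)  # Eq. (12); gradient w.r.t. theta_b gives Eq. (14)
    return loss_T, loss_B
```

In training, `loss_T` would be minimized with respect to $\theta_a$ and `loss_B` with respect to $\theta_b$, e.g., with RMSProp as the paper does.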
4 EXPERIMENTS AND DISCUSSION

We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., a chain world and a puddle world, as described below. The main goal of these experiments is to test the consistency of the results with the algorithm's motivation.

Chain world: Figure 2a shows the chain world, where the goal of the agent is to go from one point in the chain (the starting state) to another point (the goal state) in the least number of steps. At each state the agent can choose to either move one position to the left or to the right. After reaching the goal state the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the Reinforcement Learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of four actions: move one position to the north, south, east or west. With 0.9 probability the agent moves in the chosen direction and with 0.1 probability it moves in a random direction irrespective of its choice of action. On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid the agent gets different penalties, as mentioned in the legend of the figures.

[Figure 2: Different worlds for policy transfer experiments. (a) Chain World, (b) Puddle World 1, (c) Puddle World 2.]

We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform [Bellemare et al. (2012)]. Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark tasks for deep reinforcement learning algorithms [Mnih et al. (2015), Mnih et al. (2016), Parisotto et al. (2015), Rusu et al. (2016)]. We perform our adaptive transfer learning experiments on the Atari 2600 game Pong.

4.1 ABILITY TO DO SELECTIVE TRANSFER

In this section, we consider the case when multiple partially favorable source tasks are available, such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network.

[Figure 3: Results of the selective policy transfer experiments. (a) The weights given by the attention network; selective transfer in REINFORCE. (b) Selective transfer in Actor-Critic.]

This is illustrated for the Policy Transfer setting using the chain world shown in Fig. 2a. Consider that the target task $L_T$ is to start in $A$ or $B$ with uniform probability and reach $C$ in the least number of steps. Now, consider that two learned source tasks, viz., $L_1$ and $L_2$, are available. $L_1$ is the source task where the agent has learned to reach the left end ($A$) starting from the right end ($B$). In contrast, $L_2$ is the source task where the agent has learned to reach the right end ($B$) starting from the left end ($A$). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks $L_1$ and $L_2$. We learn to solve the task $L_T$ using REINFORCE, given the policies learned for $L_1$ and $L_2$. Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore $L_1$ and $L_2$ for the left and right halves of the state space of the target task, respectively. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. We again observe that the attention network has learned to ignore $L_1$ and $L_2$ for the left and right halves of the state space of the target task, respectively. As the base network replicates $\pi_T$ over time, it has a high weight throughout the state space of the target task.

We also evaluate our architecture in the relatively more complex puddle world shown in Figure 2c. In this case, $L_1$ is the task of moving from $S1$ to $G1$, and $L_2$ is the task of moving from $S2$ to $G1$. In the target task $L_T$, the agent has to learn to move to $G1$ starting from either $S1$ or $S2$, chosen with uniform probability. We learn the task $L_T$ using the Actor-Critic method, where the following are available: (i) the learned policy for $L_1$, (ii) the learned policy for $L_2$, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results.
We observe that Actor-Critic using A2T is able to use the policies learned for $L_1$ and $L_2$ and performs better than a network learning from scratch without any knowledge of the source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong. We take inspiration from a real world scenario in the sport of tennis, where one could imagine two different right-handed (or left-handed) players, the first being an expert on the forehand but weak on the backhand, while the second is an expert on the backhand but weak on the forehand. For someone who is learning to play tennis with the same style (right/left) as the experts, it is easy to follow the forehand expert whenever he receives a ball on the forehand and to follow the backhand expert whenever he receives a ball on the backhand.

We try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want to force the agent to be weak at returning the ball. The blurring we use simply blacks out all pixels in the specific region required. To make sure the blurring doesn't contrast with the background, we modify Pong to be played with a black background (pixel value 0) instead of the existing gray (pixel value 87). We construct two partially helpful source task experts $L_1$ and $L_2$. $L_1$ is constructed by training a DQN on Pong with the upper quadrant (the agent's side) blurred, while $L_2$ is constructed by training a DQN with the lower quadrant (the agent's side) blurred. This essentially results in the ball being invisible when it is in the upper quadrant for $L_1$ and in the lower quadrant for $L_2$. We therefore expect $L_1$ to be useful in guiding the agent to return balls in the lower quadrant, and $L_2$ for the upper quadrant. The goal of the attention network is to learn suitable filters and parameters so that it will focus on the correct source task for a specific situation in the game. The source task experts $L_1$ and $L_2$ scored an average of 9.2 and 8 respectively on Pong game play with a black background. With an attention network to suitably weigh the value functions of $L_1$ and $L_2$, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training. (The score in Pong is in the range of $[-21, 21]$.) This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig. 4 shows a visualisation of the attention weights for the same.

[Figure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: green and blue bars signify the attention probabilities for Expert-1 ($L_1$) and Expert-2 ($L_2$) respectively. In the first two snapshots, the ball is in the lower quadrant and, as expected, the attention is high on Expert-1; in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.]

We then evaluate our full architecture (A2T) in this setting, i.e., with the addition of a DQN learning from scratch (the base network) to the above setting. The architecture can take advantage of the knowledge of the source task experts selectively early on during the training, while using the expertise of the base network wherever required, to perform well on the target task.

[Figure 5: Selective Value Transfer.]
Figure 5 summarizes the results, where it is clear that learning with both the partially useful experts is better than learning with only one of them, which in turn is better than learning from scratch without any additional knowledge.

4.2 ABILITY TO AVOID NEGATIVE TRANSFER AND ABILITY TO TRANSFER FROM A FAVORABLE TASK

We first consider the case when only one learned source task is available, such that its solution $K_1$ (policy or value) can hamper the learning process of the new target task. We refer to such a source task as an unfavorable source task. In such a scenario, the attention network shown in Figure 1a should learn to assign a very low weight to (i.e., ignore) $K_1$. We also consider a modification of this setting by adding another source task whose solution $K_2$ is favorable to the target task. In such a scenario, the attention network should learn to assign a high weight to (i.e., attend to) $K_2$ while ignoring $K_1$.

We now define an experiment using the puddle world from Figure 2b for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state $G1$ starting from any one of the states $S1, S2, S3, S4$. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network. We then add a favorable task to the above setting. We artificially construct the favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results.

The target task for the value transfer experiment is to reach expert-level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is, with $R'(s, a) = -R(s, a)$, where $R(s, a)$ is the reward provided by the ALE emulator for choosing action $a$ at state $s$. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and the same action space as Pong. We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes it a good proxy for a negative source task expert even though the target task Pong has a different state space.

[Figure 6: Avoiding negative transfer and transferring policy from a favorable task (lower is better).]

We artificially construct a favorable source task by learning a DQN to achieve expertise on the target task (Pong) and using the learned network. Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway.

[Figure 7: Avoiding negative transfer and transferring value from a favorable task (higher is better). (a) Avoiding negative transfer (Pong) and transferring from a favorable task. (b) Avoiding negative transfer (Freeway) and transferring from a favorable task. Specific training and architecture details are mentioned in the APPENDIX. The plots are averaged over two runs with different random seeds.]

From all the above results, we can clearly see that A2T does not get hampered by the unfavorable source task: it learns to ignore it and performs competitively with a randomly initialized network learning on the target task without any expert available.
Secondly, in the presence of an additional source task that is favorable, A2T learns to transfer useful knowledge from it while ignoring the unfavorable task, thereby reaching expertise on the target task much faster than in the other scenarios.

4.3 VISUALIZATION: EVOLUTION OF ATTENTION WEIGHTS WITH ONE POSITIVE AND ONE NEGATIVE EXPERT

We present the evolution of attention weights for the experiment described in Section 4.2, where we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and transfer from a favorable source task (a perfect expert). Figure 8 depicts the evolution of the attention weights (normalised in the range of $[0, 1]$) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts, one being a perfect Pong-playing trained DQN (to serve as the positive expert), and the other being the Inverse-Pong DQN trained with negated reward functions (to serve as the negative expert). Additionally, there is also the base network that learns from scratch using the experience gathered by the attentively combined behavioral policy from the expert networks, the base network and itself.

[Figure 8: Evolution of attention weights with one positive and one negative expert.]

We train the framework for 30 epochs, and the plot illustrates the attention weights every second epoch. We clearly see from Figure 8 that no undesirable co-adaptation happens in the training, and the attention on the negative expert is uniformly low throughout. Initially, the framework needs to collect some amount of experience to figure out that the positive expert is optimal (or close to optimal). Until then, the attention is mostly on the base network, which is learning from scratch. The attention then shifts to the positive expert, which in turn provides more rewarding episodes and transition tuples to learn from. Finally, the attention drifts slowly back to the base network from the positive expert, after which the attention is roughly random in choosing between the execution of the positive expert and the base network. This is because the base network has acquired expertise comparable to the positive expert, which happens to be optimal for the target task. This visualization clearly shows that A2T is a powerful framework for ignoring a negative expert throughout while using a positive expert appropriately to learn quickly from the experience gathered and acquire sufficient expertise on the target task.

4.4 WHEN A PERFECT EXPERT IS NOT AVAILABLE AMONG THE SOURCE TASKS

In our experiments in the previous subsection dealing with the prevention of negative transfer and the use of a favorable source task, we considered the positive expert to be a perfect (close to optimal) expert on the same task we treat as the target task. This raises the question of relying on the presence of a perfect expert as a positive expert. If we have such a situation, the obvious solution is to execute each of the experts on the target task and vote for them with probabilities proportional to the average performance of each.

[Figure 9: Partial Positive Expert Experiment.]

The A2T framework is, however, generic and not intended to just do source task selection. We illustrate this with an additional baseline experiment, where the positive source task is an imperfect expert on the target task.
In such a case, just having a weighted-average vote among the available source task networks based on their individual average rewards is upper bounded by the performance of the best available positive expert, which here happens to be an imperfect expert on the target task. Rather, the base network has to acquire new skills not present in the source task networks. We choose a partially trained network on Pong that scores an average of 8 (max: 21). The graph in Figure 9 clearly shows that the A2T framework with a partial Pong expert and a negative expert performs better than (i) learning from scratch and (ii) A2T with only one negative expert, and performs worse than A2T with one perfect positive expert and one negative expert. This is expected because a partial expert cannot provide as much expert knowledge as a perfect expert, but it still provides some useful knowledge that speeds up the process of solving the target task. An important conclusion from this experiment is that the A2T framework is capable of discovering new skills not available among any of the experts when such skills are required for optimally solving the target task. To maintain consistency, we perform the same number of runs for averaging scores, and we experimented with both learning rates and picked the better performing one (0.00025).

5 CONCLUSION AND FUTURE WORK

In this paper we present a very general deep neural network architecture, A2T, for transfer learning that avoids negative transfer while enabling selective transfer from multiple source tasks in the same domain. We show simple ways of using A2T for policy transfer and value transfer. We empirically evaluate its performance with different algorithms, using simulated worlds and games, and show that it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be used for transferring other useful knowledge such as a model of the world.

While in this work we focused on transfer between tasks that share the same state and action spaces and are in the same domain, the use of deep networks opens up the possibility of going beyond this setting. For example, a deep neural network can be used to learn common representations [Parisotto et al. (2015)] for multiple tasks, thereby enabling transfer between related tasks that could possibly have different state-action spaces. A hierarchical attention over the lower level filters across source task networks, while learning the filters for the target task network, is another natural extension to transfer across tasks with different state-action spaces. The setup from Progressive Neural Networks [Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained for the policy/value transfer. Exploring this setting for continuous control tasks, so as to transfer from modular controllers as well as avoid negative transfer, is also a potential direction for future research.

The nature of tasks considered in our experiments is naturally connected to Hierarchical Reinforcement Learning and Continual Learning. For instance, the blurring experiments inspired by tennis, based on experts for specific skills like forehand and backhand, could be considered as learning from sub-goals (program modules) like forehand and backhand to solve a more complex and broader task like tennis by invoking the relevant sub-goals (program modules).
This structure could be very useful for building a household robot for general-purpose navigation and manipulation, whereby specific skills such as manipulating different objects, navigating across different source-destination points, etc., could be invoked when necessary. The attention network in the A2T framework is essentially a soft meta-controller and hence presents itself as a powerful differentiable tool for Continual and Meta Learning. Meta-controllers have typically been designed with a discrete decision structure over high-level subgoals. This paper presents an alternate differentiable meta-controller with a soft-attention scheme. We believe this aspect can be exploited for differentiable meta-learning architectures for hierarchical reinforcement learning. Overall, we believe that A2T is a novel way to approach different problems like Transfer Learning, Meta-Learning and Hierarchical Reinforcement Learning, and further refinements on top of this design can be a good direction to explore.

ACKNOWLEDGEMENTS

Thanks to the anonymous reviewers of ICLR 2017 who have provided thoughtful remarks and helped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, Yoshua Bengio, Sarath Chandar, Caglar Gulcehre and Charu Chauhan for useful feedback about the work.
S1CEWJdSl
Final Review: A learned convex combination of many fixed experts and one jointly learned expert is used to represent action policies in proof-of-concept transfer/hierarchical RL settings.
7: Good paper, accept
In this paper a well known soft mixture-of-experts model is adapted for, and applied to, a specific type of transfer learning problem in reinforcement learning (RL), namely transfer of action policies and value functions between similar tasks. Although not treated as such, the experimental setup is reminiscent of hierarchical RL works, an aspect which the paper regrettably does not consider at length. One possible implication of this work is that architecture and even learning algorithm choices could simply be stated in terms of the objective of the target task, rather than being hand-engineered by the experimenter. This is clearly an interesting direction of future work which the paper illuminates. Pros: The paper diligently explains how the network architecture fits in with various widely used reinforcement learning setups, which does facilitate continuation of this work. The experiments are good proofs of concept, but do not go beyond that i.m.h.o. Even so, this work provides convincing clues that collections of deep networks, which were trained on not entirely different tasks, generalize better to related tasks when used together rather than through conventional transfer learning (e.g. fine-tuning). Cons: As the paper well recounts in the related work section, libraries of fixed policies have long been formally proposed for reuse while learning similar tasks. Indeed, it is well understood in the hierarchical RL literature that it can be beneficial to reuse libraries of fixed (Fernandez & Veloso 2006) or jointly learned policies which may not apply to the entire state space, e.g. options (Precup et al.). What is not well understood is how to build such libraries, and this paper does not convincingly shed light in that direction, as far as I can tell. The transfer tasks have been picked to effectively illustrate the potential of the proposed architecture, but the paper does not tackle negative transfer or compositional reuse in well known challenging situations outlined in previous work (e.g. Parisotto et al. 2015, Rusu et al. 2015, 2016). Since the main contributions are of an empirical nature, I am curious how the results shown in figures 6 & 7 look plotted against wall-clock time, since relatively low data efficiency is not a limitation for achieving perfect play in Pong (see Mnih et al., 2015). It would be more illuminating to consider tasks where final performance is plausibly limited by data availability. It would also be interesting if the presented results were achieved with reduced amounts of computation, or reduced representation sizes compared to learning from scratch, especially when one of the useful source tasks is an actual policy trained on the target task. Finally, it is perhaps underwhelming that it takes a quarter of the data required for learning Pong from scratch just to figure out that a perfect Pong policy is already in the expert library. Simply evaluating each expert for 10 episodes and using an average-score-weighted majority vote to mix action choices would probably achieve the same final performance for a smaller fraction of the data.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ByBwSPcex
ICLR.cc/2017/conference
2017
Song From PI: A Musically Plausible Network for Pop Music Generation
["Hang Chu", "Raquel Urtasun", "Sanja Fidler"]
We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.
["Applications"]
ABSTRACT

We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.

1 INTRODUCTION

Neural networks have revolutionized many fields. They have not only proven to be powerful in performing perception tasks such as image classification and language understanding, but have also shown to be surprisingly good "artists". In Gatys et al. (2015), photos were turned into paintings by exploiting particular drawing styles such as Van Gogh's; Kiros et al. (2015) produced stories about images biased by writing style (e.g., romance books); Karpathy et al. (2016) wrote Shakespeare-inspired novels; and Simo-Serra et al. (2015) gave fashion advice.

Music composition is another artistic domain where neural based approaches have been proposed. Early approaches exploiting Recurrent Neural Networks (Bharucha & Todd (1989); Mozer (1996); Chen & Miikkulainen (2001); Eck & Schmidhuber (2002)) date back to the 80's. The main variations between the different models are the representation of the notes and the outputs they produced, which typically encode melody and chord. Most of these approaches were single track, in that they produced only one note per time step. The exception is Boulanger-Lewandowski et al. (2012), which generated polyphonic music, i.e., simultaneous independent melodies.

In this paper, we aim to generate pop music, where the melody but also chords and other instruments make up what is typically called a song. We draw inspiration from the Song from π by Macdonald (https://youtu.be/OMq9he-5HUU), a piano video on Youtube, where the pleasing music is created from a sequence of digits of π. This video shows both the randomness and the regularity of music. On one hand, since any possible digit sequence is a subset of the digit sequence of π, this implies that pleasing music can be created even from a totally random base signal. On the other hand, the composer uses specific rules such as the A Harmonic Minor scale and harmonies to convert the digit sequence into a music sheet. It is these rules that play the key role in converting randomness into music.

Following the ideas of Songs from π, we aim to generate both the melody as well as accompanying effects such as chords and drums. Arguably, these turn even a not particularly pleasing melody into a well sounding song. We propose a hierarchical approach, where each level is a Recurrent Neural Network producing a key aspect of the song. The bottom layers generate the melody, while the higher levels produce drums and chords. This enables the drum and chord layers to compensate for the melody in order to produce pleasing music. Adopting the key idea from Songs from π, we condition our model on the scale type, allowing the melody generator to learn the notes that are typically played in a particular scale.

We train our model on 100 hours of MIDI music containing user-composed pop songs and video game music.
We conduct human studies with music generated with our approach and compare it against a recent approach by Google, showing that our songs are strongly preferred over the baseline. In our human study we also perform an ablation analysis of our model. We additionally show two new applications: neural dancing and karaoke, as well as neural music singing. As part of the first application we generate a stickman dancing to our music and lyrics that can be sung with, while in the second application we condition on the output of Kiros et al. (2015), which writes a story about an image, and convert it into a pop song. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for our demos and results.

2 RELATED WORK

Generating music has been an active research area for decades. It brings together machine learning researchers that aim to capture the complex structure of music (Eck & Schmidhuber (2002); Boulanger-Lewandowski et al. (2012)), as well as music professionals (Chan et al. (2006)) and enthusiasts (Johnson; Sun) that want to see how far a computer can get to be a real composer. Real-time music generation is also explored for gaming (Engels et al. (2015)).

Early approaches mostly instilled knowledge from music theory into generation, by using rules of how music segments can be stitched together in a plausible way, e.g., Chan et al. (2006). On the other hand, neural networks have been used for music generation since the 80's (Bharucha & Todd (1989); Mozer (1996); Chen & Miikkulainen (2001); Eck & Schmidhuber (2002)). Mozer (1996) used a Recurrent Neural Network that produced pitch, duration and chord at each time step. Unlike most other neural network approaches, this work encodes music knowledge into the representation. Eck & Schmidhuber (2002) were the first to use LSTMs to generate both melody and chord. Compared to Mozer (1996), the LSTM captured more global music structure across the song.

Like us, Kang et al. (2012) built upon the randomness of melody by trying to accompany it with drums. However, in their model the scale type is enforced. No details about the model are given, and thus it is virtually impossible to compare against. Boulanger-Lewandowski et al. (2012) propose to learn complex polyphonic musical structure which has multiple notes playing in parallel through the song. The model is single-track in that it only produces melody, whereas in our work we aim to produce multi-track songs. Just recently, Huang & Wu (2016) proposed a 2-layer LSTM that, like Boulanger-Lewandowski et al. (2012), produces music that is more complex than a single note sequence, and is able to produce chords. The main novelty of our work over existing approaches is a hierarchical model that incorporates knowledge from music theory to build the neural architecture, and produces multi-track pop music (melody, chord, drum). We also present two novel fun applications.

3 CONCEPTS FROM MUSIC THEORY

We start by introducing the basic notation and definitions from music theory. A note defines the basic unit that music is composed of. Music follows the 12-tone system, i.e., 12 is the cycle length of all notes. The 12 tones are: C, C♯=D♭, D, D♯=E♭, E, F, F♯=G♭, G, G♯=A♭, A, A♯=B♭, B. A bar is a short segment of time that corresponds to a specific number of beats (notes). The boundaries of the bar are indicated by vertical bar lines.

A scale is a subset of notes. There are four types of scales most commonly used: Major (Minor), Harmonic Minor, Melodic Minor and Blues. Each scale type specifies a sequence of relative intervals (or shifts) which act relative to the starting note. For example, the sequence for the scale type Major is 2→2→1→2→2→2→1. Thus, C Major specifies the starting note to be C, and applying the relative sequence of shifts yields: C (+2)→ D (+2)→ E (+1)→ F (+2)→ G (+2)→ A (+2)→ B (+1)→ C. The subset of notes specified by C Major is thus C, D, E, F, G, A, and B (a subset of seven notes). All scale types have a subset of seven notes except for Blues, which has six. In total we have 48 unique scales, i.e., 4 scale types and 12 possible starting notes. We treat Major and Minor as one type, as for a Major scale there is always a Minor that has exactly the same set of notes. In music theory, this is referred to as Relative Minor.
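As a small illustration of the scale rule above, the snippet below enumerates scale note subsets from interval patterns. The paper only spells out the Major pattern; the Harmonic Minor, Melodic Minor and Blues interval lists here are standard music-theory choices and should be read as assumptions.

```python
# Enumerating the 48 scale note-subsets (4 scale types x 12 start notes).
TONES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
SCALE_INTERVALS = {
    "Major":         [2, 2, 1, 2, 2, 2, 1],   # given in the text
    "HarmonicMinor": [2, 1, 2, 2, 1, 3, 1],   # assumed standard pattern
    "MelodicMinor":  [2, 1, 2, 2, 2, 2, 1],   # assumed standard (ascending) pattern
    "Blues":         [3, 2, 1, 1, 3, 2],      # assumed standard hexatonic pattern
}

def scale_notes(start_note, scale_type):
    """Apply the relative shifts to the start note, modulo the 12-tone cycle."""
    idx = TONES.index(start_note)
    notes = [TONES[idx]]
    for shift in SCALE_INTERVALS[scale_type][:-1]:  # the last shift returns to the start
        idx = (idx + shift) % 12
        notes.append(TONES[idx])
    return notes

# C Major -> ['C', 'D', 'E', 'F', 'G', 'A', 'B'], matching the example above.
print(scale_notes("C", "Major"))
```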
[Figure 1: Overview of our framework, with the key layer, press layer, chord layer and drum layer; only skip connections for the current time step t are plotted.]

A chord is a group of notes that sound good together. Similarly to a scale, a chord has a start note and a type defining a set of intervals. There are mainly 6 types of triad chords: Major Chord, Minor Chord, Augmented Chord, Diminished Chord, Suspended 2nd Chord, and Suspended 4th Chord. The Circle of Fifths is often used to produce a chord progression. It maps the 12 chord starting notes to a circle. When changing from one chord to another, moving to a nearby chord on the circle is often preferred, as this forms a strong chord progression that produces the sense of harmony.

4 HIERARCHICAL RECURRENT NETWORKS FOR POP MUSIC GENERATION

We follow the high level idea behind the Song from π to define our model. In particular, we generate music with a hierarchical Recurrent Neural Network where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. We first outline the model and describe the details and justifications for our choices in the subsections that follow.

We condition our generation on the scale type, as this helps the model to pick up the regularities in pop songs. We encode melody with two random variables at each time step, representing which key is being played (the key layer) and the duration that the key will be pressed (the press layer). The melody is generated conditioned on the scale, which does not vary across the song, as is typically the case in pop music. We assume the drums and the chords are independent given the melody. Thus, conditioned on the melody, at each time step we generate the chord (the chord layer) as well as the drums (the drum layer). The output of all layers together yields the final song. We refer the reader to Fig. 1 for an illustration of our hierarchical model.

4.1 THE ROLE OF SCALE

It is known from music theory that while in principle each song has 12 tones to choose from, most of the notes in fact use only the six (for Blues) or seven (for other scales) tone subsets specified by the scale rule. We found that by conditioning the music generator on scale, it captures these regularities more easily. However, we do not enforce the notes to be generated from the subset and allow our model to generate notes outside the scale.

We confirm the above musical fact by analysing over 100 hours of pop song music from the midi man dataset.
Since scale is defined relative to a starting note, we first try to factor out its influence and normalize all songs to have an identical start note. To identify the scale of a song, we compute the histogram over the 12 tones and match it with the 48 tone subsets of the 4 scale types with 12 different start notes. We then normalize all songs to have start note C by applying a constant shift on all notes. This allows us to categorize any song into the 4 scale types. Since this shift affects all notes at once, it does not affect how the song sounds (its harmony). Our analysis shows that for all notes in all Major scale songs, 94.66% are within the tone subset. For Harmonic Minor, Melodic Minor, and Blues, the percentage of notes that belong to the main tone set is 87.16%, 85.11%, and 90.93%, respectively.

[Figure 2: Distribution of the within-scale note ratio for the four scale types; x-axis: percentage of tones within the scale type's tone set, y-axis: percentage of songs of the scale type. (a)-(d) show Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively.]

We refer the reader to Fig. 2, where the x-axis denotes the percentage of within-scale notes of a song, and the y-axis indicates how many songs in the dataset have that percentage. Note that the majority of the notes follow the scale rule. Furthermore, different scale types have different inlier distributions. We thus represent scale with a single random variable $s \in \{1, \ldots, 4\}$ which is fixed for the whole song, and condition the model on it. (For readers with a musical background: the Twelve-Tone Serialism technique of Schoenberg & Newlin (1951) prevents emphasis of any one tone. However, our data analysis indicates that pop music is not influenced by it.)

4.2 TWO-LAYER RNN FOR MELODY GENERATION

We represent the melody with two random variables per time step: which key is pressed, and the duration of the press. We use an RNN to generate the keys conditioned on the scale. Then, conditioned on the output of the key layer, a second RNN generates the duration of the press at each time step.

In particular, we model the key layer with a two-layer LSTM (Hochreiter & Schmidhuber (1997)) with a 512-dimensional hidden state, which outputs a note (key) at each time step. Note that we condition on the scale $s$, thus we have a different set of weights for each scale. We only allow notes between C3 and C6, as notes outside this range are usually too low or too high to sound good. We remind the reader that given a scale, seven (or six for Blues) out of the twelve notes (per octave) are statistically more plausible; however, we allow the model to choose from all 12. This results in a 37-dimensional output, as there are 36 possible notes corresponding to 3 octaves with 12 notes per octave, plus silence. Let $h^t_{key}$ be the hidden state of the second key decoder layer at time $t$. We compute the probability of each key using the softmax:

$$P(y^t_{key}) \propto \exp(v_{y^t_{key}} h^t_{key}) \quad (1)$$

where $v_{y^t_{key}}$ is the row of $V$ (the output embedding matrix of notes) corresponding to note $y^t_{key}$.

As input to the LSTM we use a vector that concatenates multiple features: a one-hot encoding of the previously generated note $y^{t-1}_{key}$, Lookback features, and the melody profile. The Lookback features were proposed by Google Magenta (Waite et al.) to make it easier for the model to memorize recently produced notes and potentially repeat them. They include skip connections from two and one bar ago (a bar is 8 consecutively played notes), i.e., $y^{t-16}_{key}$ and $y^{t-8}_{key}$. They also contain two additional features, indicating whether the last generated key has been copied from one or two bars ago, i.e., $\mathbb{1}(y^{t-1}_{key}, y^{t-1-8}_{key})$ and $\mathbb{1}(y^{t-1}_{key}, y^{t-1-16}_{key})$. They also add a 5-dimensional feature indicating a binary encoding of the current time $t$. This helps the model keep track of where in a 4-bar range it is, and thus produce music accordingly.
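A minimal PyTorch sketch of the key layer just described: a two-layer LSTM over the concatenated input features, with a 37-way softmax as in Eq. (1). The feature sizes follow the text, but the class name and the exact feature packing are our assumptions (and the paper keeps one such set of weights per scale type).

```python
import torch
import torch.nn as nn

class KeyLayer(nn.Module):
    NOTES = 37                  # 3 octaves x 12 notes + silence
    LOOKBACK = 2 * 37 + 2 + 5   # notes from 1 and 2 bars ago, 2 copy flags, 5-bit time
    PROFILE = 10                # one-hot melody-profile cluster id

    def __init__(self, hidden=512):
        super().__init__()
        in_dim = self.NOTES + self.LOOKBACK + self.PROFILE  # y_{key}^{t-1} + lookback + profile
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.V = nn.Linear(hidden, self.NOTES, bias=False)  # output embedding matrix V

    def forward(self, x, state=None):
        h, state = self.lstm(x, state)               # h_t^{key}
        logits = self.V(h)                           # v_y . h_t^{key}
        return torch.softmax(logits, dim=-1), state  # P(y_t^{key}) of Eq. (1)
```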
In addition, we introduce a new feature which we refer to as the melody profile. Intuitively, the profile represents the high-level music flow. To get the profile for each song, we compute the local note histogram at each time step with a width of two bars, and cluster all local histograms within the song into 10 clusters via k-means. We order the 10 clusters by mean note, from low to high, as clusters 1 to 10, and apply moving averages on the cluster id sequence to encourage local smoothness. This results in a 10-dimensional one-hot vector representation of the cluster id for each time step. This additional information allows the user to set the melody's ups and downs in the song.

The keys alone are not sufficient to describe how the melody is performed. Additionally, we also need to know the duration for which each key is pressed. Towards this goal, conditioned on the melody, we generate the duration of each key with a two-layer LSTM with a 512-dimensional hidden state. We represent the duration of pressing as a forward counting sequence that is conditioned on the generated melody. The press outputs 1 when a new key is pressed, and sequentially outputs 2, 3, 4 and so on as the key is held. When the current key is released, the press counter is reset to 1. Compared to the event on-off representation of Waite et al., our representation learns the melody flow and how to press separately. This is important, as Waite et al. have extremely unbalanced output distributions dominated by the repeat-of-holding event. We represent press $y^t_{prs}$ as an 8-dimensional one-hot vector. The input to our LSTM is $y^{t-1}_{prs}$, concatenated with the 37-dimensional one-hot encoding of the melody key $y^t_{key}$.
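The forward-counting press representation can be computed from a key sequence as in this short sketch; treating a repeated key value as a held note, and capping counts at 8 (the one-hot size mentioned above), are our assumptions.

```python
def press_counts(key_seq, cap=8):
    """1 on a newly pressed key, then 2, 3, ... while the same key is held,
    reset to 1 when the key changes (i.e., the previous key was released)."""
    counts, prev, run = [], None, 0
    for k in key_seq:
        run = run + 1 if k == prev else 1
        counts.append(min(run, cap))
        prev = k
    return counts

# e.g. press_counts([60, 60, 60, 62, 62]) -> [1, 2, 3, 1, 2]
```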
The keys alone are not sufficient to describe how the melody is performed. We additionally need to know the duration for which each key is pressed. Towards this goal, conditioned on the melody, we generate the duration of each key with a two-layer LSTM with a 512-dimensional hidden state. We represent the duration of pressing as a forward-counting sequence that is conditioned on the generated melody. The press outputs 1 when a new key is pressed, and sequentially outputs 2, 3, 4 and so on while the key is held. When the current key is released, the press counter is reset to 1. Compared to the event on-off representation of Waite et al., our representation learns the melody flow and how to press separately. This is important, as Waite et al. has extremely unbalanced output distributions dominated by the repeat-of-holding event. We represent press $y^t_{prs}$ as an 8-dimensional one-hot vector. The input to our LSTM is $y^{t-1}_{prs}$, concatenated with the 37-dimensional one-hot encoding of the melody key $y^t_{key}$.

4.3 CHORD AND DRUM RNN LAYERS

We studied all existing chords in our 100 hours of pop music. Although in principle a chord can be any arbitrary combination of multiple notes, we observed that in the actual music data 99.19% of the chords belong to one of 72 chord classes (6 types × 12 start notes). Fig. 3 shows the correlation between the melody's tone and the starting note of the chord playing at the same time. It can be seen that chord is strongly correlated with melody. These two findings inspire our design. We thus represent chord $y^t_{chd}$ as a one-hot encoding with 72 classes, and predict it using a two-layer LSTM with a 512-dimensional hidden state. We generate one chord at each time step. The input is $y^{t-4}_{chd}$ concatenated with $y^{t-3:t}_{key}$.

[Figure 3: Co-occurrence of tones in melody (y-axis) and chord (x-axis). (a)-(d) show Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively.]

We look at our music dataset and find all unique drum patterns with a duration of half a bar. We then compute the histogram of all the patterns. This forms a long-tail distribution, where 94.60% of the mass comes from the top 100 common patterns. We generate drums conditioned on the key layer using a two-layer LSTM with a 512-dimensional hidden state. Drum $y^t_{drm}$ is represented as a one-hot encoding over the 100 unique drum patterns. The input is $y^{t-4}_{drm}$ concatenated with the notes from the preceding time steps $y^{t-3:t}_{key}$.

4.4 LEARNING

We use cross-entropy as our loss function to train each layer. We follow the typical training strategy where we make predictions at each layer and time step but feed in ground-truth information to the next. This effectively decomposes training, and allows us to train all layers in parallel. We use the Adam optimizer, a learning rate of 2e-3, and a learning rate decay of 0.99 after each epoch, for 10 epochs.

4.5 MUSIC SYNTHESIS: PUTTING ALL THE OUTPUTS TOGETHER

To synthesize music we first randomly choose a scale and a profile $x_{prf}$. For generating $x_{prf}$, we randomly choose one cluster id with a random duration, and repeat until we reach the desired total length of the music sequence. We then perform inference in our model conditioned on the chosen scale, and use $x_{prf}$ as input to our key layer. At each time step, we sample a key according to $P(y^t_{key})$. We encode it as a one-hot vector and pass it to the press, chord and drum layers. We sample the press, chords and drums at each time step in a similar fashion.

[Figure 4: Example of our music generation. From top to bottom: melody, chord and drum, respectively.]

Before putting the outputs across layers together, we further adjust the generated sequences at the bar level. For melody, we first check at each bar whether the first step is a continuation of a previous note or silence. If it is the latter, we find the first newly pressed note within the bar and move it to the beginning of the bar. We do the same for the windows of the two half-bars as well as the four quarter-bars. This makes the melody more likely to be on the beat, and it generally sounds better. We verify this in our experiments.

For chords, we generate one chord at each half bar, which is the majority of all single-step chord generations. Furthermore, we incorporate the rule of chord progression on the Circle of Fifths as pairwise smoothness terms between chords, and compute the final chord sequence using dynamic programming. For drums, we generate one pattern at each half bar.
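The chord dynamic program above is not spelled out in the paper, so the following is one standard way to implement it, under our own assumptions: the unary score of each chord class at each half bar is the chord LSTM's log-probability, the pairwise term penalizes the Circle-of-Fifths distance between chord roots, and `lam` trades the two off.

```python
import numpy as np

FIFTHS = [(7 * i) % 12 for i in range(12)]        # C G D A E B F# C# G# D# A# F
POS = {note: i for i, note in enumerate(FIFTHS)}  # position on the circle

def circle_dist(a, b):
    d = abs(POS[a % 12] - POS[b % 12])
    return min(d, 12 - d)

def smooth_chords(log_probs, roots, lam=1.0):
    """Viterbi decoding. log_probs: (T, 72) chord-LSTM log-probabilities per
    half bar; roots: root note of each of the 72 classes, e.g. [c % 12 for c
    in range(72)] if classes are grouped by root (an assumed ordering)."""
    T, C = log_probs.shape
    pair = -lam * np.array([[circle_dist(roots[i], roots[j]) for j in range(C)]
                            for i in range(C)])
    score, back = log_probs[0].copy(), np.zeros((T, C), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pair              # cand[i, j]: prev i -> cur j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_probs[t]
    seq = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]                              # max-scoring chord sequence
```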
Our model generates with scale start note C, and then applies a constant shift to produce music with other start notes. Besides the scale, which instrument to use is also customizable. However, we simply set all instruments to grand piano in all experiments, as the effect and musical meaning of different instrument combinations is beyond the scope of this paper.

5 EXPERIMENTS

To train our model, we took 100 hours of pop music from midi man, which consists of user-composed pop songs and video game music. In our generation, we always use 120 beats per minute with 4 time steps per beat. However, songs in the dataset can have arbitrary speed. To neutralize the effect of this, we detect the most frequent interval between two adjacent notes for each song, and iteratively divide or multiply this interval by 2 until it falls in the range between 0.25 s and 0.5 s. We use this as a measure of the song's beat duration. We then adjust the song's temporal axis so that all songs have the same beat duration of 0.5 s.
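This beat-detection rule can be sketched in a few lines. Rounding onset intervals before taking the mode is our own addition, since the paper does not say how the most frequent interval is computed:

```python
from collections import Counter

def beat_duration(onsets):
    """onsets: sorted note-onset times in seconds (at least two of them).
    Returns the song's beat duration per the rule above: take the most
    frequent adjacent-onset interval, then double or halve it until it
    falls within [0.25 s, 0.5 s]."""
    intervals = [round(b - a, 3) for a, b in zip(onsets, onsets[1:]) if b > a]
    d = Counter(intervals).most_common(1)[0][0]
    while d < 0.25:
        d *= 2
    while d > 0.5:
        d /= 2
    return d

# The temporal axis is then rescaled so this duration becomes 0.5 s:
# every event time is multiplied by 0.5 / beat_duration(onsets).
```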
A MIDI file can be separated into different channels/tracks, where the 9th channel is specifically reserved for drums. We categorize the remaining non-drum tracks into melody, chord, and other, by simply setting thresholds on the average number of unique notes within a bar and the average number of note changes within a bar, as chords are by definition repetitive. Fig. 4 shows an example of our music generation.

To evaluate the quality of our music generation, we conduct a human survey with 27 participants. All subjects are university students who did not have any prior knowledge about the content of our project. In the survey, participants are presented with several pairs of 30-second music clips, and are asked to vote which clip in the pair sounds better. We gave no other information about what they were listening to. They were also allowed to submit a neutral vote in case they could not decide between the two choices. In our study, we consider three cases: our full method versus Magenta (Waite et al.), our method with melody only versus Google Magenta (Waite et al.), and our method versus our method without the temporal alignment described in Sec. 4.5. We randomly generated 10 songs per method and randomly shuffled each pair. For the Magenta baseline we used its Lookback version, which was the latest version at the time of our submission.

Table 1: Human evaluation of music generated by different methods: ours and Waite et al.'s Magenta. Ours-MO and Ours-NA are short for Ours Melody Only and Ours No Alignment. We allowed neutral votes, thus the votes within each pair sum to less than 100%.

  Method       Ours    Magenta  |  Ours-MO  Magenta  |  Ours    Ours-NA
  % of votes   81.6%   14.4%    |  69.6%    13.6%    |  75.2%   12.0%

As shown in Table 1, most participants prefer songs produced by our method over Magenta. Participants also made comments such as "music sounds better with percussion than piano alone" and "multiple instruments with continuous play is much better". This confirms that our multi-layer generation improves music quality. A few participants also point out that "drums sound too different and do not fit the melody perfectly", which indicates that further improvements can still be made. In the second comparison, we study whether the quality improvement of our method is caused only by adding chords and drums, or is also related to our two-layer melody generation with alignment. It can be seen that without chords and drums, the score drops as expected, but is still much higher than the Magenta baseline. This is because our method produces "less recursion and silence" and "faster and more accurate tempo", as mentioned by the participants. In the last comparison, most participants prefer our full method over the no-alignment version, since "beats are more subtle and better timed". This demonstrates the usefulness of the temporal alignment. We performed significance tests on the evaluation results in Table 1. All comparisons passed the significance test at significance level 5%. The lowest alpha values needed to reject the null hypothesis are 1e-19, 1e-14, and 1e-19, respectively. Further experimental results of removing the music scale from our method and of adding temporal alignment to the baseline can be found on our project page.

To examine the suitability of the four scale types we use, we collected the list of all existing musical scales from Wikipedia and measured the scale distribution of the dataset. 37.8% of the data belongs to our four scales, 47.7% belongs to Acoustic, Algerian, Lydian, Adonai Malakh, and Ukrainian, while 14.5% belongs to the remaining 31 uncommon scales such as Insen, Iwato, Yo, and Enigmatic. We also found that the five scales that account for 47.7% are each only one or two degrees away from one of our scales (all notes are the same except one, which is one or two steps away). This experiment shows that even in the most rigorous musical setting, at least 85.5% of online songs are very close to the four scales that we use.

Finally, we study our model's capability to generate new music. Towards this goal, we generated 100 sequences of 50 seconds in length using different random initializations. We perform two evaluations. First, for each sequence, we search for the longest sub-sequence of keys that matches part of the training data, and record its length. This evaluates how much the model copies the training data. Second, we break each generated melody into segments of 2 bars in length (inspired by a common definition of music plagiarism). We then compare each segment to all segments in the rest of the 100 generated songs, and record the repeat count. This evaluates how much the model repeats itself. For comparison, we repeat the same evaluation for the Magenta baseline and for human-composed music. Table 2 reports the results. It can be seen that our method performs similarly to Magenta in terms of copying (sub-seq). It is somewhat surprising that human composers in fact tend to copy more from other songs, which indicates that both generation approaches could be further relaxed in terms of copying. Our method is less likely to generate recurring melodies (repeat) compared to Magenta, and is closer to the statistics of human-produced songs.

Table 2: Evaluation of the longest sub-sequence matching the training data, and of self-repeat counts.

            Human   Magenta   Ours
  sub-seq    7.06      4.39   4.65
  repeat     4.04     17.08   2.33
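The paper leaves the sub-sequence search unspecified; a straightforward realization is the classic longest-common-substring dynamic program between each generated key sequence and every training song, sketched below.

```python
def longest_match(generated, training_songs):
    """Length of the longest contiguous run of keys in `generated` that
    appears verbatim in any training song (O(n*m) per song)."""
    best = 0
    for song in training_songs:
        prev = [0] * (len(song) + 1)
        for i in range(1, len(generated) + 1):
            cur = [0] * (len(song) + 1)
            for j in range(1, len(song) + 1):
                if generated[i - 1] == song[j - 1]:
                    cur[j] = prev[j - 1] + 1
                    best = max(best, cur[j])
            prev = cur
    return best
```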
6 APPLICATIONS

In this section we demonstrate two novel applications of our pop music generation framework. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for the music videos.

6.1 NEURAL DANCING AND KARAOKE

In our first application, we attempt to generate both music and a stickman dancing to it, as well as a sequence of karaoke-like text that people can sing along with. To learn the relationship between music and dance, we downloaded 1 hour of video from the game Just Dance, as well as the MIDI files for the songs included in the video, from different sources. We use the method of Newell et al. (2016) to track single-frame 2D human pose in the videos. We process the single-frame tracking results to ensure left-right body consistency through time, and then use the method of Zhou et al. (2016) to convert the 2D pose sequence into 3D. Example results are shown in Fig. 5. We observe that our pose processing pipeline is able to extract reasonable human poses most of the time. However, the quality is not perfect, due to tracking failures or video effects. We define pose similarity as the average Euclidean distance over all joints, and cluster poses into 456 clusters using the method of Frey & Dueck (2007), as the number of clusters is large.

[Figure 5: Examples from Just Dance and the 3D human pose tracking results. (a) and (b) are success cases, pose tracking fails in (c), and (d) shows a defect in the video which makes tracking difficult.]

We learn to generate a dancing stickman by adding another dancing layer on top of the key layer, just as for drum and chord. We generate one pose at each beat, which is equivalent to 4 time steps or 0.5 seconds in 120 beat-per-minute music. In particular, we predict one of the 456 pose clusters using a linear projection layer followed by a softmax. We use cross-entropy at each time step as our loss function. At inference time, we further apply a moving average to temporally smooth the generated 3D pose sequence.

To learn the relationship between music and lyrics, we collected 51 hours of lyrics data from the internet. This data contains 50 hours of text without music; the remaining 1 hour consists of songs we collected from Just Dance. For the music part, we temporally align each sentence of the lyrics with the MIDI music using the widely used lrc format, which records a time tag at the beginning of every sentence. We select words that appear at least 4 times, which yields a vocabulary of size 3390, including unknown and end-of-sentence tokens. Just as for dance, we generate one word per beat using another lyrics layer on top of the key layer.

6.2 NEURAL STORY SINGING

In this application our aim is to sing a song about a photo. We first generate a story about the photo with the neural storyteller of Kiros et al. (2015) and try to accompany the generated text with music. We utilize the same 1-hour dataset of temporally aligned lyrics and music. We further include the phoneme list of our 3390-word vocabulary, as we also want to sing the story. Starting from the text produced by the neural storyteller, we arrange it into a temporal sequence with 1 beat per word and a short pause at each end-of-sentence, where the pause length is chosen such that the next sentence starts from a new bar. As our dataset is relatively small, we generate the profile conditioned on the text, since the profile has fewer dimensions than the key. This is done by a 2-layer LSTM that takes as input the profile generated at the previous time step concatenated with a one-hot vector of the current word, and outputs the current profile. We then generate the song with our model given the generated profile. The generated melody key is then used to decide the pitch frequency of a virtual singer, assuming the key-to-pitch correspondence of a grand piano. We further constrain the singer's final pitch to always lie in the range of E3 to G4, which we empirically found to be the natural pitch range. We then replace all words outside the vocabulary with the sound "Ooh", and play the rendered singing with the generated music.

7 CONCLUSION AND FUTURE WORK

We have presented a hierarchical approach to pop song generation which exploits music theory in the model design. In contrast to past work, our approach is able to generate multi-track music. Our human studies show the strength of our framework compared to an existing strong baseline. We additionally proposed two new applications: neural dancing & karaoke, and neural story singing.

In this paper, we showed that incorporating knowledge from music theory into the model, as well as capturing multiple aspects of music, results in better-sounding songs. However, generating appealing and interesting music that captures structure, rhythm, and mood is challenging, and there is an exciting road ahead to improve on these aspects in the future.
HJVuvmzNl
The paper is well written, clear and proposes a reasonable model for generating melody accompanied with chords and drums. The evaluation of the model requires clarification or improvements.
6: Marginally above acceptance threshold
The paper presents a recurrent neural network (RNN) for generating pop music. The model is trained on 100 hours of user composed pop songs and video game music and the resulting music is evaluated in user studies against songs produced by the Magenta framework. Overall, I find the paper to be well written and clear. I appreciate the review early on of music theory concepts. I think the paper provides a reasonable support for the connection between how pop music is composed and the hierarchical model for generating melody accompanied with chords and drums. With some post-processing, the model appears to generate pleasant sounding music as judged by users and from a personal perspective of listening to the examples available on the web. While the generated examples on the web sound pleasant, they also sound quite similar and make it hard to judge what the model has learned. There are some open questions regarding evaluation of the model. The paper would benefit from improvements in both user and metric evaluations.

* The Magenta system serves as a lower baseline for evaluation. The study would benefit from an upper baseline by also evaluating against human composed songs. This would help contextualize the findings for both this and future work.
* The user study could be improved by examining other dimensions of appeal, perhaps to gauge diversity through "interestingness" over a collection of samples.
* I think a paired/side-by-side design for the user study seems limited (examples on http://www.cs.toronto.edu/songfrompi/eval/eval.html). A simpler design with rating one sample at a time may have been more appropriate because there is no natural way to pair the songs. The examples from each system used in the experiment should be provided with labels or an answer key so that readers can compare the merits of each of the systems' compositions themselves.
* The authors propose specific metrics for insight into the diversity (longest subsequence and number of repeats). These would be more meaningful with some context, e.g. by comparison with baseline Magenta samples and the training data (as an upper baseline).
* Details of the baseline Magenta system would also benefit the paper.
* No guidance is provided on how to judge the applications of neural singing, dancing and karaoke.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ByBwSPcex
ICLR.cc/2017/conference
2017
Song From PI: A Musically Plausible Network for Pop Music Generation
["Hang Chu", "Raquel Urtasun", "Sanja Fidler"]
We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.
["Applications"]
S1NaYRb4x
7: Good paper, accept
The paper describes a recurrent neural network model for generating pop music in the symbolic domain (i.e. MIDI). The layers of the model each generate part of the output, with the first few layers responsible for generating the melody, and further layers generating drums and chords conditioned on the generated melody. The authors argue that this matches how pop music is usually composed. The model is trained on 100+ hours of pop music in MIDI format. The resulting generated music is compared against that produced by another system using human evaluation, which is probably the only way in which such a system can be fairly evaluated. I appreciate that the authors went through the trouble of setting up these experiments. The RNNs generating the different outputs (i.e. key, duration, chord, melody) are trained in sequence, conditioned on the output of the previous step(s).

I found the text a bit confusing at times as it initially seems to describe a single end-to-end trained model (even if this is never stated explicitly). It only becomes clear later on that the layers are trained in sequence, with additional supervision provided at each stage. This may simply be a personal bias because recent work on hierarchical RNNs that I've read has focused on end-to-end training, but nevertheless it might be useful to mention this more clearly from the get-go.

The post-processing of model samples described in the 2nd paragraph of Section 4.5 seems to affect results quite dramatically (based on the results in Table 1). It seems equally applicable to the outputs of the Magenta system, so it might be interesting to compare this version to Magenta as well, to get an idea of how much it contributes to the improvement over the Magenta system. It would be somewhat disappointing if it ends up accounting for most of the gain.

I am still unconvinced by the experiment described in the last paragraph of Section 5, where subsequences of generated music fragments are searched for in the training data. While I agree with the authors that minor differences in note choice can have profound effects on how the melody is perceived, I still think this is not particularly convincing, and I think drawing the unambiguous conclusion that "our model is able to generate new music" from this experiment is a bit premature.

The additional applications described in Section 6 feel a bit like an afterthought and the datasets used are probably too small for the results to be meaningful. Instead I would have preferred to read about how to reduce the importance of prior knowledge in the design of the model. Considering the venue this work was submitted to, moving towards a more "end-to-end learning" approach (rather than incorporating even more prior knowledge, as the conclusion seems to imply) seems like an interesting direction for future research.

Minor remark: giving the formulas for LSTM is probably a bit of a waste of space, especially if you're not explaining the semantics. A reference is sufficient, and in fact adding a reference to the original LSTM paper is probably a good idea regardless.
3: The reviewer is fairly confident that the evaluation is correct
ByBwSPcex
ICLR.cc/2017/conference
2017
Song From PI: A Musically Plausible Network for Pop Music Generation
["Hang Chu", "Raquel Urtasun", "Sanja Fidler"]
We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.
["Applications"]
ABSTRACTWe present a novel framework for generating pop music. Our model is a hierarchi-cal Recurrent Neural Network, where the layers and the structure of the hierarchyencode our prior knowledge about how pop music is composed. In particular, thebottom layers generate the melody, while the higher levels produce the drums andchords. We conduct several human studies that show strong preference of our gen-erated music over that produced by the recent method by Google. We additionallyshow two applications of our framework: neural dancing and karaoke, as well asneural story singing.1 I NTRODUCTIONNeural networks have revolutionized many fields. They have not only proven to be powerful inperforming perception tasks such as image classification and language understanding, but have alsoshown to be surprisingly good “artists”. In Gatys et al. (2015 ), photos were turned into paintings byexploiting particular drawing styles such as Van Gogh’s, Kiros et al. (2015 ) produced stories aboutimages biased by writing style (e.g., romance books), Karpathy et al. (2016 ) wrote Shakespeareinspired novels, and Simo-Serra et al. (2015 ) gave fashion advice.Music composition is another artistic domain where neural based approaches have been proposed.Early approaches exploiting Recurrent Neural Networks ( Bharucha & Todd (1989 );Mozer (1996 );Chen & Miikkulainen (2001 );Eck & Schmidhuber (2002 )) date back to the 80’s. The main varia-tions between the different models is the representation of the notes and the outputs they produced,which typically encode melody and chord. Most of these approaches were single track, in that theyproduced only one note per time step. The exception is Boulanger-lewandowski et al. (2012 ) whichgenerated polyphonic music, i.e., simultaneous independent melodies.In this paper, we aim to generate pop music, where the melody but also chords and other instrumentsmake up what is typically called a song. We draw inspiration from the Song from byMacdonald1,a piano video on Youtube, where the pleasing music is created from a sequence of digits of . Thisvideo shows both the randomness and the regularity of music. On one hand, since any possible digitsequence is a subset of the digit sequence, this implies that pleasing music can be created evenfrom a totally random base signal. On the other hand, the composer uses specific rules such as AHarmonic Minor scale and harmonies to convert the digit sequence into a music sheet. It is theserules that play the key role in converting randomness into music.Following the ideas of Songs from , we aim to generate both the melody as well as accompanyingeffects such as chords and drums. Arguably, these turn even a not particularly pleasing melody intoa well sounding song. We propose a hierarchical approach, where each level is a Recurrent NeuralNetwork producing a key aspect of the song. The bottom layers generate the melody, while thehigher levels produce drums and chords. This enables the drum and chord layers to compensatefor the melody in order to produce appleasing music. Adopting the key idea from Songs from ,we condition our model on the scale type allowing the melody generator to learn the notes that aretypically played in a particular scale.1https://youtu.be/OMq9he-5HUU1Under review as a conference paper at ICLR 2017We train our model on 100 hours of midi music containing user-composed pop songs and videogame music. 
We conduct human studies with music generated with our approach and compare itagainst a recent approach by Google, showing that our songs are strongly preferred over the baseline.In our human study we also perform an ablation analysis of our model. We additionally show twonew applications: neural dancing and karaoke as well as neural music singing. As part of the firstapplication we generate a stickman dancing to our music and lyrics that can be sung with, while inthe second application we condition on the output of Kiros et al. (2015 ) which writes a story about animage and convert it into a pop song. We refer the reader to http://www.cs.toronto.edu/songfrompi/for our demos and results.2 R ELATED WORKGenerating music has been an active research area for decades. It brings together machines learn-ing researchers that aim to capture the complex structure of music ( Eck & Schmidhuber (2002 );Boulanger-lewandowski et al. (2012 )), as well as music professionals ( Chan et al. (2006 )) and en-thusiasts ( Johnson ;Sun) that want to see how far a computer can get to be a real composer. Real-timemusic generation is also explored for gaming ( Engels et al. (2015 )).Early approaches mostly instilled knowledge from music theory into generation, by using rules ofhow music segments can be stitched together in a plausible way, e.g., Chan et al. (2006 ). On theother hand, neural networks have been used for music generation since the 80’s ( Bharucha & Todd(1989 );Mozer (1996 );Chen & Miikkulainen (2001 );Eck & Schmidhuber (2002 )).Mozer (1996 )used a Recurrent Neural Network that produced pitch, duration and chord at each time step. Unlikemost other neural network approaches, this work encodes music knowledge into the representation.Eck & Schmidhuber (2002 ) was first to use LSTMs to generate both melody and chord. ComparedtoMozer (1996 ), the LSTM captured more global music structure across the song.Like us, Kang et al. (2012 ) built upon the randomness of melody by trying to accompany it withdrums. However, in their model the scale type is enforced. No details about the model are given, andthus it is virtually impossible to compare to. Boulanger-lewandowski et al. (2012 ) propose to learncomplex polyphonic musical structure which has multiple notes playing in parallel through the song.The model is single-track in that it only produces melody, whereas in our work we aim to producemulti-track songs. Just recently, Huang & Wu (2016 ) proposed a 2-layer LSTM that, like Boulanger-lewandowski et al. (2012 ), produces music that is more complex than a single note sequence, andis able to produce chords. The main novelty of our work over existing approaches is a hierarchicalmodel that incorporates knowledge from music theory to build the neural architecture, and producesmulti-track pop music (melody, chord, drum). We also present two novel fun applications.3 C ONCEPTS FROM MUSIC THEORYWe start by introducing the basic notation and definitions from music theory. A note defines thebasic unit that music is composed of. Music follows the 12-tone system, i.e., 12 is the cycle lengthof all notes. The 12 tones are: C,C♯=D♭,D,D♯=E♭,E,F,F♯=G♭,G,G♯=A♭,A,A♯=B♭,B. Abaris a short segment of time that corresponds to a specific number of beats (notes). The boundariesof the bar are indicated by vertical bar lines.Scale is a subset of notes. There are four types of scales most commonly used: Major (Minor ),Har-monic Minor ,Melodic Minor andBlues . 
Each scale type specifies a sequence of relative intervals (or shifts) which act relative to the starting note. For example, the sequence for the scale type Major is 2 → 2 → 1 → 2 → 2 → 2 → 1. Thus, C Major specifies the starting note to be C, and applying the relative sequence of shifts yields: C →(2) D →(2) E →(1) F →(2) G →(2) A →(2) B →(1) C. The subset of notes specified by C Major is thus C, D, E, F, G, A, and B (a subset of seven notes). All scale types have a subset of seven notes except for Blues, which has six. In total we have 48 unique scales, i.e., 4 scale types and 12 possible starting notes. We treat Major and Minor as one type, as for a Major scale there is always a Minor that has exactly the same set of notes. In music theory, this is referred to as Relative Minor.

[Figure 1: Overview of our framework (key, press, chord and drum layers). Only skip connections for the current time step t are plotted.]

A chord is a group of notes that sound good together. Similarly to scale, a chord has a start note and a type defining a set of intervals. There are mainly 6 types of triad chords: Major Chord, Minor Chord, Augmented Chord, Diminished Chord, Suspended 2nd Chord, and Suspended 4th Chord. The Circle of Fifths is often used to produce a chord progression. It maps the 12 chord starting notes to a circle. When changing from one chord to another, moving to a nearby chord on the circle is often preferred, as this forms a strong chord progression that produces the sense of harmony.

4 HIERARCHICAL RECURRENT NETWORKS FOR POP MUSIC GENERATION

We follow the high-level idea behind the Song from π to define our model. In particular, we generate music with a hierarchical Recurrent Neural Network where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. We first outline the model, and describe the details and justifications for our choices in the subsections that follow.

We condition our generation on the scale type, as this helps the model to pick up the regularities in pop songs. We encode melody with two random variables at each time step, representing which key is being played (the key layer) and the duration that the key will be pressed (the press layer). The melody is generated conditioned on the scale, which does not vary across the song, as is typically the case in pop music. We assume the drums and the chords are independent given the melody. Thus, conditioned on the melody, at each time step we generate the chord (the chord layer) as well as the drums (the drum layer). The output at all layers yields the final song. We refer the reader to Fig. 1 for an illustration of our hierarchical model.

4.1 THE ROLE OF SCALE

It is known from music theory that while in principle each song has 12 tones to choose from, most of the notes in fact use only the six (for Blues) or seven (for other scales) tone subsets specified by the scale rule. We found that by conditioning the music generator on scale it captures these regularities more easily. However, we do not enforce the notes to be generated from the subset, and allow our model to generate notes outside the scale.
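To make the scale rule concrete, the following minimal Python sketch derives a scale's tone subset from its interval sequence; these subsets are reused below when categorizing songs. The interval patterns for Harmonic Minor, Melodic Minor and Blues are standard music theory rather than quoted from this paper, so treat them as assumptions.

```python
TONES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

SCALE_INTERVALS = {
    "Major":          [2, 2, 1, 2, 2, 2, 1],
    "Harmonic Minor": [2, 1, 2, 2, 1, 3, 1],
    "Melodic Minor":  [2, 1, 2, 2, 2, 2, 1],
    "Blues":          [3, 2, 1, 1, 3, 2],
}

def scale_notes(start_note, scale_type):
    """Apply the relative shifts to the start note to get the tone subset."""
    idx = TONES.index(start_note)
    notes = [start_note]
    for shift in SCALE_INTERVALS[scale_type][:-1]:  # last shift returns to start
        idx = (idx + shift) % 12
        notes.append(TONES[idx])
    return notes

print(scale_notes("C", "Major"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
```

Note that the Blues pattern has five shifts, yielding the six-note subset mentioned above, while the other types yield seven notes.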
We confirm the above musical fact by analysing over 100 hours of pop song music from the midi man dataset. Since scale is defined relative to a starting note, we first try to factor out its influence and normalize all songs to have an identical start note. To identify the scale of a song, we compute the histogram over the 12 tones and match it against the 48 tone subsets of the 4 scale types with 12 different start notes. We then normalize all songs to have start note C by applying a constant shift to all notes. This allows us to categorize any song into one of the 4 scale types. Since this shift affects all notes at once, it does not affect how the song sounds (its harmony). Our analysis shows that for all notes in all Major scale songs, 94.66% are within the tone subset. For Harmonic Minor, Melodic Minor, and Blues, the percentage of notes that belong to the main tone set is 87.16%, 85.11%, and 90.93%, respectively.

[Figure 2: Distribution of the within-scale note ratio for the four scale types. x-axis: percentage of tones within the scale type's tone set; y-axis: percentage of songs of the scale type. Panels (a)-(d) show Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively.]

We refer the reader to Fig. 2, where the x-axis denotes the percentage of within-scale notes of a song, and the y-axis indicates how many songs in the dataset have that percentage. Note that the majority of the notes follow the scale rule. Furthermore, different scale types have different inlier distributions. We thus represent scale with a single random variable s ∈ {1, ..., 4} which is fixed for the whole song, and condition the model on it.²

²For readers with a musical background, the Twelve-Tone Serialism technique (Schoenberg & Newlin (1951)) prevents emphasis of any one tone. However, our data analysis indicates that pop music is not influenced by it.
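The scale identification and normalization just described admit a short sketch. The paper does not spell out the matching criterion between a song's tone histogram and the 48 candidate subsets; scoring each candidate by the histogram mass it covers, as below, is our assumption, as are the function names.

```python
import numpy as np

def identify_and_normalize(midi_notes, scale_subsets):
    """`midi_notes`: list of MIDI pitches for one song. `scale_subsets`:
    dict mapping (scale_type, start_pitch_class) -> set of pitch classes,
    48 entries in total (e.g. built with scale_notes above)."""
    hist = np.bincount([n % 12 for n in midi_notes], minlength=12)
    # score each candidate subset by the histogram mass it covers
    (scale_type, start), mass = max(
        ((key, hist[sorted(subset)].sum()) for key, subset in scale_subsets.items()),
        key=lambda kv: kv[1],
    )
    shifted = [n - start for n in midi_notes]  # constant shift -> start note C
    in_scale_ratio = mass / max(hist.sum(), 1)  # quantity plotted in Fig. 2
    return scale_type, shifted, in_scale_ratio
```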
4.2 TWO-LAYER RNN FOR MELODY GENERATION

We represent the melody with two random variables per time step: which key is pressed, and the duration of the press. We use an RNN to generate the keys conditioned on the scale. Then, conditioned on the output of the key layer, a second RNN generates the duration of the press at each time step.

In particular, we model the key layer with a two-layer LSTM (Hochreiter & Schmidhuber (1997)) with a 512-dimensional hidden state, which outputs a note (key) at each time step. Note that we condition on scale s, thus we have a different set of weights for each scale. We only allow notes between C3 and C6, as notes outside this range are usually too low or too high to sound good. We remind the reader that given a scale, seven (or six for Blues) of the twelve notes (per octave) are statistically more plausible; however, we allow the model to choose from all 12. This results in a 37-dimensional output, as there are 36 possible notes corresponding to 3 octaves with 12 notes per octave, plus silence. Let h_key^t be the hidden state of the second key decoder layer at time t. We compute the probability of each key using the softmax:

P(y_key^t) ∝ exp(v_{y_key^t} · h_key^t)    (1)

where v_{y_key^t} is the row of V (the output embedding matrix of notes) corresponding to note y_key^t.

As input to the LSTM we use a vector that concatenates multiple features: a one-hot encoding of the previously generated note y_key^{t-1}, Lookback features, and the melody profile. The Lookback features were proposed by Google Magenta (Waite et al.) to make it easier for the model to memorize recently produced notes and potentially repeat them. They include skip connections from two and one bar ago (a bar is 8 consecutively played notes), i.e., y_key^{t-16} and y_key^{t-8}. They also contain two additional features indicating whether the last generated key has been copied from one or two bars ago, i.e., 𝟙(y_key^{t-1}, y_key^{t-1-8}) and 𝟙(y_key^{t-1}, y_key^{t-1-16}). They further add a 5-dimensional feature indicating a binary encoding of the current time t. This helps the model keep track of where in a 4-bar range it is, and thus produce music accordingly.

In addition, we introduce a new feature which we refer to as the melody profile. Intuitively, the profile represents the high-level music flow. To get the profile for each song, we compute the local note histogram at each time step with a width of two bars, and cluster all local histograms within the song into 10 clusters via k-means. We order the 10 clusters by mean note, from low to high, as clusters 1 to 10, and apply moving averages on the cluster-id sequence to encourage local smoothness. This results in a 10-dimensional one-hot vector representation of the cluster id for each time step. This additional information allows the user to set the melody's ups and downs of the song.

The keys alone are not sufficient to describe how the melody is performed. Additionally, we also need to know the duration that each key needs to be pressed for. Towards this goal, conditioned on the melody, we generate the duration of each key with a two-layer LSTM with a 512-dimensional hidden state. We represent the duration of pressing as a forward-counting sequence that is conditioned on the generated melody. The press outputs 1 when a new key is pressed, and sequentially outputs 2, 3, 4 and so on as the key is held. When the current key is released, the press counter is reset to 1. Compared to the event on-off representation of Waite et al., our representation learns the melody flow and how to press separately. This is important, as Waite et al. has extremely unbalanced output distributions dominated by the repeat-of-holding event. We represent press y_prs^t as an 8-dimensional one-hot vector. The input to our LSTM is y_prs^{t-1}, concatenated with the 37-dimensional one-hot encoding of the melody key y_key^t.

4.3 CHORD AND DRUM RNN LAYERS

We studied all existing chords in our 100 hours of pop music. Although in principle a chord can be any arbitrary combination of multiple notes, we observed that in the actual music data 99.19% of the chords belong to one of 72 chord classes (6 types × 12 start notes). Fig. 3 shows the correlation between the melody's tone and the starting note of the chord playing at the same time. It can be seen that chord is strongly correlated with melody. These two findings inspire our design. We thus represent chord y_chd^t as a one-hot encoding with 72 classes, and predict it using a two-layer LSTM with a 512-dimensional hidden state. We generate one chord at each time step. The input is y_chd^{t-4} concatenated with y_key^{t-3:t}.

[Figure 3: Co-occurrence of tones in melody (y-axis) and chord (x-axis). Panels (a)-(d) show Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively.]

We look at our music dataset and find all unique drum patterns with a duration of half a bar. We then compute the histogram of all the patterns. This forms a long-tail distribution, where 94.60% comes from the top 100 common patterns. We generate drums conditioned on the key layer using a two-layer LSTM with 512-dimensional hidden states. Drum y_drm^t is represented as a one-hot encoding of the 100 unique half-bar-long drum patterns. The input is y_drm^{t-4} concatenated with the notes from the previous three time steps, y_key^{t-3:t}.
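Before moving on, the melody profile introduced in Sec. 4.2 can be sketched as follows. The window handling, the smoothing width, and the use of scikit-learn's k-means are our assumptions, as these details are not given above.

```python
import numpy as np
from sklearn.cluster import KMeans

def melody_profile(pitch_classes, bar_len=8, n_clusters=10, smooth=4):
    """Local note histograms over a two-bar window, clustered with
    k-means; cluster ids are reordered by mean note (low to high) and
    smoothed with a moving average. `pitch_classes`: ints in 0..11."""
    T = len(pitch_classes)
    w = 2 * bar_len  # two-bar window
    hists = np.stack([
        np.bincount(pitch_classes[max(0, t - w // 2): t + w // 2], minlength=12)
        for t in range(T)
    ]).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(hists)
    # relabel clusters so ids run from lowest to highest mean note
    mean_note = (km.cluster_centers_ * np.arange(12)).sum(1) / (
        km.cluster_centers_.sum(1) + 1e-8)
    relabel = np.empty(n_clusters, dtype=int)
    relabel[np.argsort(mean_note)] = np.arange(n_clusters)
    ids = relabel[km.labels_]
    # moving average over the id sequence for local smoothness
    ids = np.rint(np.convolve(ids, np.ones(smooth) / smooth, mode="same"))
    return ids.astype(int)  # one cluster id per step; one-hot encoded downstream
```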
4.4 LEARNING

We use cross-entropy as our loss function to train each layer. We follow the typical training strategy where we make predictions at each layer and time step, but feed in ground-truth information to the next. This effectively decomposes training, and allows us to train all layers in parallel. We use the Adam optimizer, a learning rate of 2e-3 and a learning rate decay of 0.99 after each epoch, for 10 epochs.

4.5 MUSIC SYNTHESIS: PUTTING ALL THE OUTPUTS TOGETHER

To synthesize music we first randomly choose a scale and a profile x_prf. For generating x_prf, we randomly choose one cluster id with a random duration, and repeat until we get the desired total length of the music sequence. We then perform inference in our model conditioned on the chosen scale, and use x_prf as input to our key layer. At each time step, we sample a key according to P(y_key^t). We encode it as a one-hot vector and pass it to the press, chord and drum layers. We sample the press, chords and drums at each time step in a similar fashion.

[Figure 4: Example of our music generation. From top to bottom: melody, chord and drum, respectively.]

Before putting the outputs across layers together, we further adjust the generated sequences at the bar level. For melody, we first check at each bar whether the first step is a continuation of a previous note or silence. If it is the latter, we find the first newly pressed note within the bar and move it to the beginning of the bar. We do the same for the windows of two half-bars as well as the four quarter-bars. This makes the melody more likely to be on the beat, and it generally sounds better. We verify this in our experiments.

For chord, we generate one chord at each half bar, which covers the majority of all single-step chord generations. Furthermore, we incorporate the rule of chord progression on the Circle of Fifths as pairwise smoothness terms between chords, and compute the final chord sequence using dynamic programming. For drums, we generate one pattern at each half bar.

Our model generates with scale starting note C, and then applies a constant shift to generate music with other starting notes. Besides scale, which instrument to use is also customizable. However, we simply set all instruments to grand piano in all experiments, as the effect and musical meaning of different instrument combinations is beyond the scope of this paper.

5 EXPERIMENTS

To train our model, we took 100 hours of pop music from midi man, which consists of user-composed pop songs and video game music. In our generation, we always use 120 beats per minute with 4 time steps per beat. However, songs in the dataset can have arbitrary speed. To neutralize the effect of this, we detect the most frequent interval between two adjacent notes for each song, and iteratively divide or multiply this interval by 2 until it falls in the range between 0.25s and 0.5s. We use this as a measure of the song's beat duration. We then adjust the song's temporal axis so that all songs have the same beat duration of 0.5s.

A MIDI file can be separated into different channels/tracks, where the 9th channel is specifically reserved for drums. We categorize the remaining non-drum tracks into melody, chord, and else, by simply setting thresholds on the average number of unique notes within a bar and the average number of note changes within a bar, as chords are by definition repetitive. Fig. 4 shows an example of our music generation.
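The speed-normalization step described above admits a short sketch; the interval rounding granularity and the tie-breaking in the mode computation are assumptions on our part.

```python
from collections import Counter

def beat_duration(onsets):
    """Most frequent inter-onset interval (seconds), halved or doubled
    until it lies in [0.25, 0.5]."""
    intervals = [round(b - a, 3) for a, b in zip(onsets, onsets[1:]) if b > a]
    iv = Counter(intervals).most_common(1)[0][0]
    while iv > 0.5:
        iv /= 2
    while iv < 0.25:
        iv *= 2
    return iv

def normalize_speed(onsets, target=0.5):
    """Rescale the song's time axis so its beat duration becomes 0.5s."""
    iv = beat_duration(onsets)
    return [t * target / iv for t in onsets]
```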
To evaluate the quality of our music generation, we conduct a human survey with 27 participants. All subjects are university students who did not have any prior knowledge about the content of our project. In the survey, participants are presented with several pairs of 30-second music clips, and are asked to vote which clip in the pair sounds better. We gave no other information about what they are listening to. They are also allowed to submit a neutral vote in case they cannot decide between the two choices. In our study, we consider three cases: our full method versus Magenta (Waite et al.), our method with melody only versus Google Magenta (Waite et al.), and our method versus our method without the temporal alignment described in Sec. 4.5. We randomly generated 10 songs per method and randomly shuffled each pair. For the Magenta baseline we used its Lookback version, which was the latest version at the time of our submission.

Table 1: Human evaluation of music generated by different methods: ours and Waite et al.'s Magenta. Ours-MO and Ours-NA are short for Ours Melody Only and Ours No Alignment. We allowed neutral votes, thus the sum of each pair is less than 100%.

Method       Ours    Magenta  |  Ours-MO  Magenta  |  Ours    Ours-NA
% of votes   81.6%   14.4%    |  69.6%    13.6%    |  75.2%   12.0%

As shown in Table 1, most participants prefer songs produced by our method compared to Magenta. Participants also made comments such as "music sounds better with percussion than piano alone", and "multiple instruments with continuous play is much better". This confirms that our multi-layer generation improves music quality. A few participants also pointed out that "drums sound too different and do not participate to the melody perfectly", which indicates that further improvements can still be made. In the second comparison, we study whether the quality improvement of our method is only caused by adding chords and drums, or is also related to our two-layer melody generation with alignment. It can be seen that without chords and drums the score drops, as expected, but is still much higher than the Magenta baseline. This is because our method produces "less recursion and silence", and "faster and more accurate tempo", as mentioned by the participants. In the last comparison, most participants prefer our full method over the no-alignment version, since "beats are more subtle and better timed". This confirms the usefulness of the temporal alignment. We performed significance tests on the evaluation results in Table 1. All comparisons passed the significance test at significance level 5%. The lowest alpha values to reject the null hypothesis are 1e-19, 1e-14, and 1e-19, respectively. Further experimental results of removing the music scale in our method and adding temporal alignment to the baseline can be found on our project page.
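As an illustration, the significance of such paired preference votes can be checked with a two-sided binomial (sign) test on the non-neutral votes, under the null hypothesis that both methods are equally preferred. The paper does not state which test was actually run, and the vote counts below are hypothetical, since only percentages are reported above.

```python
from scipy.stats import binomtest

def vote_significance(votes_a, votes_b):
    """p-value of a two-sided binomial test on the non-neutral votes."""
    return binomtest(votes_a, votes_a + votes_b, p=0.5).pvalue

# Hypothetical counts for the first comparison: 81.6% vs 14.4% of,
# say, 270 pairwise judgments (27 participants x 10 pairs, assumed).
print(vote_significance(int(0.816 * 270), int(0.144 * 270)))
```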
To examine the suitability of the four scale types we use, we collected the list of all existing musical scales from Wikipedia and measured the scale distribution of our dataset. 37.8% of the data belongs to our four scales, 47.7% belongs to Acoustic, Algerian, Lydian, Adonai Malakh, and Ukrainian, while 14.5% belongs to the remaining 31 uncommonly seen scales such as Insen, Iwato, Yo, and Enigmatic. We also found that the five scales that account for 47.7% are each one or two degrees away from one of our used scales (all notes are the same except one being one or two steps away). This experiment shows that even in the most rigorous musical setting, at least 85.5% of online songs are very close to the four scales that we use.

Finally, we study our model's capability to generate new music. Towards this goal, we generated 100 sequences, each 50 seconds long, using different random initializations. We perform two evaluations. First, for each sequence, we search for the longest sub-sequence of keys that matches part of the training data, and record its length. This evaluates how much the model copies the training data. Secondly, we break each generated melody into segments of 2 bars in length (inspired by the common definition of music plagiarism). We then compare each segment to all segments in the rest of the 100 generated songs, and record the number of repetitions. This evaluates how much the model repeats itself. For comparison, we run the same evaluation for the Magenta baseline and for human-composed music. Table 2 reports the results. It can be seen that our method performs similarly to Magenta in terms of copying (sub-seq). It is somewhat surprising that human composers in fact tend to copy more from other songs, which indicates that both generation approaches could be further relaxed in terms of copying. Our method is less likely to generate recurring melodies (repeat) compared to Magenta, and is closer to the statistics of human-produced songs.

Table 2: Evaluation of the longest matching sub-sequence with the training data, and of self-repetition.

           Human   Magenta   Ours
sub-seq    7.06    4.39      4.65
repeat     4.04    17.08     2.33
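The sub-seq metric can be illustrated with the classic longest-common-substring dynamic program below; the paper's exact matching procedure may differ.

```python
def longest_match(generated, training):
    """Length of the longest contiguous sub-sequence of `generated`
    that also occurs in `training` (O(mn) time, O(n) memory)."""
    m, n = len(generated), len(training)
    prev, best = [0] * (n + 1), 0
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            if generated[i - 1] == training[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the current match
                best = max(best, cur[j])
        prev = cur
    return best
```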
6 APPLICATIONS

In this section we demonstrate two novel applications of our pop music generation framework. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for the music videos.

6.1 NEURAL DANCING AND KARAOKE

In our first application, we attempt to generate both music and a stickman dancing to it, as well as a sequence of karaoke-like text that people can sing along with. To learn the relationship between music and dance, we download 1 hour of video from the game Just Dance, as well as the MIDI files for the songs included in the video from different sources. We use the method of Newell et al. (2016) to track single-frame 2D human pose in the videos. We process the single-frame tracking result to ensure left-right body consistency through time, and then use the method of Zhou et al. (2016) to convert the 2D pose sequence into 3D. Example results are shown in Fig. 5. We observe that our pose processing pipeline is able to extract reasonable human poses most of the time. However, the quality is not perfect, due to tracking failures or video effects.

[Figure 5: Examples from Just Dance and the 3D human pose tracking result. (a) and (b) are success cases, pose tracking fails in (c), and (d) shows the defect in the video which makes tracking difficult.]

We define pose similarity as the average Euclidean distance over all joints, and cluster the poses into 456 clusters. We used the method of Frey & Dueck (2007), as the number of clusters is large. We learn to generate a stickman dancing by adding another dancing layer on top of the key layer, just like for drum and chord. We generate one pose at each beat, which is equivalent to 4 time steps or 0.5 seconds in 120 beat-per-minute music. In particular, we predict one of the 456 pose clusters using a linear projection layer followed by a softmax. We use cross-entropy at each time step as our loss function. At inference time, we further apply a moving average to temporally smooth the generated 3D pose sequence.

To learn the relationship between music and lyrics, we collect 51 hours of lyrics data from the internet. This data contains 50 hours of text without music, and the remaining 1 hour consists of songs we collected from Just Dance. For the music part, we temporally align each sentence in the lyrics with the MIDI music by using the widely used lrc format, which records a time tag at the beginning of every sentence. We select words that appear at least 4 times, which yields a vocabulary size of 3390, including unknown and end-of-sentence tokens. Just as for dance, we generate one word per beat using another lyrics layer on top of the key layer.

6.2 NEURAL STORY SINGING

In this application our aim is to sing a song about a photo. We first generate a story about the photo with the Neural Storyteller (Kiros et al. (2015)) and try to accompany the generated text with music. We utilize the same 1-hour dataset of temporally aligned lyrics and music. We further include the phoneme list of our 3390-word vocabulary, as we also want to sing the story. Starting from the text produced by the Neural Storyteller, we arrange it into a temporal sequence with 1 beat per word and a short pause for end-of-sentence, where the pause length is chosen such that the next sentence starts from a new bar. As our dataset is relatively small, we generate the profile conditioned on the text, which has fewer dimensions than the key. This is done by a 2-layer LSTM that takes as input the generated profile at the last time step concatenated with a one-hot vector of the current word, and outputs the current profile. We then generate the song with our model given the generated profile. The generated melody key is then used to decide on the pitch frequency of a virtual singer, assuming the key-to-pitch correspondence of a grand piano. We further constrain the singer's final pitch to always be in the range of E3 to G4, which we empirically found to be the natural pitch range. We then replace all words outside the vocabulary with the sound Ooh, and play the rendered singing with the generated music.

7 CONCLUSION AND FUTURE WORK

We have presented a hierarchical approach to pop song generation which exploits music theory in the model design. In contrast to past work, our approach is able to generate multi-track music. Our human studies show the strength of our framework compared to an existing strong baseline. We additionally proposed two new applications: neural dancing & karaoke, and neural story singing. In this paper, we show that incorporating knowledge from music theory into the model, as well as capturing multiple aspects of music, results in better-sounding songs. However, generating appealing and interesting music that captures structure, rhythm, and mood is challenging, and there is an exciting road ahead to improve on these aspects in the future.
SJ9RkQV4g
Nice idea, lacking thorough evaluation.
4: Ok but not good enough - rejection
In this paper, the authors build music-theoretical structure directly into a music-generating LSTM. Even though such simple rules should be learnable from data, this surely is a neat idea for the limited-data regime. Further, it seems like a desirable model when thinking about musicians, for instance, wanting to train on their own (and thus limited) source material. They consider a dataset of 100 hours of MIDI and add multiple priors drawn from basic music theory. The priors as such are OK, but some of them seem rather heuristic and should, in my opinion, be learned from data, and it should be discussed how the performance changes if you remove them. Further, the authors evaluate their study on an artificial benchmark, consisting of a behavioral experiment where 27 subjects judge songs generated by Magenta and their approach in a questionable side-by-side evaluation. Using this as a performance criterion is problematic, as no details about the subjects are given and no attempt is made to assess the statistical significance of such results, let alone to discuss the difficulty of pairing the songs. Further, I assume there are standard behavioral batteries concerned with assessing music preferences that should have been used, or at least addressed. Introducing the neural karaoke and dancing is fun, but does not have much scientific value at this point, as it does not seem to work in a meaningful way. I would recommend either improving the results of the latter drastically or adding it as an extra to a blog post and removing it from the paper. It is good that the authors make an attempt to encode general prior knowledge into their architecture, but I am not convinced by the results and the heuristic choices being made. Further, it is still not 100% clear to me how the weighted probability distribution is constructed for the scales and how strong the prior it effectively incorporates is. If it is very strong, it is not surprising to me that the songs sound relatively coherent, as in-scale playing with rejected outliers has to sound somewhat coherent. I am not familiar enough with the Magenta baseline system, and it is problematic that the baseline is not explained well. If the baseline does not take explicit scale priors into account, it does make sense that it sounds less coherent by definition. This has to be discussed, and the effect of the introduced priors has to be evaluated. Finally, the question remains whether this will generalize to datasets with more than 4 dominant scales, and why the authors chose their thresholds the way they did. Does the model perform worse if one chooses to include more scales? How do you know that the heavy tail of such a distribution is not desirable and important for natural-sounding music; did you investigate this? The multitrack idea is great. However, I am not convinced it works in this case. The results sound more like assigning a couple of notes in the melodies to rhythmic sounds, but they do not interact with the melody; they just move along, as if they were part of the melody. This is not how rhythm works in music in most cases. Pro: + Incorporating general musical knowledge into the learned network is a good idea and non-trivial. + The idea to introduce a behavioral measure for the quality of the generated samples is useful, as music is very subjective. + The multitrack idea seems useful and a clear step beyond Magenta, as far as I understand.
Con: - However, the multitrack part of the architecture does not seem to work properly; rhythm does not seem to behave differently from melodic movement. - The music excerpts sound very simplistic and similar. - The pairwise evaluation metric with 27 supposedly "random" subjects is not very meaningful and very likely not significant. - Evaluation of generative models is difficult, but the authors could have done better. - The 4 bins of the random scale variable seem ad hoc. I have to emphasize that I like the ideas introduced in this paper, but I am not convinced by the way they are presented and evaluated. I would like to suggest this paper for workshop publication.
3: The reviewer is fairly confident that the evaluation is correct